ISO 42001 Annex A
Annex A is the core operational component of ISO/IEC 42001. While the clauses of the standard define high-level requirements for governing artificial intelligence, Annex A translates those requirements into concrete, actionable AI controls that organizations can implement, tailor, or formally justify excluding.
In simple terms, Annex A answers the question: “What specific controls do we put in place to manage AI risks in practice?”
Annex A contains a structured catalogue of AI-specific controls designed to cover the full lifecycle of AI systems: 38 controls grouped into nine thematic areas.
Annex A control families:
Organizations are expected to select relevant controls from each family, based on risk, and document their decisions in the Statement of Applicability (SoA).
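The SoA is ultimately a document, but the select-and-justify step is easy to keep in a machine-readable form so exclusions can be checked automatically. A minimal sketch in Python; the control IDs, titles, and justifications are illustrative placeholders, not quoted from ISO/IEC 42001:

```python
# Minimal Statement of Applicability (SoA) record sketch.
# Control IDs and wording below are illustrative, not taken from the standard.
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str     # Annex A control reference
    title: str
    applicable: bool
    justification: str  # why the control is included, or why it is excluded

soa = [
    SoAEntry("A.x.1", "AI governance ownership", True,
             "Customer-facing AI in scope; governance roles are defined."),
    SoAEntry("A.x.2", "Limits on autonomous operation", False,
             "No fully autonomous AI systems in scope; exclusion signed off."),
]

def excluded(entries):
    """Every excluded control must carry a documented justification."""
    return [e for e in entries if not e.applicable]

for e in excluded(soa):
    print(f"{e.control_id}: excluded - {e.justification}")
```

Keeping the record structured makes it trivial to assert, for example, that no control is excluded with an empty justification before the SoA is published.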
1. AI governance and leadership controls: These controls ensure that AI is governed at an organizational level, not just within technical teams. They focus on leadership involvement, decision authority, and alignment between AI use and business objectives.
What organizations are expected to do:
- Define who owns AI governance across the organization
- Establish escalation paths for AI-related issues
- Ensure AI initiatives align with organizational values and strategy
2. AI risk and impact assessment controls: These controls ensure that risks, and impacts on the people affected, are identified and mitigated before AI systems go live.
What organizations are expected to do:
- Perform AI risk assessments before deployment
- Conduct AI impact assessments for systems affecting people
- Define mitigation actions for identified risks
3. Transparency and documentation controls: These controls make AI systems understandable and traceable to those who build, operate, and audit them.
What organizations are expected to do:
- Document AI system purpose, logic, and limitations
- Maintain records of data sources and model versions
- Ensure traceability of AI decisions where feasible
4. Human oversight and accountability controls: These controls keep humans responsible for, and able to intervene in, AI-driven decisions.
What organizations are expected to do:
- Define who is accountable for AI decisions
- Ensure humans can intervene or override AI outputs
- Prevent fully autonomous operation where inappropriate
5. Validation and testing controls: These controls confirm that AI systems perform as intended before they are put into use.
What organizations are expected to do:
- Validate AI models before use
- Test performance against defined criteria
- Document assumptions and limitations
6. Data quality and management controls: These controls govern the data that AI systems are trained on and operate with.
What organizations are expected to do:
- Define data quality standards
- Track data sources and provenance
- Address bias and relevance in datasets
7. Monitoring, security, and resilience controls: These controls keep deployed AI systems reliable and secure over time.
What organizations are expected to do:
- Monitor AI performance continuously
- Detect model drift or unexpected behavior
- Ensure systems are resilient and secure
8. Incident response and stakeholder engagement controls: These controls define how the organization responds when AI systems fail, cause harm, or draw regulatory attention.
What organizations are expected to do:
- Define AI incident response processes
- Handle complaints related to AI decisions
- Engage regulators when required
9. Audit and continual improvement controls: These controls verify that the AI governance framework itself keeps working as intended.
What organizations are expected to do:
- Conduct internal audits of AI controls
- Review governance effectiveness regularly
- Implement corrective actions
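The validation and testing expectations above ("validate AI models before use", "test performance against defined criteria") imply a release gate with explicit, documented acceptance criteria. A minimal sketch; the metric names and thresholds are example assumptions, not values from the standard:

```python
# Illustrative pre-deployment gate: a model release is blocked unless every
# documented acceptance criterion is met. Metric names and thresholds are
# example assumptions, not ISO/IEC 42001 requirements.
ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,               # minimum accuracy on the held-out test set
    "recall_minority_class": 0.80,  # bias-related criterion
}

def validate_for_release(metrics: dict) -> tuple[bool, list[str]]:
    """Return (approved, failures) for a model's metrics against the criteria."""
    failures = [
        f"{name}: got {metrics.get(name, 0.0):.2f}, need >= {threshold:.2f}"
        for name, threshold in ACCEPTANCE_CRITERIA.items()
        if metrics.get(name, 0.0) < threshold
    ]
    return (not failures, failures)

# A model that is accurate overall but misses the fairness criterion is blocked.
approved, failures = validate_for_release({"accuracy": 0.93,
                                           "recall_minority_class": 0.71})
assert not approved
assert failures == ["recall_minority_class: got 0.71, need >= 0.80"]
```

The failure messages double as audit evidence: they record which documented criterion blocked the release and by how much.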
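The monitoring expectations above ("monitor AI performance continuously", "detect model drift") are often implemented by statistically comparing live input data against a training-time reference. A minimal dependency-free sketch using the population stability index (PSI); the 0.2 alert threshold is a common rule of thumb, not a requirement of the standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. training
    data) and a live sample of the same numeric feature. Larger = more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [0.1 * i for i in range(100)]           # stand-in for training data
live_stable = [0.1 * i for i in range(100)]         # same distribution
live_shifted = [0.1 * i + 5.0 for i in range(100)]  # drifted distribution

assert psi(reference, live_stable) < 0.1    # no drift signal
assert psi(reference, live_shifted) > 0.2   # flag for investigation
```

In practice such a check would run on a schedule per monitored feature, with breaches feeding the incident-response process rather than silently retraining the model.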