Internet Security Auditors Blog

ISO 42001 vs AIR: how to integrate the AI management system into European regulatory frameworks

Written by José Antonio Prieto | May 5, 2026 9:52:48 AM

A series of questions is coming up more and more often in companies’ management committees.

Does our organization need Artificial Intelligence (AI)?
Will we be able to demonstrate that we are doing things right with AI?
What advantages and disadvantages does AI have for our business? 

Any organization that works with AI must know the two major frameworks that are defining the rules of the game.

 The Artificial Intelligence Regulation (AIR) [3] is a European law that establishes a common legal framework for the development, commercialization, and use of AI systems. It has a risk‑based approach: it takes into account unacceptable practices, regulates high‑risk systems, and imposes requirements for safety, transparency, and human oversight, with the aim of protecting fundamental rights, ensuring safety, and at the same time fostering technological innovation within the European market [1]. That is, it defines what must be complied with and is mandatory.

The ISO 42001:2023 Standard [8][2] is an international framework that establishes the requirements to create, implement, maintain, and improve an AI Management System (AIMS) within an organization, with the aim of ensuring that the development and use of AI systems are responsible, reliable, and aligned with ethical and regulatory principles. It is based on the harmonized structure of ISO standards established in Annex SL [4] and on a risk‑management approach, covering aspects such as governance, transparency, human oversight, safety, data quality, and continuous improvement. That is, it defines how to set up an AI management system and is voluntary in nature.


The integration of the two should not consist of implementing them separately, but of using the ISO 42001:2023 framework to demonstrate and sustain the compliance required by the AIR. Later in this article we will look at one way to approach this task, but first let us clear up a misconception held by quite a few company executives.

The AIR affects your organization because it does not regulate the technology itself; it regulates the impact of its use.

There is a fairly widespread idea that the AIR is a regulation designed only for those who develop AI systems, and this is not true. Although providers, especially in high‑risk systems, bear very demanding technical obligations, its application also reaches organizations that use third‑party tools, especially when they do so in sensitive environments.

The key lies in the classification. To begin with, there are practices that are directly prohibited. There are also high‑risk systems that require a conformity assessment before they are placed on the market or put into service. Lastly, there are others that are subject to more limited obligations. In any case, the Regulation clearly distinguishes between the obligations of the provider and those of the deployer. Yes, your organization!

That is why compliance does not begin in theory, but in something much more basic: knowing which AI systems are being used or are going to be used within the organization. Because, as is well known, you cannot control or protect what you do not know you have [9].

Convergence between the Regulation and the Standard

As mentioned before, it is necessary to find the points of intersection between the AIR and the Standard: done correctly, this generates synergies that can sustain both frameworks with minimal effort. Let us look at some of the points at which the two frameworks converge:

AIR, Article 9: Risk management system

A risk management system shall be established, implemented, documented and maintained in relation to high‑risk AI systems.

The risk management system shall be understood as a continuous iterative process planned and carried out throughout the entire lifecycle of a high‑risk AI system, requiring systematic periodic reviews and updates. It shall consist of the following stages:
[…]
ISO 42001:2023, Clause 6: Planning
6.1 Actions to address risks and opportunities
[…]

6.1.2 AI risk assessment  
The organization shall define and establish an AI risk assessment process that:
[…]
These two requirements actually correspond to the same obligation viewed from two different angles, and they generate the same evidence for both the internal auditor and any regulator.
AIR, Article 10: Data and data governance

[…]

2. The training, validation and testing datasets shall be subject to appropriate data‑governance and data‑management practices suitable for the intended purpose of the high‑risk AI system. Such practices shall focus in particular on the following: […]

3. The training, validation and testing datasets shall be relevant, sufficiently representative and, to the greatest extent possible, free of errors and complete in view of their intended purpose. They shall also have appropriate statistical properties, for example, where applicable, with respect to the persons or groups of persons in relation to whom the high‑risk AI system is intended to be used. The datasets may meet these characteristics for each dataset individually or for a combination of them.

[…]

5. Insofar as it is strictly necessary to ensure the detection and correction of biases associated with high‑risk AI systems in accordance with paragraph 2, points (f) and (g), of this Article, providers of such systems may exceptionally process special categories of personal data, provided that they offer appropriate safeguards regarding the fundamental rights and freedoms of natural persons. In addition to the provisions laid down in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all of the following conditions must be met for such processing to take place: […]

ISO 42001:2023, Annex A: Data for AI systems

A.7.2 – Data for the development and improvement of the AI system  
The organization shall define, document and implement data‑management processes related to the development of AI systems.

A.7.3 – Data acquisition  
The organization shall determine and document the details regarding the acquisition and selection of the data used in AI systems.

A.7.4 – Data quality for AI systems  
The organization shall define and document the data‑quality requirements and ensure that the data used to develop and operate the AI system meet those requirements.

A.7.5 – Data provenance  
The organization shall define and document a process to record the provenance of the data used in its AI systems throughout the data lifecycle and the AI system lifecycle.

It can be seen that the AIR requires certain criteria for representativeness and bias mitigation, while the A.7 controls in Annex A provide a methodology for formalizing the traceability of data origin, the preparation criteria, and the corresponding validation, all within a procedure that can be reproduced over time.
AIR, Annex IV: Technical documentation referred to in Article 11(1)

The technical documentation referred to in Article 11(1) shall include at least the following information, applicable to the relevant AI system:

A general description of the AI system, including:
a) its intended purpose, the provider’s name, and the system version in such a way that its relationship with previous versions is reflected;
[…]

A detailed description of the elements of the AI system and of its development process, including:
a) the methods and measures adopted for the development of the AI system […]
b) the system’s design specifications […]

Detailed information on the monitoring, operation, and control of the AI system […]
ISO 42001:2023, Clause 7: Support
[…]
7.5 Documented information
[…]
7.5.3 Control of documented information

The documented information required by the AI management system and by this document shall be controlled in order to ensure that:

a) it is available and suitable for use, where and when it is needed;
b) it is adequately protected (for example, against loss of confidentiality, improper use, or loss of integrity).
On the one hand, the AIR requires a minimum level of documentation; on the other, ISO 42001:2023 organizes it so that it is available, findable, and kept up to date.

 

As can be seen from these examples, both frameworks complement each other in such a way that whoever knows how to take advantage of them jointly will have many advantages and will save themselves many headaches.

The time has come to integrate them: where to start?

It is important to highlight that the integration of both frameworks can be carried out in various ways. Each organization can adapt the sequence, the processes, and the depth of the controls according to its size, sector, capacity, or maturity in AI management. What is presented below is one of the possible ways, designed to systematically cover both the regulatory compliance of the AIR and the AIMS governance of ISO 42001:2023.

Let us look at it step by step:

1. Identify the scope of AI systems 

a. Inventory all AI systems, both proprietary and third‑party. 
b. Classify them according to risk level: prohibited, high risk, subject to transparency obligations, minimal risk.
c. Record responsible parties: provider, deployer, importer, distributor, etc. 
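The inventory in step 1 can live in something as simple as a spreadsheet, but a structured record makes the later classification and mapping steps much easier. A minimal sketch in Python follows; the risk levels and roles mirror the AIR categories and actors mentioned above, while the field names and the example entry are illustrative assumptions, not something either framework prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    """Risk categories from the AIR's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

class Role(Enum):
    """Actors the AIR distinguishes in the AI value chain."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystem:
    """One entry in the organization's AI system inventory (step 1)."""
    name: str
    purpose: str
    third_party: bool      # proprietary vs. third-party tool
    risk_level: RiskLevel
    our_role: Role
    owner: str             # internal responsible party

# Illustrative entry: a third-party CV-screening tool used by HR.
inventory = [
    AISystem(
        name="cv-screening-saas",
        purpose="Rank job applications",
        third_party=True,
        risk_level=RiskLevel.HIGH,
        our_role=Role.DEPLOYER,
        owner="HR / Compliance",
    ),
]

# Step 1b in practice: surface the high-risk systems first.
high_risk = [s.name for s in inventory if s.risk_level is RiskLevel.HIGH]
print(high_risk)
```

Even this small amount of structure forces the two questions the Regulation cares about to be answered explicitly for every system: what risk level does it fall under, and which role does our organization play for it.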

2. Map AIR regulatory requirements  

a. Analyze AIR articles and technical annexes applicable to each system.
b. Identify obligations by actor and by risk level: transparency, technical documentation, conformity testing, system registration, etc.

3. Map ISO 42001:2023 requirements  

a. Review clauses 4–10 and Annex A controls.
b. Identify processes, policies, and controls that can cover AIR obligations.
c. For example:
  i. ISO performance evaluation → AIR conformity testing.
  ii. ISO data quality control → AIR high‑risk data management.
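The mapping exercise in steps 2 and 3 can be captured as a simple crosswalk from each AIR obligation to the ISO 42001:2023 clauses and Annex A controls that can evidence it. A minimal sketch, where the entries reproduce the convergences discussed earlier in this article and the dictionary layout itself is just one possible representation:

```python
# Crosswalk: AIR obligation -> ISO 42001:2023 clauses / Annex A controls
# that can generate the evidence for it (examples taken from the
# convergence section above).
crosswalk = {
    "AIR Art. 9 (risk management system)": [
        "6.1 Actions to address risks and opportunities",
        "6.1.2 AI risk assessment",
    ],
    "AIR Art. 10 (data and data governance)": [
        "A.7.2", "A.7.3", "A.7.4", "A.7.5",
    ],
    "AIR Annex IV (technical documentation)": [
        "7.5 Documented information",
        "7.5.3 Control of documented information",
    ],
}

def iso_coverage(air_obligation: str) -> list[str]:
    """Return the ISO clauses/controls mapped to an AIR obligation."""
    return crosswalk.get(air_obligation, [])

print(iso_coverage("AIR Art. 10 (data and data governance)"))
```

Kept under version control, such a crosswalk becomes the traceability artifact that step 6 asks for: every regulatory obligation points at the management-system evidence that satisfies it.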

4. Perform a gap analysis  

a. Compare AIR requirements with existing ISO processes and controls.
b. Identify where controls, documentation, or objective evidence are missing.
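Once the crosswalk exists and the organization knows which controls it has already implemented and evidenced, the gap analysis of step 4 reduces to a set comparison. A hedged sketch, in which both the control identifiers and the "implemented" set are hypothetical examples of a current state:

```python
# Gap analysis (step 4): compare the ISO controls each AIR obligation
# needs against the controls already implemented and evidenced.
required = {
    "AIR Art. 9": {"6.1", "6.1.2"},
    "AIR Art. 10": {"A.7.2", "A.7.3", "A.7.4", "A.7.5"},
    "AIR Annex IV": {"7.5", "7.5.3"},
}

# Hypothetical current state of the organization.
implemented = {"6.1", "6.1.2", "A.7.2", "A.7.4", "7.5"}

# Keep only the obligations with missing controls, sorted for reporting.
gaps = {
    obligation: sorted(needed - implemented)
    for obligation, needed in required.items()
    if needed - implemented
}

for obligation, missing in gaps.items():
    print(f"{obligation}: missing {', '.join(missing)}")
```

The output of this comparison is exactly the input that step 5 needs: a prioritized list of missing controls, documentation, or evidence per obligation, ready to be assigned to internal owners.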

5. Define integration plan and corrective actions  

a. Assign internal responsibilities (compliance, IT, CISO, operations, legal, etc.).
b. Create policies, procedures, and records needed to cover legal requirements.
c. Prioritize actions for high‑risk systems.

6. Update ISO 42001:2023 processes to comply with the AIR  

a. Integrate regulatory requirements into AI management processes.
b. Ensure traceability with AIR obligations linked to the evidence required by ISO (policies, logs, test records, technical documentation).

7. Training and awareness  

a. Train development, operations, and compliance teams on AIR risks and proper use.
b. Ensure everyone understands their role in risk management and compliance.

8. Monitoring and internal audit  

a. Periodically review AIR compliance within the ISO 42001:2023 system.
b. Review the inventory of AI systems used, detect deviations from compliance with both frameworks, document findings, and update controls as needed.

9. Continuous improvement  

a. Adjust processes and controls according to regulatory changes or new interpretations (case law), and review new AI implementations.
b. Incorporate feedback from internal and external audits.
c. Maintain records of objective evidence for inspections or regulatory audits.

Conclusion 

Integrating the AIR and ISO 42001:2023 is not a formal or documentary matter; it is a strategic decision that enables business outcomes, from staying competitive in the current market to opening new service lines and participating in tenders. The Regulation sets the playing field and the consequences of non‑compliance. The Standard provides the structure to play with order, method, and traceability, even without certification.

Organizations that grasp this convergence now no longer treat compliance as a reaction; they understand the context and work systematically. They do not improvise when facing an inspection or rush before an audit; they already have processes, evidence, and clear responsibilities integrated into their daily operations and organizational culture.

In an environment where AI is no longer experimental and where regulatory oversight will increase, the advantage does not lie in having more technology, but in having it governed. Integrating both frameworks is not duplicating efforts; it is turning the legal obligation into sustainable organizational capability.

The question, therefore, is not whether to integrate the AIR and ISO 42001:2023, but when to start doing so in a structured way.

If you want to know where your organization stands with respect to AI or get to work on its adaptation, at Internet Security Auditors we can help you [5][6][7].

References

[1] AESIA. Retrieved from Guide 1. Introduction to the AI Regulation: https://aesia.digital.gob.es/storage/media/01-guia-introductoria-al-reglamento-de-ia-1770802981.pdf

[2] Amaya, J. R. (March 11, 2026). ISEC Auditors. Retrieved from The new era of responsible AI: how to implement an AIMS with ISO/IEC 42001: https://blog.isecauditors.com/implantacion-de-un-sgia-iso-iec-42001-2023

[3] Official Journal of the European Union. (July 12, 2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence). Retrieved from https://www.boe.es/buscar/doc.php?id=DOUE-L-2024-81079

[4] International Organization for Standardization. (July 30, 2025). Annex SL Appendix 2 – Harmonized structure for MSS with guidance for use. Retrieved from https://www.iso.org/cms/render/live/en/sites/isoorg/home.isoDocumentsDownload.do?t=2EVmNRpfMEK8NcTL_uoAJceDlxYmmqpQWNk3r1MeLNWCXk6i10vZ-R5FEjIK-UOe

[5] ISEC Auditors. (n.d.). Comprehensive Security Audit for AI Ecosystems. Retrieved from https://www.isecauditors.com/en/comprehensive-security-audit-for-ai-ecosystems

[6] ISEC Auditors. (n.d.). Assessment and Compliance Support in the Use of AI. Retrieved from https://www.isecauditors.com/en/evaluation-and-compliance-support-in-the-use-of-ai

[7] ISEC Auditors. (n.d.). Implementation of ISO 42001 AIMS. Retrieved from https://www.isecauditors.com/en/implementation-of-aims-iso-42001

[8] ISO. (2023). ISO/IEC 42001:2023 Standard – Information technology — Artificial intelligence — Management system.

[9] Prieto, J. A. (September 5, 2025). ISEC Auditors. Retrieved from From “Shadow IT” to “Shadow AI”: the new ghosts of information systems: https://blog.isecauditors.com/en/from-shadow-it-to-shadow-ai-the-new-ghosts-haunting-information-systems