Dive into the New EU AI Regulation: Understanding Its Impact on Artificial Intelligence and Your Safety
The rapid advancement of artificial intelligence (AI) has a profound impact on our daily lives, and Europe recognizes its pivotal role in shaping the region’s growth and prosperity. Harnessing the power of data and connected technologies is crucial, which is why a new EU AI Regulation was developed.
On 14 June 2023, Members of the European Parliament (MEPs) adopted their negotiating position on the AI Act, marking a significant step towards regulating AI. The upcoming discussions with EU countries in the Council aim to shape the final version of the law, with the objective of reaching an agreement by the end of this year.
The European Parliament’s push for a comprehensive AI Act reflects the urgency of striking a balance between leveraging AI’s potential benefits and mitigating its risks. The Act represents a concerted effort to establish a robust regulatory framework that safeguards the interests of individuals and businesses alike. Read on to learn more about how the EU AI Regulation applies to medical devices.
EU AI Regulation: A new approach for AI systems
By setting clear rules and standards, Europe aims to ensure that AI is deployed ethically, responsibly, and transparently. This legislation will not only promote the development of cutting-edge AI technologies but also foster trust and confidence among citizens.
The draft AI Act represents an effort to introduce comprehensive, horizontal regulation of AI. Focused on addressing the specific uses of AI systems and the risks associated with them, the proposed legal framework aims to establish a technology-neutral definition of AI systems within EU law. Additionally, it seeks to classify AI systems according to a risk-based approach, enabling tailored requirements and obligations for different categories of AI systems. The emphasis is on human oversight rather than full automation, to prevent detrimental outcomes.
The proposed AI Regulation Act sets out a clear set of objectives to guide its implementation:
- Ensure that AI systems placed on the EU market comply with existing EU laws and uphold safety standards
- Enhance governance and effective enforcement of EU laws pertaining to fundamental rights and safety requirements for AI systems
- Provide legal certainty to foster an environment conducive to investment and innovation
By setting clear obligations for both providers and users of AI systems based on the level of risk, the new rules strike a balance between enabling innovation and mitigating potential harms. Even AI systems posing minimal risk must be assessed to determine their risk category, underscoring the importance of a thorough evaluation process.
The levels of risk for AI systems are:
- Unacceptable risk
- High risk
- Generative AI
- Limited risk
AI Regulation for Medical Devices
Medical devices that incorporate AI systems are classified as “high risk”, since they are products covered by the EU’s product safety legislation. As a result, these devices are subject to thorough assessment before entering the market and throughout their entire life cycle. To ensure compliance, providers of high-risk AI systems must:
- Register their products in a centralized database managed by the Commission prior to market placement or service provision
- Follow the established third-party conformity assessment frameworks where the AI medical products and services fall under existing product safety legislation
- Conduct their own conformity assessment for AI systems not currently governed by EU legislation, demonstrating compliance with the new requirements for high-risk AI systems and obtaining the CE marking
High-risk AI systems are required to meet a comprehensive set of requirements, including aspects such as:
- Risk management
- Testing
- Technical robustness
- Training data and data governance
- Transparency
- Human oversight
- Cybersecurity
The focus on risk management, data governance, transparency, and cybersecurity emphasizes the importance of responsible AI development and usage, promoting public trust and confidence in these technologies.
Additionally, providers, importers, distributors, and users of such AI systems have a range of obligations to fulfil. Providers based outside the EU must appoint an authorized representative within the Union to oversee conformity assessment, establish a post-market monitoring system, and take corrective actions when necessary. AI systems that conform to the anticipated new harmonized EU standards, currently in development, will benefit from a presumption of conformity with the requirements outlined in the draft AI Act.
Eclevar stays up to date on regulatory changes for medical devices
Eclevar is an experienced Contract Research Organization with more than 20 years of experience helping manufacturers follow the procedures required to bring new health technologies to patients around the world.
Our team closely monitors regulatory developments and changes across different agencies and jurisdictions to better advise you on your product pathway. Contact us for more information on the new EU AI Regulation.