Update: On the Long Road to the EU AI Act

12/24/23

Editorial team at Bits with Brains
After almost a year of debate and negotiations, the European Union finally reached a provisional agreement last week on landmark legislation to regulate artificial intelligence. The so-called EU AI Act aims to establish clear rules and obligations for the development, deployment, and use of AI systems based on the risks they pose.


While the text is still being finalized, the agreement lays the foundation for the world's first comprehensive legal framework for trustworthy AI, and it is expected to set precedents that could influence AI policy far beyond Europe. So, what are the key elements businesses need to know?


The Act defines AI broadly


The EU AI Act covers AI systems developed using machine learning, logic- and knowledge-based approaches, or statistical approaches. This broad, technology-neutral definition is meant to future-proof the regulation and cover any system that could automate decision making and have significant impacts.


The Act identifies four categories of risk


At the heart of the Act is a risk-based approach that divides AI systems into four categories based on the level of risk they pose: minimal risk, limited risk, high risk, and unacceptable risk. Each category carries specific transparency and compliance requirements. For example, high-risk systems like self-driving cars or AI diagnostic tools will face conformity assessments, documentation requirements, human oversight rules, and more.


The Act bans social scoring and other uses


The Act outright prohibits certain AI applications deemed to pose unacceptable risks, such as systems that use subliminal techniques to manipulate behavior or that score citizens based on social status or trustworthiness. Law enforcement use of biometric identification systems in public spaces will also face limitations.


The Act provides special rules for large language models


The negotiations almost broke down over how to regulate large language models like ChatGPT, amid concerns that strict rules could disadvantage European AI companies. The final agreement takes a nuanced approach: allowing some self-regulation while still requiring transparency measures for systems deemed high impact or high risk.


Implementation


The Act is expected to take effect in early 2024 after final adoption, though many provisions phase in over 18 to 24 months. Key open questions remain around enforcement details and the resources needed to operationalize a complex regulation spanning multiple sectors and technologies. Still, the agreement signals Europe's ambition to lead the way globally on trustworthy AI.


Sources: 

https://www.washingtonpost.com/technology/2023/12/08/ai-act-regulation-eu/

https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence