EU approves world's most comprehensive set of AI rules
European Union lawmakers gave final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year, AP reports.
Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed.
The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.
“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.
Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems, such as content recommendation engines and spam filters, are expected to be low risk; for those, companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.
Some AI uses are banned outright because they’re deemed to pose an unacceptable risk, such as social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.
Other banned uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.
AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated.
There’s extra scrutiny for the biggest and most powerful AI models that pose “systemic risks,” a category that includes OpenAI’s GPT-4, its most advanced system, and Google’s Gemini.
The EU says it’s worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyberattacks.” Officials also fear generative AI could spread “harmful biases” across many applications, affecting many people.
Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.