EU AI Act: first regulation on artificial intelligence
The EU just passed sweeping new rules to regulate AI. The European Union agreed on the terms of the AI Act, a major new set of rules that will govern the building and use of AI, with major implications for Google, OpenAI, and others racing to develop AI systems.
The 36-hour negotiation marathon that finalized the European Union’s comprehensive AI regulations comes with a catch: until the regulations take effect in 2025, the EU will remain in a legal vacuum.
Why this matters: Much as it did with digital privacy laws, the EU is once again establishing what may become global regulatory standards for AI, but the transition may not be smooth. The EU is the first jurisdiction in the world to pass comprehensive AI legislation.
Because the law won’t go into effect until 2025, the EU will encourage businesses to adhere to the regulations voluntarily in the interim. But if they don’t, there are no consequences.
The gap gives the U.S. and other parties plenty of opportunity to undermine the EU’s plans before they are implemented, for example by enacting less stringent regulations before those in Europe take effect.
Senate Majority Leader Charles Schumer has voiced concerns that US laws modeled after those in the EU would disadvantage US companies in the race against China.
The big picture: European legislators began work on the AI Act well before ChatGPT’s launch in November 2022 and the generative AI market explosion that followed in 2023.
The EU classifies AI applications into four risk categories, matching higher potential risks with progressively stricter regulations.
Specifics: EU law now prohibits a number of AI applications, including most emotion-recognition systems in educational and workplace settings and the mass, untargeted scraping of facial images. There are some safety exceptions, such as AI that detects when a driver is dozing off.
The new law also prohibits controversial “social scoring” systems, which rate a citizen’s compliance or trustworthiness.
It limits law enforcement’s use of facial recognition technology to a small number of legitimate purposes, such as identifying victims of human trafficking, terrorism, and kidnapping.
Providers of foundation models will have to publish detailed summaries of the training data used to build their models.
Businesses that break the regulations risk fines of 1.5% to 7% of their worldwide revenue.
Operators of systems that produce manipulated media will be required to disclose that fact to users.
Other providers of “high-risk” AI will have to comply with reporting requirements, including disclosures to public databases and human rights impact assessments, particularly where crucial public services are involved.
These requirements apply to AI applications in elections, employment, critical infrastructure, border control, and education.
The national governments of the EU, France foremost among them, pressed for and succeeded in obtaining legal exemptions for military or defense applications of artificial intelligence.
Context: National governments demanding exemptions for national security and lawmakers defending civil liberties constituted the primary line of division in the negotiations.
In the meantime, the EU is taking a more lenient approach than the US or the UK in determining whether Microsoft’s partnership with OpenAI breaches antitrust laws.
Although “the Commission has been following very closely the situation of control over OpenAI,” a spokesman for the European Commission tells Axios that a “change of control on a lasting basis” would be necessary to warrant a Commission investigation.
For their part, European Union officials are portraying the law as “a launchpad for EU startups and researchers,” in the words of European Industry Commissioner Thierry Breton.
The EU, according to Amnesty International, has authorized “dystopian digital surveillance.”
Dig deeper: Liability is next on the EU’s agenda for AI.
For more information about recent EU AI regulations, contact the Economou & Economou cyberlaw office at econlaw@live.com, call (+30) 2103603824, or fill in the contact form.