The uncertainty surrounding the regulation of Artificial Intelligence (AI) in the European Union (EU) was resolved today when France, a notable skeptic, joined the other member states in approving the law's technical details at a meeting of EU ambassadors.
France, initially hesitant about binding obligations for foundation models like ChatGPT, had expressed reservations about transparency requirements and the protection of trade secrets. In the end, however, the ambassadors endorsed the text unanimously, a pivotal moment in the EU's approach to AI regulation.
The European Commission’s risk-based approach to AI faced challenges in late 2022, when the release of OpenAI’s ChatGPT sparked a global debate. In response, the European Parliament added provisions to safeguard fundamental rights and introduced obligations for foundation models, which were not covered in the original proposal.
Germany, France, and Italy presented a counter-proposal favoring “mandatory self-regulation through codes of conduct” for foundation models. With today’s approval, the European Parliament is expected to vote in mid-February, with a plenary vote to follow in March or April. The AI Act is likely to enter into force later this year, with a 36-month implementation period; the first requirements for AI models will apply after one year.
The AI Act categorizes AI systems into four groups based on the risk they pose to society. Systems deemed to pose an unacceptable risk are banned outright. High-risk systems face strict pre-market rules and post-market oversight by national authorities and the EU’s AI Office. Systems with limited risk must meet basic transparency obligations, while minimal-risk systems are exempt from additional rules.
Today’s decision is a significant step toward establishing robust AI rules in the EU, addressing lingering concerns and laying the groundwork for a careful and safe approach to deploying the technology.