The European Union’s landmark AI Act officially entered into force in August 2024. This pioneering legislation aims to regulate artificial intelligence technologies, particularly applications deemed to pose “unacceptable risk” to citizens. The first compliance deadline, covering prohibitions on certain AI systems and requirements for staff AI literacy, passed on Sunday, marking a significant milestone in the EU’s regulatory rollout.
Focus on Safety and Compliance
The AI Act is the first comprehensive regulation of its kind globally, establishing a framework that treats artificial intelligence applications primarily as a matter of product safety. According to Tasos Stampelos, head of EU public policy and government relations at Mozilla, the legislation focuses chiefly on ensuring that AI technologies do not jeopardize public safety. Companies that violate the Act’s provisions face substantial penalties: fines of up to 35 million euros ($35.8 million) or 7% of global annual revenue, whichever is higher.
The EU AI Office, a body established within the European Commission to oversee compliance with the AI Act, will supervise the use of AI models under the new legislation. In December, the office published a second draft of its code of practice for general-purpose AI (GPAI) models, which covers systems such as OpenAI’s GPT family of large language models. The Act requires developers of GPAI models deemed to pose “systemic risk” to undergo rigorous risk assessments to ensure safe deployment.
The second draft includes exemptions for providers of certain open-source AI models, a move that seeks to balance innovation with regulation while addressing concerns from various stakeholders. In June 2024, Prince Constantijn of the Netherlands voiced concerns about Europe’s stringent focus on regulating AI. He noted, “It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space.”
Despite such criticisms, many experts argue that the EU AI Act is necessary for fostering responsible AI development. Diyan Bogdanov stated, “While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones.” He emphasized that the Act’s requirements (bias detection, regular risk assessments, and human oversight) do not inhibit innovation but rather set standards for responsible AI practices.
As enforcement begins, companies across Europe will need to adapt quickly to the new regulations. The AI Act’s maximum penalties are even steeper than those under the General Data Protection Regulation (GDPR), whose fines can reach 20 million euros or 4% of annual global turnover, underscoring the financial risk of non-compliance.
What The Author Thinks
The European Union’s AI Act represents a crucial step toward establishing a regulatory framework that balances the need for technological innovation with the imperative of safeguarding public welfare. By setting strict guidelines for AI development and deployment, the EU is taking a proactive approach to manage the risks associated with these powerful technologies. This legislation could serve as a model for other regions, promoting a global standard for the ethical use of artificial intelligence.
Featured image credit: Jean-Etienne Minh-Duy Poirrier via Flickr