The European Union’s landmark artificial intelligence law officially entered into force in August 2024, marking a significant step towards regulating the technology. The EU AI Act bans certain applications of AI deemed to pose an “unacceptable risk” to citizens, including social scoring systems and real-time facial recognition.
Companies that fail to comply with the restrictions can face fines of up to 35 million euros ($35.8 million) or 7% of their global annual revenues, whichever is higher. By comparison, the EU’s strict digital privacy law, the GDPR, carries lower maximum fines of 20 million euros or 4% of annual global turnover for breaches.
While the AI Act still has some limitations, tech experts argue it is “very much needed” and will set a standard for trustworthy AI. Tasos Stampelos, head of EU public policy at Mozilla, describes the law as “predominantly a product safety legislation,” under which compliance is an ongoing obligation as products develop rather than a one-time milestone.
The EU has also established an AI Office to oversee how models are used in accordance with the AI Act. The office has published a second draft of its code of practice for general-purpose AI (GPAI) models, which sets out exemptions for open-source AI providers while requiring rigorous risk assessments from developers of “systemic” GPAI models.
Some tech executives worry that the AI Act’s more burdensome requirements could stifle innovation. Others argue that clear rules will give Europe a leadership advantage in developing trustworthy AI, and that the EU’s focus on regulation could ultimately make it easier to build and deploy AI systems that prioritize human oversight, bias detection, and regular risk assessments.
Source: https://www.cnbc.com/2025/02/03/eu-kicks-off-landmark-ai-act-enforcement-as-first-restrictions-apply.html