EU Unveils Strict AI Regulations with Fines of Up to €35 Million

The European Union’s comprehensive AI regulatory framework, the AI Act, officially came into force on August 1, 2024. Under the regulation, regulators can ban the use of AI systems deemed to pose an “unacceptable risk” or harm to individuals. The first compliance deadline fell on February 2, 2025.

Under the EU’s approach, AI applications are categorized into four broad risk levels: minimal, limited, high, and unacceptable. High-risk applications face heavy regulatory oversight, unacceptable-risk applications are banned outright, and limited- and minimal-risk applications get a light-touch approach. Prohibited practices include social scoring, manipulative or deceptive AI techniques, and exploitative uses of biometric data.

Companies that deploy these prohibited AI applications in the EU face fines of up to €35 million or 7% of their worldwide annual turnover from the prior financial year, whichever is greater. However, the penalty provisions won’t take effect until August 2025.
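As an illustration, the “whichever is greater” rule means the €35 million figure acts as a floor that the turnover-based percentage can exceed but never undercut. A minimal sketch (the function name and sample turnover figures are hypothetical, chosen only to show the arithmetic):

```python
def max_ai_act_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for a prohibited-practice violation:
    EUR 35 million or 7% of worldwide annual turnover, whichever
    is greater. Illustrative helper, not legal advice."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Large company: 7% of EUR 1 billion is EUR 70 million, which exceeds the floor.
print(max_ai_act_fine_eur(1_000_000_000))  # 70000000.0

# Smaller firm: 7% of EUR 100 million is only EUR 7 million,
# so the EUR 35 million floor applies instead.
print(max_ai_act_fine_eur(100_000_000))  # 35000000.0
```

In practice the actual fine is set by regulators up to this ceiling; the calculation above only shows how the ceiling itself is determined.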

Several notable tech giants, including Amazon, Google, and OpenAI, have signed the EU AI Pact, a voluntary pledge to apply the principles of the AI Act ahead of its obligations taking effect. Others, such as Meta and Apple, have opted not to sign.

The European Commission plans to release additional guidelines in early 2025, but these have yet to be published. It also remains unclear how existing laws will interact with the AI Act’s prohibitions and related provisions, so organizations will need to understand how these overlapping regimes fit together.

Source: https://techcrunch.com/2025/02/02/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu