
The EU AI Act’s ban on AI systems considered to present “unacceptable risks” (e.g., manipulative systems, social scoring, and real-time biometric identification) took effect on February 2, 2025. The ban will be strictly enforced, and the Act as a whole is being implemented in phases to strike a balance between innovation and safety.
The EU AI Act’s first requirements are now legally binding. These include a ban on certain “unacceptable” uses of AI and an obligation for companies that provide or use AI to ensure their staff are sufficiently AI-literate.
The first compliance deadline for the Act, a first-of-its-kind set of rules for regulating AI that entered into force in August 2024, passed on February 2, 2025. The Act addresses the risks of AI and aims to make Europe a leader in the technology’s development by giving AI developers and deployers a clear set of rules for how to build, use, and deploy AI.
Now that the first compliance deadline has passed, companies must meet the AI literacy requirements, and AI systems that pose “clear threats to the safety, livelihoods, and rights of people” are now illegal.
Companies that fail to comply, or that break the rules repeatedly, face fines of up to 35 million euros or 7% of their global annual revenue, whichever is higher.
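The penalty ceiling is simply the larger of the fixed amount and the revenue-based amount. A minimal Python sketch of that arithmetic, using a hypothetical helper name (this is an illustration, not official tooling):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on fines for prohibited-AI violations under the EU AI Act:
    the greater of a fixed 35 million euros or 7% of global annual revenue."""
    FIXED_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# For a company with 1 billion euros in annual revenue, the 7% figure dominates:
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 70,000,000
```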
What is the Artificial Intelligence Act?
The AI Act is meant to ensure that Europeans can trust AI, and it forms part of a broader set of EU policy initiatives promoting the development of trustworthy AI, encompassing research, collaboration, investment, and legislation.
In recent years, rapid advancement and innovation in AI have meant that adoption has often outpaced legislation, leaving enterprises themselves responsible for ensuring that AI is used safely and responsibly.
The European Commission states that most AI systems pose minimal or no risk and can help address complex problems and benefit society. Certain systems, however, pose risks that, if left unaddressed, may harm individuals and businesses.
The Act employs a risk-based regulatory system that calibrates the level of regulation to the assessed risk a given AI application poses to society.
Under this framework, an AI system must undergo a conformity assessment and meet EU requirements before it can be placed on the market. It is then registered in the EU’s database, and a declaration of conformity is signed. Any significant modification to the AI system requires the procedure to be repeated.
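A minimal sketch of that lifecycle, assuming simplified step names (the real procedure involves notified bodies and detailed technical documentation):

```python
# Illustrative conformity lifecycle for a high-risk AI system; the step
# names are simplified assumptions, not the Act's formal terminology.
STEPS = [
    "conformity_assessment",    # verify compliance with EU requirements
    "register_in_eu_database",  # record the system in the EU's database
    "sign_declaration",         # sign the declaration of conformity
    "place_on_market",
]

def bring_to_market(system: str) -> None:
    for step in STEPS:
        print(f"{system}: {step}")

def substantial_modification(system: str) -> None:
    # Any significant change sends the system back through the procedure.
    print(f"{system}: substantial modification -> repeat lifecycle")
    bring_to_market(system)

bring_to_market("resume-screening model")
substantial_modification("resume-screening model")
```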
Prohibited AI Applications
The EU AI Act prohibits several categories of AI systems considered to pose unacceptable risks to fundamental rights and safety. These include:
- Manipulative systems using subliminal techniques or exploiting the vulnerabilities of specific groups.
- Social scoring systems that evaluate individuals based on behavior or personality traits.
- Untargeted facial recognition databases created by scraping online images or CCTV footage.
- Real-time biometric identification in public spaces, with limited exceptions for law enforcement.
- AI inferring emotions in educational or workplace settings.
- Systems predicting criminal behavior based solely on profiling or personality assessment.
These prohibitions aim to protect citizens from potential harm and ensure AI development aligns with EU values and human rights standards.
Efficient regulation: A risk-based approach
The AI Act sorts risk into four categories: minimal or no risk, limited risk, high risk, and unacceptable risk. High-risk applications are subject to rigorous requirements, including comprehensive risk assessment and mitigation strategies, strict data quality standards to reduce the likelihood of biased results, and high levels of robustness, cybersecurity, and accuracy.
Some examples of high-risk use cases are critical infrastructure, medical applications, hiring and recruitment, education, and important private and public services like credit scoring and law enforcement.
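As a rough illustration of how an organization might triage its AI portfolio under this tiered approach, the sketch below maps hypothetical use cases to the four risk categories. The assignments and obligation summaries are simplified assumptions for illustration, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, risk management, data quality, robustness"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical triage table; real classification requires legal review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```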
The new rules that took effect on February 2 apply to AI systems deemed to pose the greatest, “unacceptable” risk.
The AI Act prohibits the following:
- AI that exploits vulnerabilities such as old age, low income, or disability.
- AI that makes decisions using techniques designed to deceive or manipulate people.
- AI that predicts from a person’s appearance whether they are likely to commit a crime.
- AI used for social scoring.
- AI that scrapes facial images from the internet or CCTV footage without a specific purpose.
- AI that attempts to infer people’s emotions in schools and workplaces.
- AI that infers personal characteristics about people from their data.
- AI that collects real-time biometric data in public places for law enforcement purposes.
Generative AI is classified in the legislation as a type of “general-purpose” AI, a category covering technologies engineered to perform a diverse range of tasks, in some cases at or beyond the level of human abilities.
The Act requires such AI systems to comply with EU copyright law, provide transparency disclosures about how their models were trained, and put adequate cybersecurity measures in place. These rules take effect on August 2, 2025.
The rules for high-risk AI systems integrated into regulated products have an extended transition period, running until August 2, 2027.