The AI Act prohibits certain AI practices deemed to be unacceptably risky.
Applicable from: February 2, 2025
The placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect, of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.
The placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
The placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.
*Workplace or Education
The placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
The placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavorable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavorable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behavior or its gravity.
*For Inferring Special Categories
The placing on the market, the putting into service for this specific purpose, or the use of biometric categorization systems that categorize individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement.
The placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. This prohibition does not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.
*In Public by Law Enforcement
The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; or
(iii) the localization or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
The AI Act imposes additional obligations and restrictions on AI systems that create a high risk to the health, safety or fundamental rights of natural persons.
August 2, 2026:
For designated high-risk AI systems (Annex III) placed on the market after August 2, 2026, or placed on the market before August 2, 2026 and subsequently subject to a significant change in their design.
August 2, 2027:
For regulated product and safety component high-risk AI systems (Annex I) placed on the market after August 2, 2027, or placed on the market before August 2, 2027 and subsequently subject to a significant change in their design.
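The phase-in above reduces to straightforward date logic. Below is a minimal sketch under the stated dates; the function and flag names are illustrative assumptions, not defined terms of the Act:

```python
from datetime import date

# Applicability dates for the two categories of high-risk AI systems.
ANNEX_III_DATE = date(2026, 8, 2)  # designated high-risk AI systems
ANNEX_I_DATE = date(2027, 8, 2)    # regulated product / safety component systems

def regime_applies(annex: str, placed_on_market: date,
                   significant_design_change: bool) -> bool:
    """Whether the high-risk obligations attach to a given system."""
    cutoff = ANNEX_III_DATE if annex == "III" else ANNEX_I_DATE
    if placed_on_market >= cutoff:
        return True  # in scope from placement on the market
    # Legacy systems are caught only if later subject to a
    # significant change in their design.
    return significant_design_change

print(regime_applies("III", date(2026, 9, 1), False))  # True
print(regime_applies("I", date(2026, 1, 1), False))    # False (legacy, unchanged)
print(regime_applies("I", date(2026, 1, 1), True))     # True
```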
Part 1: Regulated Product or Safety Component High-Risk AI
An AI system is high-risk under Part 1 where both of the following conditions are met (a sketch of this test follows the list below):
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed below (Annex I); and
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to its placing on the market or putting into service pursuant to that legislation.
Radio Equipment
Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62).
Cableway Installations
Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1).
Lifts
Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251).
Machinery
Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24).
Safety of Toys
Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1).
Gas Appliances
Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99).
Pressure Equipment
Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164).
Explosive Atmospheres
Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309).
Medical Devices
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1).
In Vitro Diagnostic Medical Devices
Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Personal Protective Equipment
Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51).
Marine Equipment
Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).
Motor Vehicles (Approval & Market Surveillance)
Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
Motor Vehicles (General Safety)
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
Motorcycles / Quads
Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).
Rail System
Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44).
Farm / Forest Vehicles
Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).
Recreational Craft / Personal Watercraft
Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90).
Civil Aviation Rules
Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.
Civil Aviation Security
Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).
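Read together, the two Part 1 conditions form a simple conjunction. Here is the minimal sketch promised before the list above, with parameter names chosen for readability rather than taken from the Act:

```python
# Illustrative sketch of the Part 1 classification test (Annex I route).
# BOTH conditions must be met.

def is_annex_i_high_risk(is_safety_component_or_covered_product: bool,
                         third_party_assessment_required: bool) -> bool:
    """High-risk where the AI system is a safety component of (or is
    itself) a product covered by the Annex I legislation AND that
    product must undergo a third-party conformity assessment."""
    return (is_safety_component_or_covered_product
            and third_party_assessment_required)

# Example: an AI safety component of a medical device subject to
# notified-body assessment under Regulation (EU) 2017/745.
print(is_annex_i_high_risk(True, True))  # True
```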
Part 2: Designated High-Risk AI Systems
All AI systems identified in Annex III of the Act are considered high-risk by default, absent an exception or derogation. The Annex III categories are detailed below. Note that Annex III is subject to amendment by the European Commission.
AI systems intended for identifying individuals without their active involvement, typically at a distance, through the comparison of an individual’s biometric data with the biometric data contained in a reference database.
AI systems intended to be used for biometric categorization, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics.
AI systems intended to be used for emotion recognition.
AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
AI systems intended to be used to:
(a) determine access or admission, or to assign natural persons, to educational and vocational training institutions at all levels;
(b) evaluate learning outcomes, including where those outcomes are used to steer the learning process of natural persons;
(c) assess the appropriate level of education that an individual will receive or will be able to access; or
(d) monitor and detect prohibited behavior of students during tests.
AI systems intended to be used to:
(a) recruit or select natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates; or
(b) make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics, or to monitor and evaluate the performance and behavior of persons in such relationships.
AI systems intended to be used for risk assessment and pricing in relation to individuals in the case of life and health insurance.
AI systems intended to:
(a) be used by or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, or to grant, reduce, revoke or reclaim such benefits and services;
(b) be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; or
(c) be used to evaluate and classify emergency calls by natural persons or to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as emergency healthcare patient triage systems.
AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities:
(a) to assess the risk of a natural person becoming the victim of criminal offences;
(b) as polygraphs or similar tools;
(c) to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d) to assess the risk of a natural person offending or re-offending not solely on the basis of profiling, or to assess personality traits and characteristics or past criminal behavior; or
(e) to profile natural persons in the course of the detection, investigation or prosecution of criminal offences.
AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies:
(a) as polygraphs or similar tools;
(b) to assess a risk, including a security risk, a risk of irregular migration or a health risk, posed by a natural person who intends to enter or who has entered the territory of a Member State;
(c) to assist competent authorities in examining applications for asylum, visas or residence permits and associated complaints with regard to eligibility, including related assessments of the reliability of evidence; or
(d) to detect, recognize or identify natural persons in the context of migration, asylum or border control management, with the exception of the verification of travel documents.
AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.
AI systems intended to be used for influencing the outcome of an election or referendum or the voting behavior of individuals in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which individuals are not directly exposed, such as tools used to organize, optimize or structure political campaigns from an administrative or logistical point of view.
Providers of high-risk AI systems must:
(a) ensure the system complies with the requirements for high-risk AI systems (risk management, data and data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity);
(b) indicate their name, registered trade name or trade mark and contact address on the system or, where that is not possible, on its packaging or accompanying documentation;
(c) have a quality management system in place;
(d) keep the required documentation and the logs automatically generated by the system, to the extent such logs are under their control;
(e) ensure the system undergoes the relevant conformity assessment procedure, draw up an EU declaration of conformity and affix the CE marking;
(f) comply with the applicable registration obligations;
(g) take the necessary corrective actions and inform the relevant authorities of non-compliance; and
(h) demonstrate the system’s conformity upon the reasoned request of a competent authority.
Other parties (such as distributors, importers or deployers) assume the same extensive obligations as providers in relation to “high-risk” AI systems if they: (i) use the AI system under their own name or trademark, (ii) modify the intended purpose of the AI system, or (iii) make a substantial modification to the AI system (see the sketch below).
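This role-shifting rule is a simple disjunction: any one trigger suffices. A minimal sketch, with illustrative parameter names (none of these are defined terms of the Act):

```python
# Sketch of the rule above: a distributor, importer, deployer or other
# third party is treated as the provider of a high-risk AI system if
# ANY one of the three triggers applies.

def becomes_provider(rebrands_system: bool,
                     changes_intended_purpose: bool,
                     makes_substantial_modification: bool) -> bool:
    """Any single trigger is enough to assume the provider's obligations."""
    return (rebrands_system
            or changes_intended_purpose
            or makes_substantial_modification)

# Example: white-labelling a high-risk system under your own trademark
# is sufficient on its own.
print(becomes_provider(True, False, False))  # True
```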
Importers of high-risk AI systems must verify that:
(a) the provider has carried out the relevant conformity assessment procedure;
(b) the provider has drawn up the required technical documentation;
(c) the system bears the required CE marking and is accompanied by the EU declaration of conformity and instructions for use; and
(d) the provider has appointed an authorized representative, where required.
Importers must also:
(a) not place a non-conforming system on the market and, where the system presents a risk, inform the provider, the authorized representative and the market surveillance authorities;
(b) indicate their name, registered trade name or trade mark and contact address on the system or its packaging or accompanying documentation;
(c) ensure that storage or transport conditions under their responsibility do not jeopardize the system’s compliance;
(d) keep a copy of the relevant certificates, instructions for use and EU declaration of conformity for ten years; and
(e) cooperate with competent authorities upon reasoned request.
Distributors of high-risk AI systems must verify that:
(a) the system bears the required CE marking;
(b) the system is accompanied by a copy of the EU declaration of conformity and instructions for use; and
(c) the provider and the importer, as applicable, have complied with their respective marking and information obligations.
Distributors must also:
(a) not make a non-conforming system available on the market and, where the system presents a risk, inform the provider or importer and the relevant authorities;
(b) ensure that storage or transport conditions under their responsibility do not jeopardize the system’s compliance;
(c) take the necessary corrective actions, or ensure that the provider, the importer or the relevant operator takes them, where a system already made available is non-conforming; and
(d) cooperate with competent authorities upon reasoned request.
Deployers of high-risk AI systems must:
(a) take appropriate technical and organizational measures to use the system in accordance with its instructions for use;
(b) assign human oversight to natural persons who have the necessary competence, training and authority;
(c) ensure that input data is relevant and sufficiently representative in view of the system’s intended purpose, to the extent the deployer exercises control over that data;
(d) monitor the operation of the system and inform the provider and the relevant authorities of risks and serious incidents;
(e) keep the logs automatically generated by the system, to the extent such logs are under their control, for an appropriate period;
(f) inform workers’ representatives and affected workers before putting the system into use in the workplace; and
(g) comply with applicable registration, information and fundamental rights impact assessment obligations.
The AI Act imposes additional obligations and restrictions on certain types of general-purpose AI models.
August 2, 2025:
For GPAIMs placed on the market on or after August 2, 2025.
August 2, 2027:
For GPAIMs placed on the market before August 2, 2025.
A general-purpose AI model is an AI model, including where trained with a large amount of data using self-supervision at scale, that:
(a) displays significant generality;
(b) is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market; and
(c) can be integrated into a variety of downstream systems or applications.
AI models used for research, development or prototyping activities before they are placed on the market are excluded.
AI models with at least a billion parameters trained with a large amount of data using self-supervision at scale should generally be considered as displaying significant generality and competence to perform a wide range of distinct tasks.
A general-purpose AI model meeting either of the following criteria will be classified as presenting systemic risk:
(a) it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; a model is presumed to have high-impact capabilities where the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25; or
(b) the European Commission decides, ex officio or following a qualified alert from the scientific panel, that the model has capabilities or an impact equivalent to those set out in point (a).
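The compute presumption in criterion (a) can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses the common 6 × parameters × training-tokens rule of thumb for dense transformer training compute; that estimate and the example model are assumptions for illustration, not methods or figures prescribed by the Act:

```python
# The Act presumes high-impact capabilities where cumulative training
# compute exceeds 10**25 floating-point operations (FLOPs).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer training compute: 6 * N * D (rule of thumb)."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumption triggered: {presumed_systemic_risk(70e9, 15e12)}")
# ~6.30e+24 FLOPs, below the 1e25 threshold
```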
Providers of general-purpose AI models must:
(a) draw up and keep up to date technical documentation of the model, including its training and testing process and the results of its evaluation;
(b) make information and documentation available to providers that intend to integrate the model into their AI systems;
(c) put in place a policy to comply with Union copyright law, including to identify and comply with reservations of rights; and
(d) draw up and make publicly available a sufficiently detailed summary of the content used for training the model.
Providers of general-purpose AI models presenting systemic risk must also:
(a) perform model evaluations in accordance with standardized protocols and tools, including conducting and documenting adversarial testing;
(b) assess and mitigate possible systemic risks at Union level;
(c) keep track of, document and report serious incidents and possible corrective measures to the AI Office and, as appropriate, national competent authorities; and
(d) ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
The AI Act imposes transparency obligations on providers and deployers of certain AI systems that interact directly with individuals or that expose individuals to AI-generated content, unless an exception applies.
Applicable from: August 2, 2026
The EU AI Act imposes obligations on providers, importers, distributors, and deployers of AI systems.