AI regulatory milestones
The European Parliament has made history by approving the EU AI Act, one of the world’s first comprehensive regulations on artificial intelligence. The legislation aims to ensure that AI developed within the European Union is trustworthy, safe, and respectful of fundamental rights, while still promoting innovation.
Key provisions of the EU AI Act
The EU AI Act classifies AI applications into four risk-based categories, with the high-risk category subject to the strictest rules. The Act bans “unacceptable risk” AI systems that pose clear threats to safety, livelihoods and rights, such as government social scoring or toys that encourage dangerous behavior.
Impact on AI applications
High-risk applications include AI used in critical infrastructure, education, safety components of products, essential public services, and law enforcement. Limited-risk applications are subject mainly to transparency obligations, such as making users aware that they are interacting with an AI chatbot.
Implementation and enforcement
The EU AI Act is set to undergo minor linguistic changes during translation before a final vote in April and publication in the Official Journal of the EU in May. The ban on prohibited practices is expected to take effect in November, with a phased timeline for mandatory compliance to follow.
Response from industry and experts
The EU AI Act has drawn criticism from some technology companies concerned about over-regulation, but others, such as IBM, have praised the law for its risk-based approach and commitment to ethical AI practices. Christina Montgomery, IBM’s vice president and chief privacy and trust officer, commended the EU’s leadership in passing the legislation.