The European Union (EU) has set a precedent by introducing an AI law that focuses on high-risk uses of AI technologies.
The bill, praised as “historic” by EU Commissioner Thierry Breton, will introduce a risk-based approach to AI oversight.
The bill adopts a risk-based approach focusing on high-risk areas, such as governments' use of AI for biometric surveillance. It also throws a regulatory net over ChatGPT and similar systems, requiring transparency before they can be brought to market. The landmark vote follows a December 2023 political agreement and caps months of careful fine-tuning of the text ahead of legislative approval.
This agreement paved the way for negotiations to be concluded through a vote by the permanent representatives of all EU member states on February 2.
This important step sets the stage for the bill to progress through the legislative process, including a vote in a pivotal committee of EU lawmakers scheduled for February 13 and an expected vote in the European Parliament in March or April.
The AI law's approach revolves around the principle that the higher the risk of an AI application, the greater the responsibility placed on its developers. This principle is particularly relevant in sensitive areas such as employment and educational admissions.
Margrethe Vestager, the European Commission's Executive Vice-President for a Europe Fit for the Digital Age, emphasized that the focus on high-risk cases is meant to ensure that the development and deployment of AI technologies are in line with the EU's values and standards.
Meanwhile, the AI Act is expected to apply fully from 2026, with certain provisions taking effect earlier to allow a gradual transition to the new regulatory framework.
Beyond establishing a regulatory foundation, the European Commission is actively supporting the EU's AI ecosystem. These efforts include the creation of a European AI Office responsible for monitoring compliance with the law, with a particular focus on high-impact foundation models that present systemic risks.
The EU's AI law is the first comprehensive AI law of its kind, aiming to regulate the use of artificial intelligence in the EU, ensure better conditions for deployment, protect individuals, and promote trust in AI systems.
The bill provides a clear and easy-to-understand approach to regulating AI based on four levels of risk: unacceptable, high, limited, and minimal. It will be implemented through national competent market surveillance authorities, supported by the European AI Office within the European Commission.
Stricter cryptocurrency regulations
The EU has proposed classifying cryptocurrencies as financial instruments and imposing stricter rules on non-EU cryptocurrency companies. The proposal is intended to curb unfair competition and standardize regulations for cryptocurrency companies operating within the EU.
The proposed measures include restrictions on non-EU cryptocurrency companies serving customers in the bloc, in line with existing EU financial laws that require foreign companies to establish branches or subsidiaries within the EU.
Additionally, the European Securities and Markets Authority (ESMA) introduced a second set of guidelines to regulate non-EU-based cryptocurrency companies, emphasizing the importance of regulatory clarity and investor protection.
The EU's move is part of a broader plan to establish regulatory clarity in the cryptocurrency sector, protect investors, and foster the growth of cryptocurrency services within the EU.