The European Union has advanced its regulatory framework for artificial intelligence (AI), with member states voting to approve the final text of the EU’s AI Act.
EU Commissioner for the Internal Market Thierry Breton confirmed the “endorsement of the political agreement reached in December” 2023 by all 27 member states. In a post on social media platform X, he described the AI Act as historic and a world first.
The AI Act takes a risk-based approach to regulating AI applications. The agreement covers governments’ use of AI in biometric surveillance, the regulation of AI systems such as ChatGPT, and the transparency rules to be followed before market entry.
Following the December political agreement, work began to turn the agreed-upon positions into a final compromise text for approval by lawmakers, concluding with the Feb. 2 vote in Coreper, the committee of permanent representatives of all member states.
Experts have voiced significant concern about deepfakes — realistic yet fabricated videos created by AI algorithms trained on online footage — appearing on social media and blurring the line between truth and fiction in public discourse.
Margrethe Vestager, the European Commission’s executive vice president for a Europe Fit for the Digital Age, said Friday’s agreement is a significant step toward adopting the AI Act. She said:
“Based on a simple idea: The riskier the AI, the greater the liabilities for developers. For example, if used to sort applicants for a job or be admitted to an education program. That’s why the #AI Act focuses on the high-risk cases.”
The agreement on Friday came as France withdrew its objection to the AI Act. On Jan. 30, Germany also backed the act after the Federal Minister for Digital Affairs and Transport, Volker Wissing, said a compromise had been reached.
The AI Act is set to move toward becoming law with a vote by a key committee of EU lawmakers on Feb. 13, followed by a European Parliament vote in March or April. It is expected to apply from 2026, with specific provisions taking effect earlier.
The European Commission is taking steps to establish an AI Office to monitor compliance by a group of high-impact foundation models considered to pose systemic risk. It also unveiled measures to support local AI developers, such as upgrading the EU’s supercomputer network for generative AI model training.