EU Passes Historic AI Act

The new EU AI Act, the first comprehensive AI law in the world, aims to regulate the use of artificial intelligence in the European Union as part of the EU's digital strategy. It is designed to ensure better conditions for the development and use of AI technology, fostering benefits such as improved healthcare, safer transport, efficient manufacturing, and sustainable energy. The European Commission proposed this framework in April 2021, categorizing AI systems based on the risk they pose to users, leading to different levels of regulation.

The main objectives of the EU Parliament in the AI legislation are to ensure AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The act emphasizes human oversight over AI systems to prevent harmful outcomes and seeks to establish a technology-neutral, uniform definition of AI that can be applied to future AI systems.

The AI Act introduces different rules based on varying risk levels. AI systems are classified into four categories: unacceptable risk, high risk, generative AI, and limited risk:

  1. Unacceptable Risk: AI systems considered a threat to people will be banned. This includes systems used for cognitive behavioral manipulation, social scoring, and real-time and remote biometric identification like facial recognition, although some exceptions may apply, like for post-event crime investigation with court approval.

  2. High Risk: AI systems that could negatively affect safety or fundamental rights fall under this category. They are divided into two groups: those used in products under the EU's product safety legislation (like toys and medical devices) and AI systems in specific areas like biometric identification, critical infrastructure management, and law enforcement. These systems will be assessed both before market introduction and throughout their lifecycle.

  3. Generative AI: Systems like ChatGPT will need to meet transparency requirements, including disclosing that content is AI-generated, preventing the model from generating illegal content, and publishing summaries of the copyrighted data used for training.

  4. Limited Risk: AI systems in this category should comply with minimal transparency requirements, allowing users to make informed decisions about their use. This includes AI systems that generate or manipulate image, audio, or video content.
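The tiered structure above can be sketched as a simple lookup table. This is purely illustrative: the tier names and obligation strings below are paraphrased from the summary in this article, not the Act's legal taxonomy or text.

```python
# Illustrative sketch only: a simplified mapping of the AI Act's risk tiers
# to example obligations, paraphrased from the article (not legal text).
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    GENERATIVE = "generative"
    LIMITED = "limited"


# Hypothetical obligation summaries keyed by tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned (narrow exceptions, e.g. court-approved investigation)"],
    RiskTier.HIGH: ["pre-market assessment", "lifecycle monitoring"],
    RiskTier.GENERATIVE: [
        "disclose AI-generated content",
        "prevent generation of illegal content",
        "summarize copyrighted training data",
    ],
    RiskTier.LIMITED: ["minimal transparency requirements"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations associated with a risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```

The point of the sketch is that obligations attach to the tier, not to the individual system: classifying a system determines its regulatory burden.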

Timeline

  1. April 2018: The European Commission published the communication "Artificial Intelligence for Europe." This marked the beginning of a formal discourse on AI regulation at the EU level.
  2. April 2019: The AI High-Level Expert Group presented the "Ethics Guidelines for Trustworthy Artificial Intelligence." This was an important step in establishing ethical standards and guidelines for the development and use of AI within the EU.
  3. February 2020: The European Commission published a "White Paper on Artificial Intelligence." This document further expanded on the EU's approach to AI, focusing on fostering ecosystems of excellence and trust in AI technologies.
  4. April 2021: The European Commission presented its proposal for the EU AI Act. This was a significant step towards creating a comprehensive regulatory framework for AI within the EU.
  5. December 2022: The Council of the EU adopted its common position on the AI Act. The Council, representing the member states, brought its perspective and amendments to the proposed legislation.
  6. June 2023: The European Parliament adopted its negotiating position on the AI Act. The European Parliament, representing EU citizens, provided its input and suggestions for the AI Act, moving the legislation closer to adoption.
  7. December 2023: After 33 hours of talks, the European Parliament voted to pass the EU AI Act, which is set to come into force in 2025.

Impact of the AI Law

The EU AI Act (AIA) is set to have a significant impact on the AI industry in Europe and globally. The Act aims to establish a comprehensive regulatory scheme for artificial intelligence, with some policymakers believing it could set a worldwide standard, a phenomenon known as the "Brussels Effect." However, the actual global impact of the AIA is expected to be more limited and will depend on the specific sector and application of AI.

Key aspects of the AIA's impact include:

  1. AI in Regulated Products: AI systems used in regulated products are expected to be significantly affected globally. This is because to continue exporting to Europe's large consumer market, foreign companies will need to adapt to the AIA's rules. The uniformity in production processes could lead to international conformity, aligning with EU regulations. However, the influence of existing markets, international standards bodies, and foreign governments may limit the EU's unilateral rule-setting power.

  2. High-Risk AI in Human Services: AI systems that are incorporated into online platforms or internationally interconnected platforms will be highly influenced by the AIA. For instance, LinkedIn's AI systems could be universally affected due to the platform's global interconnectedness. However, AI systems that are more localized or individualized, such as those used in hiring or educational technology, might only follow AIA requirements within the EU, leading to a less pronounced Brussels effect.

  3. Transparency Requirements for AI: The AIA imposes transparency obligations for AI systems interacting with people. This will likely lead to global disclosure on most websites and apps, especially for chatbots. While it's a relatively minor technical change, it's expected to have a notable worldwide effect, illustrating the de facto Brussels effect. However, the impact on smartphone apps might differ, as many are already tailored to specific national markets.

Overall, the AIA is expected to influence the development and deployment of many AI systems around the world and inspire similar legislative efforts. However, its global impact is anticipated to be more modest than what EU policymakers might present. The AIA will likely contribute to the EU's global influence over some online platforms but will only moderately shape international regulation. The actual impact will depend on how the AI industry adapts to these new requirements and the extent to which global standards and practices align with the AIA's provisions.

Opposition to the EU AI Act

One of the prominent groups opposing certain aspects of the EU AI Act is Amnesty International. They have expressed significant concerns about the Act's failure to ban public mass surveillance, particularly the use of facial recognition technology. Amnesty International argues that the Act, as it currently stands, effectively greenlights what they term "dystopian digital surveillance" in EU Member States. This, they believe, sets a worrying global precedent for AI regulation.

Amnesty International specifically points out the missed opportunity to fully ban facial recognition, which they consider crucial for preventing significant harm to human rights, civic space, and the rule of law. The organization also criticizes the EU AI Act for not banning the export of harmful AI technologies, such as those used for social scoring. According to them, allowing European companies to profit from technologies that could impermissibly harm human rights establishes a dangerous double standard.

Amnesty International has been part of a coalition of civil society organizations, including the European Digital Rights Network (EDRi), advocating for EU artificial intelligence regulation that protects and promotes human rights.

Alia Systems will be unpacking the implications of the Act in the coming days.