EU Artificial Intelligence Act: AI in the Balance
Context:
- The increasing reliance on Artificial Intelligence (AI) across critical sectors like military, healthcare, and finance has raised concerns about accountability, transparency, and fairness in decision-making.
- In response, many countries are developing frameworks to regulate AI. A key milestone in this effort is the European Union’s May 2024 Artificial Intelligence Act (AI Act).
About the EU AI Act:
- Definition of AI: The AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy, may adapt after deployment, and infers from its inputs how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments.
- The Act’s regulatory framework prioritises consumer safety through a risk-based approach, where AI products are categorised based on the potential harm they pose.
- Risk is determined by the likelihood and severity of harm, ensuring that products with higher risks undergo greater scrutiny, while lower-risk systems are subject to lighter or voluntary regulations.
- This structure aims to balance innovation with public safety.
- Risk-based Categorisation: The AI Act categorises AI systems into three risk levels:
- Unacceptable Risk: AI systems that manipulate users, perform untargeted scraping of facial images, predictive policing, or social scoring are banned.
- High Risk: Includes AI in healthcare, education, electoral processes, job screening, and infrastructure; subject to strict regulations.
- Limited/Minimal Risk: General-purpose AI systems posing little risk, such as music recommendation engines, are subject to fewer or no regulations.
- Scope and Applicability: The EU aims to replicate the “Brussels effect”, influencing global AI regulations as it did with the GDPR (General Data Protection Regulation).
- The Act applies to AI systems marketed or used within the EU, even if developed outside the EU.
- Military and research applications are excluded.
- Penalties: Non-compliance incurs fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
- Start-ups and SMEs face lower penalties to encourage innovation.
Implications of the AI Act:
- Fostering Innovation with Oversight: The AI Act aims to promote AI innovation while ensuring transparency, accountability, and the protection of citizens.
- Challenges with AI Definition: The Act’s definition of AI is criticised as reductionist, since it assumes systems pursue stable objectives derived from their inputs.
- In practice, AI systems can evolve and infer unintended objectives, producing unpredictable outcomes.
- Limitations of Product-Based Approach: Unlike traditional products, AI is dynamic and continuously evolving.
- This creates regulatory challenges, especially with generative AI such as ChatGPT, whose outputs can shift from benign to harmful over time.
- Innovation and Competition Concerns: There are fears that high regulatory costs might stifle competition, particularly for startups and smaller companies in the AI sector.
- Larger corporations may dominate due to better access to data and resources.
- Exemptions for Military, Research, and National Security AI: The blanket exemption for military and research AI under the AI Act raises concerns about potential misuse, reminiscent of past scandals such as Cambridge Analytica.
- Additionally, the exclusion of national security from the Act’s provisions could enable AI-driven surveillance and discrimination, particularly targeting marginalised communities.