Shaping Global Governance of the Use of Artificial Intelligence in War
Context:
The Responsible Use of Artificial Intelligence in the Military Domain (REAIM) summit, which recently took place in Seoul, is a key diplomatic effort to establish global norms for military AI applications.
More in News:
- The rise of artificial intelligence (AI) in military applications is driving increased political efforts to regulate its use in warfare.
- Ongoing conflicts in Ukraine and Gaza are acting as “AI labs,” prompting a diplomatic push to set general norms to manage the risks of military AI.
About the REAIM summit:
- The summit on Responsible Use of Artificial Intelligence in the Military Domain (REAIM) aims to establish global norms for military AI applications. It is co-hosted by Kenya, the Netherlands, Singapore, and the United Kingdom.
- The event gathered a diverse group of participants from around the world, including governments, international organisations, technology companies, academia, and civil society.
- This summit is the second of its kind; the inaugural one was held in February 2023 in The Hague, hosted by the Netherlands.
- Although the inaugural summit did not yield major outcomes, it expanded the discussion on military AI and drew in a broader range of stakeholders.
- The debate on military AI has largely centred on autonomous weapons, often termed “killer robots,” with concerns that computers and algorithms could dominate warfare.
- This has led to calls for regulations to ensure that humans remain involved in decisions about the use of force.
- Since 2019, the United Nations in Geneva has been discussing lethal autonomous weapon systems (LAWS) through a group of governmental experts.
- The REAIM process expanded the discussion beyond “killer robots” to include a broader range of military AI applications, acknowledging AI’s growing role in warfare.
Ethical Dimensions of AI in the Military Domain:
- Meaningful Human Control: Ensuring that humans retain oversight in AI-driven military decisions is vital for accountability.
- Example: Israel’s use of AI in targeting systems with minimal oversight raises concerns about life-and-death decisions by machines.
- Bias and Discrimination: AI systems can inherit biases, leading to unfair target selection.
- Example: AI in Ukraine’s conflict could disproportionately target specific groups due to biased data.
- Unpredictability: AI’s complex algorithms may lead to unintended consequences in combat.
- Example: U.S. autonomous drone development poses risks of unpredictable AI behaviour in high-stakes situations.
- Dual-Use Concerns in AI: AI technologies often have dual-use applications, serving both civilian and military purposes.
- The Wassenaar Arrangement is a voluntary export control regime with 42 member states, aimed at increasing transparency and accountability in the transfer of conventional arms and dual-use technologies through regular information exchange.
- This overlap raises concerns about the militarisation of AI and the risk of its misuse by both state and non-state actors, potentially escalating conflicts and undermining global security efforts.
- Dehumanisation of Warfare: Increasing reliance on AI may erode the human element in warfare. Example: The film Captain America: The Winter Soldier illustrates the following ethical issues.
- Lack of Human Oversight: The film raises ethical concerns about AI making critical military decisions without human intervention, highlighting dilemmas of accountability and oversight.
- Real-World Parallels: The depiction of Project Insight parallels current developments in Autonomous Weapons Systems (AWS), reflecting ongoing ethical debates about the use of such technologies in warfare.
- Accountability Challenges: Determining responsibility for AI actions is difficult in military contexts.
- There is a pressing need for regulations to ensure that AI systems are used responsibly and that human judgement remains integral in critical decisions.
- Example: Private firms like Palantir are involved in military operations, which complicates ethical and legal responsibility.
Stance of countries on AI use in military domain:
- US Leadership in Responsible AI Norms
- The US issued a political declaration on responsible military use of AI in 2023, after introducing national guidelines in 2020.
- NATO also adopted responsible AI norms in 2021, aiming for military gains with safe AI usage.
- Growing AI Presence in Warfare
- AI’s military application is inevitable, aligning with the trend of new technologies being adapted for warfare.
- The REAIM process seeks global norms to prevent catastrophic outcomes from AI use in military contexts.
- Global Efforts for AI Governance
- The US led a resolution on AI at the UNGA, co-sponsored by 123 countries, with REAIM offering a more detailed dialogue on military applications.
- Over 50 countries have endorsed the US political declaration on military AI use.
- India’s Cautious Approach
- India remains in “watch-and-wait” mode, not endorsing The Hague’s “call to action” but evaluating the long-term impact.
- Past experiences with nuclear arms control remind India of the importance of shaping global norms early.
- India advocates for inclusive norms ensuring all nations have a voice in AI regulations.
- China’s Active Role in Military AI and Challenges for India
- China leads discussions on “intelligentised warfare” and supported The Hague’s call for responsible AI use.
- In 2021, China issued a White Paper on regulating military AI applications.
- Book Insight: The book Strategic Challenges: India in 2030 explores the implications of AI in the military domain for India, particularly in light of high investments by China.
- Similarly, Colonel Newsham’s When China Attacks highlights the strategic challenges posed by China’s military AI strategy.