Shaping Global Governance of Use of Artificial Intelligence in War

Context:

The Responsible Use of Artificial Intelligence in the Military Domain (REAIM) summit, which recently took place in Seoul, is a key diplomatic effort to establish global norms for military AI applications.

 

More in News:

  • The rise of artificial intelligence (AI) in military applications is driving increased political efforts to regulate its use in warfare.
  • Ongoing conflicts in Ukraine and Gaza are acting as “AI labs,” prompting a diplomatic push to set general norms to manage the risks of military AI.

 

About the REAIM summit:

  • The summit on Responsible Use of Artificial Intelligence in the Military Domain (REAIM) aims to establish global norms for military AI applications. It is co-hosted by Kenya, the Netherlands, Singapore, and the United Kingdom.
  • The event gathered a diverse group of participants from around the world, including governments, international organisations, technology companies, academia, and civil society.
  • This summit is the second of its kind; the inaugural one was held in February 2023 in The Hague, hosted by the Netherlands. 
  • Although the inaugural summit did not yield major outcomes, it expanded the discussion on military AI and included a broader range of stakeholders.
  • The debate on military AI has largely centred on autonomous weapons, often termed “killer robots,” with concerns that computers and algorithms could dominate warfare.
  • This has led to calls for regulations to ensure that humans remain involved in decisions about the use of force.
  • Since 2019, the United Nations in Geneva has been discussing lethal autonomous weapon systems (LAWS) through a group of governmental experts.
  • The REAIM process expanded the discussion beyond “killer robots” to include a broader range of military AI applications, acknowledging AI’s growing role in warfare.

 

Ethical Dimensions of AI in the Military Domain:

  • Meaningful Human Control: Ensuring that humans retain oversight in AI-driven military decisions is vital for accountability.
  • Example: Israel’s use of AI in targeting systems with minimal oversight raises concerns about life-and-death decisions by machines.
  • Bias and Discrimination: AI systems can inherit biases, leading to unfair target selection.
  • Example: AI in Ukraine’s conflict could disproportionately target specific groups due to biased data.
  • Unpredictability: AI’s complex algorithms may lead to unintended consequences in combat.
  • Example: U.S. autonomous drone development poses risks of unpredictable AI behaviour in high-stakes situations.
  • Dual-Use Concerns in AI: AI technologies often have dual-use applications, serving both civilian and military purposes.
  • The Wassenaar Arrangement is a voluntary export control regime with 42 member states, aimed at increasing transparency and accountability in the transfer of conventional arms and dual-use technologies through regular information exchange.
  • This overlap raises concerns about the militarisation of AI and the risk of its misuse by both state and non-state actors, potentially escalating conflicts and undermining global security efforts.

 

 

China’s Military-Civil Fusion (MCF) Doctrine under the Chinese Communist Party (CCP)

  • Integration of Private Sector: China leverages the private sector under MCF, including 15 National AI Champions. The 2017 National Intelligence Law requires companies to assist in national intelligence work.
  • MCF Concept: It aims to develop dual-use technology for military purposes, reflecting a shift from “military-civilian integration” to a broader “military-civil fusion.”

Six Interrelated Efforts:

  • Defence and Civilian Base Fusion: Integrates China’s defence and civilian technology and industrial bases.
  • S&T Innovation Integration: Leverages science and technology innovations across both sectors.
  • Talent Cultivation: Blends military and civilian expertise.
  • Civilian Infrastructure Use: Incorporates military requirements into civilian infrastructure and uses civilian construction for military needs.
  • Civilian Capabilities for Military Use: Utilises civilian service and logistics capabilities for military purposes.
  • National Defense Mobilization: Expands China’s defence mobilisation to encompass all societal and economic aspects for competitive and wartime use.

 

  • Dehumanisation of Warfare: Increasing reliance on AI may erode the human element in warfare. Example: The film Captain America: The Winter Soldier dramatises these ethical issues.
  • Lack of Human Oversight: The film raises ethical concerns about AI making critical military decisions without human intervention, highlighting dilemmas of accountability and oversight.

 

IDF Alleged Use of AI in Targeting Hamas

  • AI programs “Lavender” and “The Gospel” reportedly used to identify and target Hamas operatives.
  • Lavender is an AI system developed by the Israeli military to identify suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) in Gaza. It has reportedly marked around 37,000 Palestinians as potential assassination targets with minimal human oversight.

Ukraine’s Development of AI Drones

  • Ukraine is developing AI drones with visual systems that autonomously identify and attack targets; these drones are already striking Russian military facilities and oil refineries.
  • AI “swarm” drone technology is under development to enable coordinated attacks with minimal human intervention.

China’s use of Swarm Technology

  • Military Use: China is a significant player in swarm technology, focusing on autonomous drones and swarm tactics for strategic and tactical advantages.

 

  • Real-World Parallels: The film’s depiction of Project Insight parallels current developments in Autonomous Weapons Systems (AWS), reflecting ongoing ethical debates about the use of such technologies in warfare.
  • Accountability Challenges: Determining responsibility for AI actions is difficult in military contexts.
  • There is a pressing need for regulations to ensure that AI systems are used responsibly and that human judgement remains integral in critical decisions.
  • Example: Private firms like Palantir are involved in military operations, which complicates ethical and legal responsibility.

 

Stance of countries on AI use in military domain:

  • US Leadership in Responsible AI Norms
  • The US issued a political declaration on responsible AI in 2023 and introduced national guidelines in 2020.
  • NATO also adopted responsible AI norms in 2021, aiming to secure military advantages while ensuring safe AI usage.
  • Growing AI Presence in Warfare
  • AI’s military application is inevitable, aligning with the trend of new technologies being adapted for warfare.
  • The REAIM process seeks global norms to prevent catastrophic outcomes from AI use in military contexts.
  • Global Efforts for AI Governance
  • The US leads AI discussions at the UN General Assembly, with a resolution co-sponsored by 123 countries, while REAIM offers a more detailed dialogue.
  • Over 50 countries have endorsed the US political declaration on military AI use.
  • India’s Cautious Approach
  • India remains in “watch-and-wait” mode, not endorsing The Hague’s “call to action” but evaluating the long-term impact.
  • Past experiences with nuclear arms control remind India of the importance of shaping global norms early.
  • India advocates for inclusive norms ensuring all nations have a voice in AI regulations.
  • China’s Active Role in Military AI and Challenges for India 
  • China leads discussions on “intelligentised warfare” and supported The Hague’s call for responsible AI use.
  • In 2021, China issued a White Paper on regulating military AI applications.
  • Book Insight: The book Strategic Challenges: India in 2030 explores the implications of AI in the military domain for India, particularly in light of high investments by China. 
  • Similarly, Colonel Newsham’s When China Attacks highlights the strategic challenges posed by China’s military AI strategy.

 

Indian Context:

Border Surveillance: The Indian Army has deployed over 140 AI-based surveillance systems along the borders with Pakistan and China. These systems integrate high-resolution cameras, sensors, unmanned aerial vehicle (UAV) feeds, and radar data, which are then analysed using advanced AI algorithms to detect intrusions and classify targets.

An AI-based surveillance software called AGNI-D was unveiled at Aero India 2023, one of Asia’s largest air shows. AGNI-D is deployed in the strategically important eastern Ladakh sector and can recognise movement, weapons, vehicles, tanks, or missiles captured by army surveillance cameras, both live and recorded.

Counter-Terrorism Operations: AI-based real-time monitoring software has been deployed by the Indian Army to generate intelligence in counter-terrorist operations.

While India’s adoption of military AI technology is relatively recent, substantial progress has been made in launching AI-enabled military devices. However, India’s current AI spending of approximately $50 million per year is inadequate compared to its primary strategic challenger, China, which is spending more than 30 times this amount. To avoid falling behind the technology curve, greater investments will have to be made, primarily to promote indigenous industry players. The integration of cutting-edge AI innovations into defence systems is crucial for India to position itself at the forefront of intelligent warfare strategies.
