Elections, Accountability, and Democracy in the Time of A.I.


Context:

Over 80 countries, including seven of the ten most populous, are holding or preparing for elections, representing some three billion registered voters across South Asia, the United States, the European Union, and the United Kingdom.

More on News:

  • These elections occur amid global challenges such as geopolitical tensions, inflation, inequality, and societal polarisation.
  • The rise of AI-driven misinformation, highlighted as a top global risk by the World Economic Forum’s 2024 Global Risks Report, poses significant threats to informed voter decisions. 
  • Foreign and domestic actors, including states such as China, Russia, and Iran, are increasingly using AI tools to manipulate public opinion, disrupt election integrity, and exploit divisive narratives such as voter-fraud claims. 
  • Agencies like the FBI and CISA warn that these tactics aim to undermine trust during critical election phases and gather data for future interference efforts.

Technology and Democracy: From Early Promise to Emerging Dangers:

In July 2024, a viral “deepfake” video, created with a voice-cloning tool to mimic Vice President Kamala Harris, spread disinformation by portraying her as incompetent and labelling her a “diversity hire.” 

  • Narratives of Liberation and Control: Foreign interference in democratic elections is not new. 
    • For instance, invoking the logic of the Monroe Doctrine, the US backed the 1973 military coup that overthrew Chile’s democratically elected government and installed a junta. 
    • Russia’s interference in the 2016 US presidential election remains a striking example.
  • Early Optimism About Digital Tools: Digital platforms like WhatsApp, Facebook, and YouTube initially empowered citizens, enabling better campaign access, fundraising, and feedback. 
    • User data allowed political actors to personalise outreach, while machine-learning techniques enhanced predictive insights and engagement. 
  • Emerging Threats in the AI Era: While AI holds potential for democratic improvements, its misuse endangers representation, accountability, and trust. 
    • A May 2024 study suggests that while voters’ decisions are often premeditated, persuasive AI-generated audio-visual content poses a significant risk of swaying opinions, particularly among undecided and marginal voter groups.

AI in the Electoral Process: Insights from South Asia:

In South Asia, where over a billion voters participated in elections in early 2024, trends highlight the growing influence of AI and misinformation in shaping democratic processes.

  • Bangladesh: In the January 2024 elections, disinformation campaigns targeted the opposition Bangladesh National Party (BNP). 
    • Additionally, misinformation campaigns targeted the US government for pressuring the ruling Awami League to ensure a fair election, with disinformation networks generating revenue from malicious content.
  • Pakistan: In February 2024, former Prime Minister Imran Khan leveraged AI-generated audiovisual messages to claim election victory while in prison, despite a crackdown on his party, Pakistan Tehreek-e-Insaf (PTI). 
  • Sri Lanka: During Sri Lanka’s September 2024 presidential elections, a “shallowfake” video of Donald Trump endorsing National People’s Power leader Anura Kumara Dissanayake caused political backlash. 
  • India: India’s general elections (April–June 2024) involved 969 million voters. AI facilitated large-scale surveys, sentiment analysis, and voter personalisation while improving vote counting and fraud prevention. 
    • Efforts to counter misinformation included a WhatsApp helpline from the Misinformation Combat Alliance and Meta, as well as interventions by the Election Commission of India (ECI) under Article 324 of the Constitution. 
    • While India saw limited deepfake content and no direct evidence of foreign AI interference, OpenAI disrupted a covert operation by an Israeli firm that generated anti-BJP, pro-Congress social media content. 

The Imperatives of Resilience, Governance, and Awareness:

Advancements in Artificial Intelligence (AI), particularly Generative AI, are reshaping content creation, necessitating a transformative approach to combating misinformation. 

  • Detection (Debunking): A range of forensic tools and techniques can identify deepfakes and misinformation. 
    • These include analysing unnatural blinking, distortions in features, inconsistencies in lighting within videos, mismatches between speech and mouth movements, and the absence of biometric markers specific to known individuals. 
  • Prevention (Content Provenance): The content provenance approach focuses on embedding watermarks or metadata into digital content. 
    • This metadata records the creator, creation time, and method of production, enabling platforms and users to verify authenticity. 
    • Standards like the Coalition for Content Provenance and Authenticity (C2PA) have gained traction. C2PA’s framework binds provenance data to media from creation through editing, ensuring a tamper-evident record that can be verified by users or downstream systems. 
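One of the detection signals listed above, unnatural blinking, can be illustrated with a toy sketch. Assume a face-landmark library has already produced a per-frame eye-aspect-ratio (EAR) series for a clip; the code below only post-processes that series, counting blinks and flagging implausibly low blink rates. The threshold values are illustrative assumptions, not forensic standards.

```python
# Toy heuristic for the "unnatural blinking" deepfake signal.
# Assumes a per-frame eye-aspect-ratio (EAR) series from a face-landmark
# detector; this sketch only post-processes that series.
# Thresholds below are illustrative, not forensic standards.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks: contiguous runs of frames where EAR drops below threshold."""
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < closed_thresh and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_thresh:
            eyes_closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, min_blinks_per_min=6):
    """Flag clips whose blink rate is implausibly low for a real speaker."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# Example: 60 seconds of "video" at 30 fps containing only one blink.
ear = [0.3] * 1800
ear[900:905] = [0.1] * 5  # a single 5-frame blink
print(blink_rate_suspicious(ear))  # True under these illustrative thresholds
```

In practice, such heuristics are only one weak signal among many and are combined with the other cues named above (lighting inconsistencies, speech–lip mismatches, missing biometric markers), since newer generators have learned to produce plausible blinking.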
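The provenance workflow described above, binding creator, time, and method metadata to the media and verifying it later, can be sketched in miniature. The following is not the C2PA format itself (which uses X.509 certificate chains and embedded manifests); it is a minimal stand-in that signs a metadata manifest plus a hash of the media bytes with an HMAC, using an assumed shared key, so that tampering with either the media or the manifest is detectable.

```python
# Minimal sketch of content provenance: bind a metadata manifest to media
# bytes with an HMAC so that tampering with either is detectable.
# NOTE: a stand-in for illustration, not the actual C2PA format, which
# uses X.509 certificates and manifests embedded in the media file.
import hmac, hashlib, json

SECRET_KEY = b"demo-signing-key"  # assumed shared key; real systems use PKI

def attach_provenance(media: bytes, creator: str, tool: str, created: str) -> dict:
    """Build a signed manifest recording who made the content, how, and when."""
    manifest = {"creator": creator, "tool": tool, "created": created,
                "media_sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(media: bytes, manifest: dict) -> bool:
    """Check both the media hash and the manifest signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed.get("media_sha256") != hashlib.sha256(media).hexdigest():
        return False  # media bytes were altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...original video bytes..."
m = attach_provenance(video, "Newsroom X", "CameraApp 2.1", "2024-09-01T12:00Z")
print(verify_provenance(video, m))            # True: record intact
print(verify_provenance(video + b"edit", m))  # False: tamper-evident
```

C2PA extends this basic idea across the content lifecycle: each edit appends a new signed assertion, so downstream users can trace a chain of custody rather than a single creation event.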

Regulation: Transparency, Accountability, and Enforcement

Global AI regulation requires both international frameworks and national policies.

  • The EU AI Act: A Comprehensive Framework: Adopted by the European Parliament in March 2024 and given final approval by the Council on May 21, 2024, the EU AI Act is the world’s first comprehensive AI regulation. 
    • Prohibitions under Article 5 ban systems that exploit subliminal techniques, endanger informed decision-making, or enable harmful surveillance. Violators face significant fines: up to €35 million or 7% of global annual turnover for the most serious breaches, and up to €7.5 million for lesser ones.
  • USA: The US has adopted voluntary compliance models emphasising industry self-regulation.
  • China: China focuses on commercial development and state control. 
  • ASEAN: The ASEAN region promotes business-friendly ethical guidelines for AI governance.

India’s Policy Prerogatives:

  • India is in the early stages of developing a national AI regulatory framework, drawing lessons from global approaches. 
  • Recent efforts include leveraging existing laws like the Information Technology Act and issuing advisories to social media platforms to combat deepfake content. However, reactive measures risk stifling innovation and eroding trust.
  • Indian policymakers are encouraged to adopt globally validated principles of democratic governance, such as transparency, accountability, and citizen rights. 
  • Establishing an AI Safety Institute (AISI) and addressing strategic, tactical, and technical priorities will be crucial:
    • Strategic Goals: Avoid the “developing nation” trap by participating actively in global AI governance and enforcing the regulatory rules it adopts.
    • Tactical Measures: Maintain regulatory flexibility, conduct risk assessments, and balance innovation with accountability.
    • Technical Development: Address complexities in AI, including data ownership, algorithmic transparency, and intellectual property, while fostering harmonised compliance standards.

Public Awareness and Literacy:

  • The rise of deepfakes has created a phenomenon known as the “Liar’s Dividend”: as public scepticism about media authenticity grows, bad actors can dismiss genuine evidence as fake, allowing misinformation to flourish unchecked. 
  • Simple communication strategies, such as Microsoft’s educational primer on deepfakes, and collaborations between media platforms and public figures can help raise awareness. 
  • Universities in India have begun offering courses on digital disinformation, signalling a growing societal response to these challenges.