Intersection of AI and Digital Public Infrastructure in India
Context:
In today’s era of rapid technological advancements, Digital Public Infrastructure (DPI) has emerged as a significant innovation in India, enhancing the efficiency and effectiveness of public service delivery.
More on News
- India highlighted DPI’s potential at the 2023 G20 summit, bringing global attention to its transformative capabilities.
- At the same time, artificial intelligence (AI), powered by tools like ChatGPT, has gained prominence for driving efficiency across sectors such as healthcare, education, and public services.
- Given these advancements, an important question arises: Can AI and DPI work together to achieve shared goals? If so, would their integration amplify risks like data misuse, privacy violations, exclusion, and systemic bias?
Potential and Risks of AI-DPI Integration
- Cases such as incorrect automated cash transfers in Telangana and biased fraud-detection algorithms in the Netherlands illustrate some of these risks.
- While AI can enhance DPI and improve public services, the realisation of these benefits depends on several factors, including data quality, robust safeguards, appropriate AI model selection, and strong accountability mechanisms.
- Despite these challenges, AI and DPI have been working together for some time in public service delivery. Many processes in such systems can be automated using AI.
- For example, cash transfers for pensions or scholarships can be automatically disbursed once beneficiaries meet predetermined eligibility criteria.
- This process requires two essential components: an interoperable platform connected to the relevant databases (a key feature of DPI) and an AI model that assesses eligibility and executes the transfer.
- Automated Decision-Making Systems (ADMSs) facilitate these operations, as sketched below.
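To make this concrete, here is a minimal, hypothetical sketch of how a rule-based ADMS might combine an eligibility check over data drawn from interoperable registries with an automated transfer step. The field names, thresholds, and functions (is_eligible_for_pension, disburse_pension) are illustrative assumptions, not references to any actual system.

```python
from dataclasses import dataclass

# Hypothetical beneficiary record, as it might be assembled from
# interoperable DPI registries (identity, income, pension databases).
@dataclass
class Beneficiary:
    beneficiary_id: str
    age: int
    annual_income: float
    already_receiving_pension: bool

def is_eligible_for_pension(b: Beneficiary) -> bool:
    """Rule-based eligibility check: the 'decision' part of an ADMS."""
    return (
        b.age >= 60
        and b.annual_income <= 100_000      # illustrative income ceiling
        and not b.already_receiving_pension
    )

def disburse_pension(b: Beneficiary, amount: float) -> None:
    """Placeholder for the payment step (e.g., a bank-transfer API call)."""
    print(f"Transferring {amount} to beneficiary {b.beneficiary_id}")

def run_adms(beneficiaries: list[Beneficiary], amount: float = 2_000) -> None:
    """End-to-end automated pipeline: check eligibility, then pay."""
    for b in beneficiaries:
        if is_eligible_for_pension(b):
            disburse_pension(b, amount)

if __name__ == "__main__":
    records = [
        Beneficiary("B001", age=67, annual_income=48_000, already_receiving_pension=False),
        Beneficiary("B002", age=45, annual_income=30_000, already_receiving_pension=False),
    ]
    run_adms(records)   # only B001 meets the illustrative criteria
```

In practice, the eligibility rules and the payment step would sit behind the DPI layer's identity, data-sharing, and payment rails; the sketch only illustrates the separation between the data the platform supplies and the decision logic the ADMS applies.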
Existing AI-DPI Applications in India
- Telangana’s Samagra Vedika system identifies eligible beneficiaries and disburses cash transfers to them, while smart-city missions use similar models to analyse CCTV footage and alert authorities to law-and-order situations.
- Beyond automation, AI models can predict future outcomes.
- Unlike rule-based ADMSs, predictive models rely on self-learning algorithms built through statistical techniques such as machine learning, deep learning, and neural networks.
- These models analyse large datasets to identify patterns, generate insights, and forecast outcomes.
- For instance, they can predict extreme weather conditions, aiding city planning, or analyse traffic congestion to help urban planners design better mobility infrastructure (a simple illustrative forecast follows this list).
- DPI initiatives like the India Urban Data Exchange facilitate such applications by enabling data sharing among city departments, government agencies, private entities, and civil society, improving AI model accuracy and DPI performance in delivering essential public services.
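To illustrate the predictive side, the sketch below fits a standard machine-learning regressor (scikit-learn's RandomForestRegressor) to synthetic hourly traffic counts and forecasts congestion for a given hour and weekday. The data-generating assumptions and feature choices are invented for the example and do not describe any deployed system.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic "traffic sensor" data: hour of day and day of week as
# features, vehicle count as the quantity to predict.
rng = np.random.default_rng(seed=42)
n_samples = 2_000
hour = rng.integers(0, 24, n_samples)
weekday = rng.integers(0, 7, n_samples)
# Congestion peaks around 09:00 and 18:00 on weekdays, plus noise.
vehicle_count = (
    300
    + 400 * np.exp(-((hour - 9) ** 2) / 8)
    + 500 * np.exp(-((hour - 18) ** 2) / 8)
) * (weekday < 5) + 150 * (weekday >= 5) + rng.normal(0, 30, n_samples)

X = np.column_stack([hour, weekday])
y = vehicle_count
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A self-learning (statistical) model: it infers the rush-hour pattern
# from data rather than from hand-written rules.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
# Forecast congestion for 18:00 on a Monday (hour=18, weekday=0).
print("Predicted vehicles at 18:00 Monday:", int(model.predict([[18, 0]])[0]))
```

The same pattern, richer data shared through an exchange such as the India Urban Data Exchange feeding a statistical model, is what underlies the congestion and weather forecasts described above.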
Challenges in AI-DPI Integration
- Ethical and inclusive use of these technologies requires robust safeguards and accountability mechanisms.
- Both AI and DPI rely heavily on data quality and volume.
- DPI systems enable data sharing across disparate databases, and AI models are trained on this data.
- If these databases are incomplete, biased, unrepresentative, outdated, or contain errors, the models built on them can produce harmful and unintended outcomes (a simple data-audit sketch follows this list).
- For example, in Telangana, an algorithm incorrectly identified a deceased rickshaw puller as a motor vehicle owner, resulting in his widow being excluded from welfare benefits.
- In smart cities such as Chandigarh, Nagpur, and Indore, maintenance staff must wear trackers that double as surveillance tools, with pay automatically deducted if workers deviate from the schedules or routes assigned by an ADMS.
- Globally, similar challenges exist.
- In the Netherlands, a self-learning algorithm used for a childcare benefits program disproportionately targeted ethnic and racial minorities, flagging non-Dutch nationals as high-risk.
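One common safeguard against the data problems described above is a pre-training audit of the shared registries. The sketch below is a hypothetical example of such a check, flagging columns with many missing values, stale records, and underrepresented groups before any model is trained; the field names and thresholds are assumptions chosen for illustration.

```python
import pandas as pd

def audit_registry(df: pd.DataFrame, group_col: str, updated_col: str,
                   max_missing_frac: float = 0.05,
                   min_group_frac: float = 0.10,
                   max_staleness_days: int = 365) -> list[str]:
    """Return human-readable warnings about basic data-quality problems."""
    warnings = []

    # 1. Completeness: columns with too many missing values.
    for col, frac in df.isna().mean().items():
        if frac > max_missing_frac:
            warnings.append(f"Column '{col}' is {frac:.0%} missing")

    # 2. Freshness: records not updated within the staleness window.
    age_days = (pd.Timestamp.now() - pd.to_datetime(df[updated_col])).dt.days
    stale_frac = (age_days > max_staleness_days).mean()
    if stale_frac > 0:
        warnings.append(f"{stale_frac:.0%} of records older than {max_staleness_days} days")

    # 3. Representation: groups that are underrepresented in the data.
    for group, share in df[group_col].value_counts(normalize=True).items():
        if share < min_group_frac:
            warnings.append(f"Group '{group}' makes up only {share:.0%} of records")

    return warnings

# Illustrative usage with a tiny synthetic registry extract.
df = pd.DataFrame({
    "income": [40_000, None, 55_000, 62_000, None, 48_000],
    "district": ["A", "A", "A", "A", "A", "B"],
    "last_updated": ["2024-06-01"] * 5 + ["2019-01-01"],
})
for w in audit_registry(df, group_col="district", updated_col="last_updated",
                        min_group_frac=0.20):
    print("WARNING:", w)
```

Checks like these do not remove bias on their own, but they surface the kinds of gaps and skews behind cases like those above before a model is trained and deployed.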
Limitations of Predictive AI
- While traffic forecasts and weather predictions are relatively accurate, predicting life outcomes is much more complex.
- AI cannot account for random shocks, human agency, cultural values, or luck.
- Nevertheless, predictive models have been used in sectors such as healthcare, criminal justice, and education in the United States to determine who should receive better healthcare, who should be released from jail, or who is at risk of dropping out of school.
- Reports suggest similar models have been employed in India for policing, such as the Crime Mapping, Analytics and Predictive System (CMAPS) used by the Delhi Police to identify potential crime hotspots.
- However, studies indicate that these models often fall short, precisely because life outcomes hinge on factors, such as chance and individual agency, that data alone cannot capture.
In summary, AI-DPI integration requires careful consideration of multiple factors: the quality and quantity of available data, the risk of unintended outcomes, and the limitations of predictive AI. While the synergy between AI and DPI holds great promise, its adoption should be approached with due caution to ensure ethical, effective, and inclusive outcomes.