India’s AI ambition cannot be built on women’s digital insecurity

India is in an AI surge moment, and this time it is not just hype. The India AI Impact Summit in New Delhi (16–20 February 2026) has already crossed 35,000 registrations, with participation expected from more than 100 countries. Official messaging frames it as the largest of the global AI summits held so far.

The momentum is real. The IndiaAI Mission was approved in 2024 with a Rs 10,371.92 crore outlay. Compute capacity has moved far beyond the original 10,000-GPU benchmark: it crossed 34,000 GPUs in 2025, and later government updates reported more than 38,000 GPUs onboarded, with subsidized access at around Rs 65 per GPU hour. That is not symbolic policy; that is hard infrastructure.

Market signals reinforce that story. OpenAI has publicly identified India as its second-largest market, with usage tripling year-on-year in 2024. India is also funding domestic model-building, including BharatGen and multilingual AI efforts.

This should be a national confidence story.

But there is a brutal contradiction inside it: when AI scales faster than safeguards, women become the first testing ground for failure.

Deepfakes are no longer theoretical. During the 2024 election cycle, fake AI-generated videos of Aamir Khan and Ranveer Singh spread rapidly, triggering police complaints and emergency fact-checking. The problem was not merely content creation; it was distribution velocity outrunning verification velocity.

And this is not just political theatre. Exchanges and market institutions have had to warn investors against fake videos and impersonation-led scams. Deepfakes are now an electoral risk, a financial risk, and a public-trust risk at the same time.

Public exposure is already high. A widely cited 2024 survey found roughly three in four Indians had encountered deepfake content in the previous year, with political deepfakes increasingly visible. Even treating survey numbers cautiously, the direction is unmistakable: synthetic deception now operates at mass scale.

Now layer that onto India’s existing violence baseline. NCRB-linked official data cited in Parliament and government communication shows crimes against women rising from 4,28,278 (2021) to 4,45,256 (2022), with the crime rate per lakh women also climbing. Synthetic media does not replace this violence; it compounds it.

What follows is a new form of harm: epistemic violence.

“Did this happen?” becomes a weapon.
“Can you prove it wasn’t you?” becomes the burden.

And that burden is not abstract. It can destroy employment prospects, relationships, political candidacies, and mental stability long before any FIR, cyber cell, or court catches up.

Policy has moved. Protection still hasn’t

To be fair, India has not been passive. The government has taken several concrete steps.

After public concern over deepfakes escalated in late 2023, MeitY issued advisories in November and December 2023 reiterating intermediaries’ obligations under the existing IT Rules, including the duty to act on unlawful content rapidly.

Then came the stronger move: draft 2025 amendments proposing an explicit legal category for “synthetically generated information,” plus mandatory labeling thresholds (a visible label covering at least 10 percent of the display area for visual content, and a disclosure covering at least the first 10 percent of duration for audio), provenance expectations, and due-diligence duties.

Election governance also began adapting. The Election Commission directed campaign-related synthetic content to carry clear disclosures and labels, and laid down fast-response expectations for harmful manipulated media.

So yes, the legal vocabulary is improving.

But social harm still moves faster than institutional response.

If a fake clip hits family WhatsApp groups at 11:40 p.m., most victims still face the same maze: platform report forms, police uncertainty, fragmented jurisdiction, slow evidence handling, and reputational damage happening in real time.

That gap between harm-time and response-time is India’s real AI governance problem.

Five shifts India now needs

If India genuinely wants to lead on trustworthy AI, not just competitive AI, it must move beyond bureaucratic patchwork. Here is what real protection looks like:

1. Provenance by default, not by apology

Every AI-generated public-facing image, audio, or video should carry persistent metadata plus visible user-facing labels at both creation and distribution layers. This cannot be optional, buried in terms of service, or subject to platform discretion. The October 2025 amendments are a start, but implementation timelines, verification standards, and enforcement mechanisms need clarity.
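To make that concrete, here is a minimal sketch in Python of how a platform might check a label against the draft’s 10 percent thresholds. Every name and field below is a hypothetical illustration, not an official schema or any regulator’s reference implementation.

```python
# Hypothetical illustration of the draft 2025 labeling thresholds:
# a visible label covering at least 10% of a visual's display area,
# and a disclosure spanning at least the first 10% of an audio clip.
# All names here are assumptions for the sketch, not an official schema.

from dataclasses import dataclass

THRESHOLD = 0.10  # 10 percent, per the draft amendments


@dataclass
class VisualLabel:
    label_width_px: int    # rendered label width
    label_height_px: int   # rendered label height
    frame_width_px: int    # underlying image/video frame width
    frame_height_px: int   # underlying image/video frame height


@dataclass
class AudioDisclosure:
    disclosure_seconds: float  # length of the audible disclosure
    clip_seconds: float        # total clip duration


def visual_label_compliant(label: VisualLabel) -> bool:
    """Does the label cover at least 10% of the frame's display area?"""
    label_area = label.label_width_px * label.label_height_px
    frame_area = label.frame_width_px * label.frame_height_px
    return frame_area > 0 and label_area / frame_area >= THRESHOLD


def audio_disclosure_compliant(d: AudioDisclosure) -> bool:
    """Does the disclosure span at least 10% of the clip's duration?"""
    return d.clip_seconds > 0 and d.disclosure_seconds / d.clip_seconds >= THRESHOLD


# A 640x200 label on a 1920x1080 frame covers only ~6.2% of the area,
# so it fails the draft threshold; 3 seconds on a 30-second clip passes.
print(visual_label_compliant(VisualLabel(640, 200, 1920, 1080)))   # False
print(audio_disclosure_compliant(AudioDisclosure(3.0, 30.0)))      # True
```

The thresholds themselves are simple arithmetic; the hard part is mandating who runs this check, at which layer, and what happens on failure.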

2. A women-first emergency response lane

Create a 24×7 high-priority reporting rail across platforms and cybercrime channels specifically for non-consensual synthetic sexual content and impersonation. Time matters more than perfect paperwork. Currently, victims navigate bureaucratic pinball between platforms, police, and courts. A unified emergency protocol, modeled on emergency services, could dramatically reduce response time and secondary trauma.
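What could that rail look like in practice? A minimal sketch, with illustrative field names and an assumed triage rule rather than any existing platform’s or cybercrime portal’s API:

```python
# Hypothetical sketch of a unified, priority-triaged report. Categories,
# field names, and the triage rule are illustrative assumptions, not any
# existing platform's or cybercrime portal's schema.

import json
from datetime import datetime, timezone

PRIORITY_CATEGORIES = {
    "non_consensual_synthetic_sexual_content",
    "synthetic_impersonation",
}


def build_report(category: str, content_url: str, reporter_contact: str) -> dict:
    """Assemble one report that every downstream system can consume."""
    return {
        "category": category,
        "content_url": content_url,
        "reporter_contact": reporter_contact,
        "reported_at_utc": datetime.now(timezone.utc).isoformat(),
        # Emergency categories skip the standard review queue entirely.
        "priority": "emergency" if category in PRIORITY_CATEGORIES else "standard",
    }


report = build_report(
    category="non_consensual_synthetic_sexual_content",
    content_url="https://example.com/clip/123",   # placeholder URL
    reporter_contact="helpline@example.org",      # placeholder contact
)
print(json.dumps(report, indent=2))
```

The design point: the category of harm, not the victim’s persistence, determines how fast the system moves.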

3. One harmonized compliance spine

Align MeitY rules, Election Commission protocols, platform trust-and-safety norms, CERT-In advisories, and law enforcement workflows into one interoperable standard. Right now, different agencies issue overlapping but non-identical requirements. Platforms face compliance confusion; victims face jurisdiction shopping. A single, clear framework would benefit everyone except bad actors.

4. Burden shift in evidence and verification

In severe synthetic abuse cases, the system should not make victims perform digital forensics to be believed. Require platforms and model providers to preserve logs, provenance trails, and rapid-response documentation. Shift the burden of proof from “prove this was fake” to “prove you took reasonable care to prevent and respond.”

Currently, a woman reporting a deepfake must:
(a) prove it is synthetic,
(b) identify who created it,
(c) find which platform it originated from,
(d) navigate takedown processes across multiple platforms,
(e) file FIRs with police who may lack technical capacity, and
(f) wait for judicial proceedings.

This is structurally designed to exhaust victims.
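What would “reasonable care” look like mechanically? A minimal sketch, under assumed obligations and with illustrative names only, of the evidence-preservation record a platform could be required to generate the moment a report arrives:

```python
# Hypothetical sketch of an evidence-preservation record: hash the content,
# timestamp the report, and freeze whatever provenance trail exists before
# anything is taken down or deleted. All names are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone
from typing import Optional


def preservation_record(content_bytes: bytes, source_url: str,
                        provenance_manifest: Optional[dict]) -> dict:
    """Freeze the evidence at report time, before takedown."""
    return {
        # The content hash later proves the preserved copy is unaltered.
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
        "source_url": source_url,
        "preserved_at_utc": datetime.now(timezone.utc).isoformat(),
        # e.g. C2PA-style creation metadata, if the file carried any.
        "provenance_manifest": provenance_manifest,
    }


record = preservation_record(
    content_bytes=b"...fetched media bytes...",   # placeholder
    source_url="https://example.com/clip/123",    # placeholder
    provenance_manifest=None,                     # often absent today
)
print(json.dumps(record, indent=2))
```

None of this is technically exotic. What is missing is the obligation to do it automatically, at report time, so the victim never has to.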

5. Public literacy with legal literacy

“Don’t trust everything online” is not enough. People need a practical protocol: verify source, check context, preserve evidence, report on the correct channel, and access legal aid quickly. This requires integration with Bhashini (India’s multilingual AI platform) to deliver accessible, vernacular guidance on identifying and responding to synthetic media.

Why this matters beyond India

Globally, misinformation and disinformation rank among the top short-term risks according to the World Economic Forum’s Global Risks Report 2024. AI-related harms are moving sharply up long-term risk rankings. The EU’s AI Act includes transparency obligations for AI-generated content, with key provisions coming into force in 2026.

But India has a unique opportunity.

The India AI Impact Summit can become more than a diplomatic showcase or vendor exhibition. It can be the place where India defines a Global South doctrine of AI legitimacy: scale plus dignity, innovation plus accountability, compute plus consent.

Because if a woman can be socially erased by a fake before breakfast, your AI ecosystem is not world-class. It is just well-funded chaos.

The real test of leadership

India’s AI story is compelling: affordable compute, indigenous models, multilingual capabilities, massive scale.

But the true test of India’s AI leadership is not how many models we host, how many startups we incubate, or how many GPUs we deploy.

It is how quickly and fairly we can restore truth when someone’s life is being rewritten by synthetic lies.

That is the future-defining metric. That is what separates AI leadership from AI opportunism.

As the India AI Impact Summit convenes, the world will be watching not just India’s technological prowess, but its moral clarity. The question is not whether India can build AI. The question is whether India will build AI that protects everyone, or just AI that scales.

The answer will define whether India’s AI decade is remembered as a moment of transformation, or just another wave of technological colonialism where women’s bodies and reputations become the testing ground for innovation without accountability.

India has the technical capacity, the institutional will, and the democratic values to get this right.

The Summit is the moment to prove it — not with draft amendments and advisories, but with functioning, accessible, victim-centered infrastructure.

Anything less is not AI leadership.

It is just AI theatre.





Disclaimer

Views expressed above are the author’s own.


