Are Humans Still Central? Surveillance Capitalism and AI in India

By Rahul Ramya

Until the year 2000, before Google, Facebook, Apple, Microsoft, and other platform empires rose to dominance, humans remained the pivot of all social design. Markets, public institutions, and political systems — even at their worst — revolved around human deliberation, moral accountability, and collective reasoning. Technology served as an amplifier of human capability, not a substitute for it. The value of progress was measured by its contribution to human flourishing.

That moral geometry has been overturned. Platforms no longer simply serve; they observe, infer, predict, and steer. The architecture of the public sphere has been refashioned into an apparatus of extraction. Every click, pause, and gesture is recorded, transformed into data, and traded. People are no longer primarily citizens or moral agents; they are streams of behavioral information—predictable, manageable, and replaceable. Conscience has been surrendered to code; judgment outsourced to algorithms.


A Digital Gold Rush: Mining Human Experience

Shoshana Zuboff calls this order surveillance capitalism: an unprecedented system that claims human experience as free raw material for translation into behavioral data. It is, in essence, a digital gold rush — but the mines lie within human life itself.
Our attention, emotion, trust, and fatigue are extracted, refined into predictive insights, and monetized for power and profit.

In this economy, experience becomes commodity, and prediction becomes profit. The platform is no longer a neutral stage for interaction; it has become the playwright that scripts what we think, desire, and choose. Prediction replaces participation, and efficiency eclipses empathy. The world is no longer human-centered; it has become algorithm-centered.


Surveillance Capitalism: The New Economic Order

Surveillance capitalism thrives by transforming private human life into predictive commodities. The Cambridge Analytica scandal (2018) made this architecture visible: a British data firm harvested the personal information of roughly 87 million Facebook users without consent and deployed it to influence elections in the United States, the United Kingdom's Brexit referendum, and, by several accounts, India's 2014 and 2019 general elections. Voter behavior was not just observed; it was engineered through micro-targeted manipulation using psychological and caste-based segmentation.

Such algorithmic governance of belief systems converts democracy into a psychological marketplace, where the citizen’s mind becomes the terrain of profit.

Global responses like the EU’s General Data Protection Regulation (GDPR) have tried to restore some notion of human sovereignty over data. India’s Digital Personal Data Protection Act (2023) marks a tentative beginning — but it still lacks teeth: no independent regulator, no mandatory algorithmic audits, and minimal citizen redressal.


India: Laboratory and Battleground

India’s digital transformation stands at a paradoxical intersection. With its massive population and deep social hierarchies, the country is simultaneously a laboratory for inclusion and a battleground of exclusion.
Surveillance capitalism here magnifies pre-existing inequalities — of caste, class, gender, and language — embedding them invisibly into machine systems that lack moral or constitutional sensitivity.

1. Exclusion from Welfare: When Efficiency Becomes Deadly

Aadhaar was conceived as a mechanism to deliver welfare efficiently. But the same system has often turned bureaucratic error into human tragedy.

Case: Santoshi Kumari, Jharkhand (2017) — An 11-year-old girl in Simdega district died pleading for rice after her family’s ration card was cancelled due to Aadhaar-linking failure. Reports documented over 25 similar starvation deaths across India. The algorithm refused recognition, and the local administration refused discretion.
When governance forgets empathy, efficiency becomes cruelty, and procedure replaces pity.

2. Manipulating Voting Behavior: Markets Enter the Polling Booth

Data-driven consultancies now use AI-based sentiment analysis and caste segmentation to influence electoral outcomes. India's 2019 general elections saw a proliferation of digital micro-targeting in which different communities received entirely different narratives, based on algorithmic predictions of their fears and hopes.

Democracy, once an arena of shared debate, risks becoming an individualized theatre of manipulation — a democracy of managed perception.

The logic of micro-targeting does not remain confined to elections. Once the human being is reduced to a cluster of data attributes, the same logic travels into welfare, work, and finance. The political manipulation of identity merges seamlessly with the economic programming of opportunity.

3. Caste and AI Biases: Prejudice Made Programmable

Algorithms for recruitment, credit scoring, and social media visibility often replicate social prejudices. A Centre for Internet and Society (2022) study found that applicants with lower-caste surnames were 40% less likely to receive callbacks from automated hiring systems. Similarly, digital lending apps penalize users without strong digital footprints — typically rural Dalits, Adivasis, and women — classifying them as “high-risk.”
Under the façade of neutrality, AI reproduces caste-coded exclusions that India’s Constitution sought to abolish.
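The mechanism is mundane rather than malicious. A model trained on past hiring decisions learns whatever pattern those decisions contain. A deliberately simplified Python sketch (all groups, data, and numbers invented for illustration, not drawn from any real system) shows how historical bias becomes a learned rule:

```python
# Hypothetical illustration: a "neutral" screening model trained on
# biased historical callback data learns to penalize a surname group.

from collections import defaultdict

# Invented training data: (surname_group, got_callback)
history = [
    ("dominant", True), ("dominant", True), ("dominant", True),
    ("dominant", False),
    ("marginalized", True), ("marginalized", False),
    ("marginalized", False), ("marginalized", False),
]

# "Training": compute the historical callback rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [callbacks, total]
for group, callback in history:
    counts[group][0] += int(callback)
    counts[group][1] += 1

def predicted_callback_rate(group):
    callbacks, total = counts[group]
    return callbacks / total

def screen(candidate_group, threshold=0.5):
    # The model never sees caste explicitly; it sees only a proxy
    # (here, a surname group), yet reproduces the historical gap.
    return predicted_callback_rate(candidate_group) >= threshold

print(screen("dominant"))      # passes the screen
print(screen("marginalized"))  # filtered out
```

Nothing in the code mentions caste, yet the output is caste-shaped: the proxy variable carries the prejudice of the data it was trained on.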

4. Deepfakes and Gendered Violence

AI’s capacity for image manipulation has birthed new forms of gendered harm.

Case: Delhi Deepfake Incident (2023) — A morphed video of a Delhi college student went viral, produced using open-source AI tools. The outrage was national, but the accountability was nil. Platforms claimed helplessness before “automated moderation.” Similar attacks have targeted journalists, activists, and politicians like Mahua Moitra.

According to the BBC (2023), deepfake incidents worldwide rose by over 400%. In India, they operate as a new weapon of patriarchal humiliation — technology disrobing women while platforms remain profitably indifferent.

5. Credit and Opportunity: Algorithmic Poverty

Fintech platforms use AI to assess creditworthiness through digital transaction histories and mobile behavior patterns. Those who are digitally invisible — informal workers, rural women, or migrants — are automatically rated “high risk.”
The NITI Aayog Working Paper on Responsible AI (2022) acknowledged such bias, yet private fintech algorithms remain opaque. Poverty becomes algorithmically self-perpetuating.
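The self-perpetuating loop can be made concrete. In this hypothetical sketch (field names and thresholds are invented, not taken from any actual fintech product), the scorer treats a thin digital file itself as risk, so those already excluded from digital finance are kept excluded:

```python
# Hypothetical sketch: a credit scorer that equates a sparse digital
# footprint with risk, locking the digitally invisible out of credit.

def credit_band(profile):
    """Return a risk band from a (hypothetical) digital profile."""
    txn_count = profile.get("upi_transactions_12m", 0)
    smartphone = profile.get("smartphone", False)
    # Thin file: the absence of data is itself read as bad data.
    if txn_count < 50 or not smartphone:
        return "high-risk"
    return "standard"

migrant_worker = {"upi_transactions_12m": 3, "smartphone": False}
urban_salaried = {"upi_transactions_12m": 420, "smartphone": True}

print(credit_band(migrant_worker))  # high-risk: denied credit, stays invisible
print(credit_band(urban_salaried))  # standard
```

The rule looks prudent in isolation; at scale, it converts exclusion from the digital economy into exclusion from credit, which in turn prevents the footprint from ever forming.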

6. Crises and the Digital Divide

During the COVID-19 pandemic, platforms like CoWIN required smartphone access and English literacy. Millions of migrant workers and rural citizens were unable to register or access vaccination slots.
In moments that demand solidarity, technology often amplifies separation.


The Infrastructure Paradox: India Stack and UPI

India’s digital public infrastructure — including India Stack and the Unified Payments Interface (UPI) — is globally celebrated for enabling inclusion. Millions now access banking, subsidies, and identification seamlessly.

Yet these systems reveal an infrastructure paradox. Inclusion at scale often conceals exclusion in design. Interfaces that operate predominantly in English or Hindi marginalize non-literate and non-dominant language users. For a Tamil-speaking worker in Kerala or a Gondi-speaking villager in Chhattisgarh, the celebrated architecture of “digital empowerment” often becomes an alien script.

Without linguistic and cultural empathy, inclusion remains statistical, not substantive. The result is a digital caste system—engineered by design, justified by data.


The New Captivity of Human Will

Machines Optimize; Humans Empathize

AI-driven systems have not only displaced humans from governance — they have abducted human volition itself. Algorithms now predict what we will desire, recommend what we will consume, and even shape what we believe.

This invisible conditioning erodes autonomy. We begin to act not out of reflection, but out of algorithmic suggestion. The illusion of choice replaces freedom itself.

Machines optimize; humans empathize. Machines calculate probabilities, but cannot feel hunger or sorrow. They refine precision, but lack remorse. When governance forgets this difference, dignity dies quietly.


The Triangle of Dignity, Data, and Democracy

To hold the moral architecture of the digital age, we need a framework — a simple visual compass that keeps human values at the center:

          [ DIGNITY ]
           /       \
          /         \
    [ DATA ]---[ DEMOCRACY ]
  • If Data grows without accountability, it erodes Dignity.

  • If Democracy yields to algorithmic opacity, it loses legitimacy.

  • Only when Dignity governs the relationship between Data and Democracy can technology remain truly humane.

This triangle must guide digital policymaking: every platform, every public AI deployment, must pass the Dignity Test — Does it preserve, violate, or ignore the human?


When Democracies Outsource Their Judgment

Governments increasingly outsource public judgment to private algorithms — from welfare verification to predictive policing. The result is not just administrative efficiency but constitutional abdication.

Article 21 of the Indian Constitution guarantees the right to life and dignity. When food is denied due to biometric mismatch, or when a woman’s image is digitally violated without remedy, it is not a technical lapse but a constitutional betrayal.

Across the world, similar betrayals echo: predictive policing targeting Black communities in the US, welfare exclusion of the poor in Brazil and South Africa, algorithmic overreach in Southeast Asia.
When democracies surrender discretion to machines, justice becomes a statistical illusion.


Resistance and Reclamation

Yet resistance is growing. The Internet Freedom Foundation (IFF) has challenged Aadhaar’s overreach in court. Kerala’s Kudumbashree initiative has expanded digital literacy among women. Activists, journalists, and technologists are demanding algorithmic audits and data transparency.

To reclaim the moral center, India needs decisive reforms:

  • Algorithmic transparency and public audit of Aadhaar and welfare systems.

  • A National AI Ethics Board with legal authority.

  • Mandatory human review for all automated welfare denials.

  • Multi-lingual inclusion mandates across India Stack and UPI.

  • Strict deepfake legislation ensuring swift redress for victims.

  • National digital literacy campaigns grounded in constitutional rights.

These are not technical fixes; they are democratic necessities. They restore the triangle of Dignity–Data–Democracy to balance.


Conclusion: Re-centering the Human

The question "Are humans still central?" must be answered with honesty: not anymore, not unless we act.

Surveillance capitalism has displaced conscience with code, empathy with efficiency. The poor die from algorithmic exclusion; women are violated by deepfakes; citizens are manipulated by invisible data engines.
But technology is not destiny.

We can reclaim the center — if we insist on the moral primacy of the human being. The future must belong to ethical intelligence, not artificial intelligence. The task is to ensure that technology once again becomes a servant of humanity, not its master.

If we fail, we will remain digitally visible but morally invisible—counted, tracked, analyzed, yet stripped of our sovereignty and soul.
If we succeed, we will restore to the digital age its lost compass: the enduring truth that machines may process data, but only humans can defend dignity.


References

  1. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.

  2. The Guardian and The New York Times. Reports on the Cambridge Analytica scandal, 2018.

  3. European Union. General Data Protection Regulation (GDPR). Official Journal of the European Union, 2016.

  4. Government of India. Digital Personal Data Protection Act, 2023.

  5. Centre for Internet and Society (CIS). AI and Caste Bias in Automated Decision-Making Systems in India. Report, 2022.

  6. BBC. “Deepfake Abuse Rising 400% Worldwide.” BBC News, October 2023.

  7. NITI Aayog. Working Paper on Responsible AI for All, 2022.

  8. Human Rights Watch. “India: Ration Exclusions and Aadhaar-linked Starvation Deaths.” HRW Report, 2019.

  9. Internet Freedom Foundation (IFF). Petitions on Digital Rights and Data Protection in India, 2021–2024.

  10. Kudumbashree Mission, Government of Kerala. Digital Literacy and Gender Empowerment Report, 2022.

  11. The Constitution of India, Article 21 — Right to Life and Personal Liberty.

  12. UNESCO. Ethical Framework for Artificial Intelligence, 2021.


