For years, the internet was celebrated as the world’s most democratic invention: a global town square where every citizen could speak, organise and influence. That optimism has collapsed. Today, the battle for free speech is no longer about who may speak; it is about who gets to be heard. And that decision lies not with courts or parliaments, but with private algorithms operating inside the opaque rooms of global platforms.
The first truth of the digital age is stark: free speech is not the same as free reach. A post may survive online but be silently throttled, demoted or buried. A video may not be taken down but may become invisible. Visibility, not censorship, is now the real instrument of power. Every citizen today speaks into a system that constantly decides, through code, whether their words deserve oxygen.
This immense influence rests with platforms that still claim neutrality. They are not neutral. Their moderation systems, once designed to fight spam, now function as ideological filters. Three forces define this new landscape: moderation, manipulation and the collapse of neutrality.
Moderation has evolved from a technical function into a political one. Platforms deploy reactive moderation (after complaints), proactive moderation (AI scanning every post before it appears), and algorithmic moderation (ranking, burying or boosting content). One minor policy change can wipe out a creator’s livelihood or distort a public conversation. Most users have no idea how engineered their worldview has become.
Manipulation is the second force, and perhaps the more dangerous one. Outrage is profitable. Conflict retains attention. Anger drives engagement. The feed is not a reflection of public sentiment; it is a persuasion machine optimised for platform economics. Governments, political parties and interest groups exploit this architecture to push narratives, suppress criticism or influence elections. Meanwhile, AI-personalised feeds create customised political realities for every citizen.
This leads to the third force: the death of platform neutrality. Big Tech companies can no longer pretend they are mere conduits of information. They are now the world’s most powerful editors, but editors without accountability, transparency or a public mandate.
India is at the centre of this global battle. The Intermediary Rules (2021/2023) require platforms to comply swiftly with takedown orders and enable message traceability. The Digital Personal Data Protection Act creates a new privacy regime but grants wide exemptions to the state. Deepfakes are exploding, especially during elections, and India still lacks a dedicated law to combat them.
Meanwhile, courts are becoming digital referees, hearing petitions on takedowns, account suspensions and online speech almost weekly. India’s decisions will shape global norms, especially as other democracies watch how the world’s largest digital public sphere balances rights, regulation and platform power.
Internationally, the picture is fractured. The United States protects speech aggressively; the European Union enforces strict content moderation; China maintains complete state control; and India is moving towards a hybrid regulatory model. The world is no longer governed by one free speech norm: it is now a patchwork.
The new battle for free speech is being fought inside recommendation engines, not rallies. Democracies must confront a hard truth: the public square now belongs to private companies, whose incentives are commercial, not constitutional.
If democratic expression is to survive meaningfully in the digital age, the world needs a new social contract: one built on algorithmic transparency, platform accountability and empowered citizens. Until that happens, algorithms will decide the truth, and democracies will bear the consequences.
Himanshu Shekhar, Group Editor, U.P. & Uttarakhand, Dainik Bhaskar

