The Bot War on X

By Matthew Parish, Associate Editor
Wednesday 8 April 2026
Recent disclosures by the social media platform X (formerly Twitter) have cast a revealing light upon the scale of the struggle between authentic users and automated manipulation. According to evidence given to the British Parliament, the company suspended roughly 800 million accounts over a twelve-month period for violations involving spam, automation or coordinated manipulation. Investigators described a “massive scale of attempts to manipulate the platform”, with Russia identified as the most prolific state actor, followed by Iran and China.
The sheer magnitude of this number is striking. It exceeds the number of genuine regular users commonly attributed to the service, often estimated at several hundred million. Such figures raise an obvious question: how can a social media platform accumulate so many fraudulent accounts, and why do hostile actors find it worthwhile to create them in such vast quantities?
The answer lies in the fundamental architecture of social media platforms and in the political and economic incentives that surround them.
Why X Attracts Spam Accounts
Social media networks such as X are structured around visibility. The platform rewards engagement with prominence: posts that are liked, reposted or replied to frequently become more visible in users’ feeds. This dynamic creates a powerful incentive to simulate popularity artificially. A network of automated accounts can inflate the visibility of particular messages, websites or commercial products, thereby manipulating the platform’s algorithmic priorities.
Historically, spam on social media was primarily commercial. Automated accounts promoted cryptocurrency scams, dubious financial products or fraudulent links intended to harvest personal information. These activities remain common. However the modern bot network has expanded far beyond crude advertising.
Political influence operations now form a substantial component of automated activity. Academic research has shown that bots may represent a significant proportion of participants in certain political discussions online, sometimes accounting for between roughly fifteen and forty per cent of accounts in specific debates. These accounts amplify narratives, promote hashtags and simulate grassroots opinion. A sufficiently large bot network can create the illusion of consensus or outrage where none actually exists.
State actors have learned that such campaigns are remarkably inexpensive compared with traditional propaganda or intelligence operations. A bot network capable of reaching millions of people can be created with modest technical expertise and little financial outlay. In the context of geopolitical competition, this represents an attractive tool.
Technological Changes That Fuel the Bot Explosion
Two recent developments have accelerated the proliferation of spam accounts.
The first is the increasing availability of automated account-creation tools. Historically, establishing a convincing bot network required labour: thousands of accounts needed profile photographs, biographies and plausible patterns of behaviour. Today these tasks can be automated using simple scripts and generative artificial intelligence. A program can now generate realistic profile pictures, write plausible messages and simulate human conversational patterns.
The second development is the growing role of generative language models themselves. A bot no longer needs to repeat identical messages. Modern systems can produce endless variations of a theme, making automated accounts appear far more human. This dramatically complicates detection.
Researchers studying bot behaviour have noted consistent linguistic differences between automated and human accounts, such as repetitive hashtags and rigid message structures. Yet the gap is narrowing as artificial intelligence improves.
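A brief illustration may make the point concrete. The following Python sketch measures how heavily an account leans upon a single hashtag; the function and the sample posts are invented for the example, and a genuine detector would weigh many such linguistic features together.

```python
import re
from collections import Counter

def hashtag_repetition(posts: list[str]) -> float:
    """Fraction of all hashtag uses taken up by the single most
    repeated tag; a value near 1.0 means the account hammers one tag."""
    tags = [t.lower() for p in posts for t in re.findall(r"#\w+", p)]
    if not tags:
        return 0.0
    return Counter(tags).most_common(1)[0][1] / len(tags)

# Fabricated posts in the rigid, repetitive style researchers describe.
bot_posts = ["Buy now #crypto", "Moon soon #crypto", "Last chance #crypto"]
print(hashtag_repetition(bot_posts))  # 1.0: every hashtag is the same
```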
The Political Origins of Bot Networks
Although commercial spam remains widespread, geopolitical manipulation has become increasingly significant. Intelligence agencies and influence operations now regard social media as a theatre of information warfare.
Recent reporting indicates that coordinated manipulation attempts on X have been linked to Russian, Iranian and Chinese actors, often operating through large networks of automated accounts. These networks may promote divisive political narratives, spread misleading information or amplify conspiracy theories. Their objective is rarely persuasion in the classical sense. Instead it is disruption. By flooding a platform with contradictory or inflammatory content, adversaries hope to weaken public trust in institutions and democratic debate.
This technique mirrors earlier propaganda strategies but operates at unprecedented scale and speed.
For a country engaged in active geopolitical competition, the investment required to operate a bot network is trivial compared with the potential strategic benefits. A small team of programmers can produce hundreds of thousands of automated accounts, each capable of interacting with millions of users.
The Structural Vulnerability of Open Platforms
X is particularly vulnerable to such manipulation because of its design philosophy. Unlike many other social networks, the platform historically permitted automated accounts to interact freely with human users through open programming interfaces. This openness helped create useful tools such as automated news feeds, weather alerts and customer-service bots. However it also allowed malicious actors to exploit the same infrastructure.
The platform’s emphasis on real-time public discussion further increases its attractiveness to manipulators. Trending topics and viral conversations can influence journalists, policymakers and investors. Consequently even modest bot activity can produce disproportionate political effects.
This vulnerability is not unique to X. Any large open network faces the same dilemma: the very features that make the platform dynamic also make it susceptible to exploitation.
Methods for Detecting Automated Accounts
Social media companies have developed a variety of techniques to identify suspicious accounts.
One approach relies upon behavioural analysis. Automated accounts often exhibit patterns that differ subtly from human activity. They may post at perfectly regular intervals, follow thousands of accounts within minutes of creation or repost identical content in unison with other accounts. These behavioural signatures can be detected through statistical analysis.
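One such signature, the regularity of posting intervals, can be computed in a few lines. The following Python sketch is illustrative only: the threshold is invented, and a real system would combine many signals before flagging an account.

```python
import statistics

def interval_regularity(post_timestamps: list[float]) -> float:
    """Coefficient of variation of the gaps between posts.

    Human posting gaps vary widely; near-zero variation suggests
    a scheduler rather than a person.
    """
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    if len(gaps) < 2:
        return float("inf")  # too little data to judge
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Fabricated account posting exactly every ten minutes (in seconds).
timestamps = [0, 600, 1200, 1800, 2400]
if interval_regularity(timestamps) < 0.1:  # illustrative threshold
    print("suspiciously regular posting pattern")
```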
Another method uses network mapping. Bot networks frequently interact with one another in tightly connected clusters. By analysing the structure of social interactions, investigators can identify groups of accounts that behave as a coordinated unit rather than as independent individuals.
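The idea can be demonstrated with networkx, an open-source Python library for graph analysis. The accounts, interactions and density cutoff below are invented for the example; real investigations operate on graphs with millions of nodes.

```python
import networkx as nx

# Nodes are accounts; edges are observed retweet/reply interactions.
G = nx.Graph()
G.add_edges_from([
    ("bot_a", "bot_b"), ("bot_a", "bot_c"), ("bot_b", "bot_c"),  # dense cluster
    ("alice", "bob"),                                            # sparse human tie
])

# Groups whose internal connections are unusually dense behave more
# like a coordinated unit than like independent individuals.
for nodes in nx.connected_components(G):
    sub = G.subgraph(nodes)
    if len(nodes) >= 3 and nx.density(sub) > 0.8:  # illustrative cutoff
        print("possible coordinated cluster:", sorted(nodes))
```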
Machine-learning algorithms now play a central role in this process. Deep-learning systems can analyse hundreds of behavioural features simultaneously and classify accounts as human or automated with impressive accuracy. Experimental systems have reported detection rates exceeding ninety per cent under controlled conditions.
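In outline, the classification step can be sketched as follows. This example uses scikit-learn's random forest rather than the deep-learning systems described above, and every account feature in it is fabricated; the point is the workflow, in which a model is trained on labelled accounts and then scores unseen ones.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row is one account: [posts_per_day, interval_regularity,
# followers_to_following_ratio, account_age_days]. All values are
# invented purely to make the sketch runnable.
X_train = [
    [300, 0.02, 0.01, 2],     # bot-like
    [250, 0.05, 0.02, 5],     # bot-like
    [4,   1.30, 1.10, 900],   # human-like
    [7,   0.90, 0.80, 1500],  # human-like
]
y_train = [1, 1, 0, 0]  # 1 = automated, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a previously unseen account.
new_account = [[280, 0.03, 0.015, 3]]
print("probability automated:", clf.predict_proba(new_account)[0][1])
```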
Yet even these techniques face limitations. A sophisticated adversary can adapt quickly, modifying behaviour to mimic legitimate users. The contest between bot creators and platform moderators therefore resembles an arms race.
Protecting Legitimate Users
The challenge for social media companies is not merely detecting malicious accounts but doing so without harming genuine users. Excessively aggressive moderation risks suppressing legitimate speech, particularly for activists or journalists who may behave atypically on the platform.
Several approaches may help strike this balance.
One is progressive verification. Platforms can offer voluntary identity verification that grants certain privileges, such as increased visibility or the ability to send direct messages to strangers. Users who prefer anonymity may still participate but with fewer algorithmic advantages.
Another strategy involves limiting high-risk behaviours rather than banning accounts outright. For example, new accounts might face restrictions on how many posts they can publish in rapid succession, or how many users they can follow within a short period, as the sketch below illustrates.
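In engineering terms this is a sliding-window rate limit. The following Python sketch shows the mechanism; the cap of five actions per minute is invented for illustration and does not reflect X's actual policy.

```python
import time
from collections import deque

class NewAccountRateLimiter:
    """Sliding-window limit on actions by newly created accounts."""

    def __init__(self, max_actions: int = 5, window_seconds: float = 60.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events: deque[float] = deque()  # timestamps of recent actions

    def allow(self) -> bool:
        now = time.monotonic()
        # Discard actions that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_actions:
            return False  # over the cap: refuse the action
        self.events.append(now)
        return True

limiter = NewAccountRateLimiter(max_actions=5, window_seconds=60)
print([limiter.allow() for _ in range(7)])  # sixth and seventh calls refused
```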
Finally, transparency plays an important role. If users can easily see when an account was created, how frequently it posts and whether it has undergone verification, they may develop their own judgements about credibility.
The Future of the Bot Conflict
The battle against automated manipulation is unlikely to end. The economic and political incentives for creating bot networks remain powerful, while advances in artificial intelligence continue to make automation easier.
Nevertheless the large-scale suspension of hundreds of millions of accounts demonstrates that social media platforms remain capable of fighting back. Each purge raises the cost of manipulation and disrupts established networks.
The deeper question is whether the architecture of social media itself can evolve to accommodate this perpetual conflict. Platforms built upon openness and anonymity must now defend themselves against adversaries who exploit precisely those characteristics.
In the long term the struggle between authentic users and automated influence may become one of the defining features of the digital public sphere. The health of online political discourse may depend upon whether platforms can preserve openness whilst preventing manipulation on an industrial scale.