https://www.straitstimes.com/opinion/disinformation-now-has-a-new-channel-ai-chatbots
2025-07-23
Jonathan Eyal
Global Affairs Correspondent
The Straits Times
=====
As generative artificial intelligence (AI) technologies rapidly evolve, most regular users assume that AI chatbots, and the programs that explore the web on their behalf, indexing and collecting data as they go, differ only in their efficiency: some are better than others at answering queries, and some “hallucinate” less than others.
Yet it is now becoming clear that AI bots are not only vulnerable to the inherent biases of those who wrote the programs; they are also susceptible to political biases and manipulation by governments. So, what we get when we search online now is less an impartial summary of available information and more a skilfully curated narrative that may well skew reality, either intentionally or unintentionally.
What is certain, however, is that the internet as we know it is about to undergo a radical transformation. And the battle for impartial information and verifiable facts has never been more urgent, or more desperate.
Inheriting the sins of fathers
In theory, artificial intelligence is neutral. It is built on the continuous, probabilistic analysis of vast datasets, drawing its conclusions from large quantities of text and statistical data. Since ChatGPT was deployed in November 2022 as the first widely and commercially available AI chatbot, its programmers, and those who championed alternative chatbots, have persistently told us that their outputs are value- and bias-free.
It is by now evident that this claim of intellectual neutrality is simply untrue, for at least two straightforward reasons. First, the chatbots were constructed by people who have their own ideas of what neutrality is; in this context, the “sins” of the fathers are certainly inherited by their “children”.
And just as significantly, most of the data that chatbots process is itself the product of earlier cultural biases, such as those about race, skin colour or sexual preferences. Most of this bias is subtle. Just think of “whitelisting” and “blacklisting”, terms still common in cyber security, to see the implied racial stereotyping many of us still use without even thinking about it: a whitelist is a positive concept, while a blacklist is negative.
To mitigate the bias inherent in existing data, all AI models undergo a second training phase called “fine-tuning”, where human trainers intervene to teach their systems which answers are acceptable. So, if you were to ask an AI chatbot to tell you the difference between, say, heterosexuals and homosexuals, the bot is trained to concentrate on providing scientific and socially valid answers, and discard all the pseudoscientific and hateful rubbish.
But although this is touted as an added safeguard, fine-tuning an AI chatbot only adds another layer of potential bias, sometimes with comical results. In February 2024, for instance, users discovered that if they asked Gemini, Google’s AI tool, to generate a picture of the Pope, it offered several images, including one of a woman as the universal head of the Catholic Church. The system also produced pictures of the founding fathers of the United States as black or Latino men, because it was programmed to emphasise gender and racial diversity.
Google quickly updated the algorithm, although the company is still accused of rewriting history to suit its alleged political preferences. Mr David Sacks, US President Donald Trump’s chief AI coordinator, and Mr Sriram Krishnan, another senior White House policy adviser on AI, are reported to be behind a decree that President Trump will soon issue, barring from US government contracts any AI chatbot that provides “politically correct” answers. Mr Sacks has frequently dismissed chatbots he claims are “woke” and “left-leaning”. “Their censorship,” he claimed, was “built into the answers”.
But the US presidential advisers should be careful what they wish for, since AI chatbot programs that compensate for the alleged left-wing biases of California’s Silicon Valley are not much more credible. In early July, for instance, Grok, the chatbot owned by xAI, Mr Elon Musk’s artificial intelligence company, and touted as an antidote to the left-wingers, returned rogue answers which, among other things, presented Nazi Germany’s “immigration controls” in a positive light. The search for the perfect, value-neutral AI chatbot remains an illusion.
Grooming the narrative: Government interference
But what if what we still regard as just the inherent biases of AI chatbots are about to be amplified by efforts of foreign governments to skew their output even further? Several recent studies suggest that this is precisely what is happening, on a grand scale.
All AI chatbots rely on large language models (LLMs), which are trained on vast amounts of text data to grasp the subtleties of human language and logic, and thereby generate useful answers. By definition, they operate on material that is available online. And that opens the way for governments to flood cyberspace with material which, over time, will seep into LLMs, helping to change the narrative.
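What this looks like in practice can be sketched in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration of the “scrape the web, add it to the corpus” stage that LLMs depend on; the URLs are placeholders, and real training pipelines involve far more filtering and deduplication. The point it illustrates is simply that any site publishing enough machine-readable text can, in principle, end up in the material a model learns from.

```python
# A simplified, hypothetical sketch of the web-scraping stage of LLM training.
# Real pipelines are vastly larger and apply deduplication and quality filters;
# the point is only that publicly posted text can flow into a training corpus.
import re
import requests

SEED_URLS = [
    "https://example.com/article-1",  # placeholder URLs, not real sources
    "https://example.com/article-2",
]

def extract_text(html: str) -> str:
    """Crudely strip scripts, styles and tags, then collapse whitespace."""
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

corpus = []
for url in SEED_URLS:
    try:
        page = requests.get(url, timeout=10)
        page.raise_for_status()
    except requests.RequestException:
        continue  # skip unreachable pages
    corpus.append(extract_text(page.text))

# Whatever survives this stage is what the model later "learns" from, which is
# why flooding the web with near-identical articles can tilt its output.
print(f"Collected {len(corpus)} documents for the training corpus")
```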
Ms Sophia Freuden, a researcher at the American Sunlight Project, a non-profit, US-based organisation devoted to identifying and combating online misinformation, believes that she has already identified at least one government engaged in such practices: Russia.
Her attention was drawn to an online network called “Pravda”, which translates from Russian as “Truth”. And what she saw there was puzzling. The Pravda network comprises a collection of nearly identical websites and social media accounts that aggregate large volumes of Russian propaganda, as well as the output of known Russian disinformation campaigns. The network, which initially targeted Europe, expanded rapidly to reach Africa, the Asia-Pacific and North America.
Yet, oddly, the network does not produce any original news content of any kind; it merely reposts content from primary sources, mainly Russian state media outlets such as Russia Today or Sputnik, or pro-Russian content taken from Telegram, a messaging app popular among Russians.
Pravda’s websites “are not a pleasant or easy-to-read experience for humans. Content is often overlapping. Text isn’t legible. There are blatant auto translation errors. There is no search function, and the scrolling function that you use to just navigate up and down a page often doesn’t work,” Ms Freuden explained in a recent webinar for cyber-security specialists.
Most mysterious of all, the Pravda network has almost no traffic and virtually no users reading its content. So, “why make such a large, centralised aggregator with no human audience? Why go through the effort of growing this geographically and thematically, when no one is seemingly navigating to their website?” Ms Freuden wondered.
The answer seems to be that Russia’s Pravda network is not aimed at humans at all, but at the AI bots “scraping” its content. The primary purpose of the exercise is to flood LLMs with new text, in the hope that this will influence their generative models and prompt AI chatbots into repeating Russian talking points and curated world views. Ms Freuden has coined a name for this tactic: LLM grooming.
And the quantity of written material the Pravda network generates is astonishing. “Our estimates put the publishing rate of the network at a minimum of 3.6 million articles per year,” says Ms Freuden, adding that “this is almost assuredly a gross underestimation of the network’s true publishing rate”.
Her findings are corroborated by a study published in June by the Royal United Services Institute, a British defence think-tank, which stated that “Russian actors” are “injecting” propaganda and other allegedly biased material to influence the output of AI chatbots. “Designed to skew the outputs of LLMs, this tactic represents a shift from targeting audiences directly to subtly shaping the tools these audiences use,” concluded the authors of the report.
There are plenty of reasons why Russia finds this “LLM grooming” a worthwhile pursuit. The effort is inexpensive: setting up a network of websites that amplifies existing material is dirt cheap, and that material can be translated and repackaged in a multitude of formats for countries worldwide. The opportunities for flooding the internet are virtually limitless.
But the pay-off from skewing the search results produced by AI chatbots is priceless, because a subtle change in the tone in which facts are presented is far more persuasive than a naked propaganda blast. If Russia succeeds in influencing LLMs, it could turn fiction into fact and tilt debates to its advantage. History can, quite literally, be rewritten.
New internet, new world
Of course, some would argue that Western think-tanks are bound to point an accusing finger at Russia for a tactic that, for all we know, is also being pursued by many Western governments. Yet, this does not alter the fundamental fact that LLMs can be, and likely are, influenced by governments in subtle, imperceptible ways. Nor does it alter the prognosis that this tendency will only increase with time.
The process is rendered even more ominous by another related development: the way AI is now bypassing and marginalising media organisations and other traditional content creators.
For decades, websites welcomed the “crawlers” of search engines such as Google, which indexed and ranked them for users; armies of technical consultants advised companies and content creators on how to secure a higher ranking in search results, which would translate into a higher volume of visitor traffic.
Now, however, generative AI tools scrape the contents of entire websites, regurgitate and summarise them, and serve the answers to their users, often without even referencing who created the material in the first place. As a result, the number of human visits to websites is declining rapidly, and with it comes the risk of declining revenue.
Some media websites are fighting back by negotiating fees to allow AI tools to scrape their content. But many are also creating two parallel websites: one with premium content for their human users, and another stripped-down version for AI scraping. The result may well be the emergence of two internet environments, and a massive fragmentation of the information space.
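A crude sketch of how such a split might work is shown below. It is purely illustrative and not any publisher’s actual setup: a tiny web server that answers known AI crawlers, identified by their published user-agent strings (GPTBot and CCBot are real crawler identifiers), with a stripped-down page, while human browsers receive the full article. In practice, publishers are more likely to handle this through robots.txt rules or their content-delivery networks, but the principle is the same: the page a machine sees need not be the page a human sees.

```python
# A minimal, hypothetical sketch of serving one version of a page to AI
# crawlers and another to human visitors. GPTBot and CCBot are real crawler
# user-agent strings; everything else here is illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_CRAWLERS = ("GPTBot", "CCBot")  # known AI / web-corpus crawler user agents

FULL_PAGE = b"<html><body><h1>Premium article</h1><p>Full text for human readers.</p></body></html>"
STRIPPED_PAGE = b"<html><body><p>Summary only.</p></body></html>"

class DualContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        # Stripped-down page for recognised AI crawlers, full page for everyone else.
        body = STRIPPED_PAGE if any(bot in agent for bot in AI_CRAWLERS) else FULL_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DualContentHandler).serve_forever()
```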
And the consequences of these trends could be grim. The monumental clash between claims and counterclaims on almost every topic, and the potential splintering of the internet, will further deepen the already acute polarisation of political debate by pushing people into separate information universes. What one person considers reality, another dismisses as utter fiction.
It also risks promoting disengagement from public life and public service. If everything is challenged and nothing is what it seems, what is the point of taking part in civic activity, or in elections whose results are, in any case, likely to be hotly contested for years afterwards?
Answers to such dangers do exist. Legislation could require tech companies to verify the authenticity of their sources. Consumers can learn to shop around for the most consistently reliable AI chatbots. And governments can promote greater information literacy, helping people to approach whatever information they are served with circumspection and a critical eye.
Still, everything starts with understanding both the tremendous opportunity and the serious security challenge presented by the advent of AI chatbots, which we now use in ever greater numbers.
Jonathan Eyal is based in London and Brussels and writes on global political and security matters.
