How Is AI Poisoned?
Translated by ChatGPT
https://www.zaobao.com.sg/lifestyle/columns/story20260328-8793171?utm_source=android-share&utm_medium=app
2026-03-28
Author: I Lo-fen (Chinese: 衣若芬; pinyin: Yi Ruofen)
(She is a Professor at Nanyang Technological University)
=====
Don’t assume that AI reflects a clean mirror. What it reflects may instead be a stage carefully set up at great expense by someone else—and what is performed on that stage is a designed outcome…
A recent hot topic in China is the “315 Gala.” March 15 is World Consumer Rights Day, when society focuses on unscrupulous businesses that exploit consumers. But this year’s 315 introduced a chilling new term: “AI poisoning.” Have you ever thought that the AI assistant you trust every day might actually be lying to you?
Many people ask me curiously, “Professor Yi, AI isn’t a living organism—it doesn’t eat. How can it be poisoned?” In fact, AI’s “food” is the massive amount of data on the internet. So-called “poisoning” refers to malicious actors in black-market industries deliberately injecting false information, fabricated expert reviews, and even misleading images into this data.
This is like a child learning to read—if all the books they read are wrong, then what they say and do as adults will also be wrong. Today’s black-market operators no longer rely on obvious, easily spotted advertisements. Instead, they disguise false promotions as authoritative knowledge and “feed” them into AI training databases.
Why do they go to such lengths to poison AI? Because they are targeting GEO (Generative Engine Optimization). In the past, the focus was on SEO (Search Engine Optimization), to get web pages ranked on the first page of search results. Now, with GEO, the goal is to make AI directly present their inferior products as the “only recommendation” when generating answers.
From the perspective of AIGC text-and-image studies, this is “input-side text pollution.” The content generated by AI is essentially a mirror of the “texts” it has learned. If the source is contaminated, the generated world will also be toxic. The most frightening aspect of this deception is that it exploits our trust in the “neutrality of algorithms.” It lowers our guard, making us believe this is “truth” produced by technology—when in fact it is advertising bought and paid for by black-market players.
AI poisoning works by tampering with the “keywords” and “feedback logic” that AI learns from.
First is the “keyword saturation attack.” Black-market operators use thousands of bot accounts to publish large volumes of fake articles containing specific keywords across the internet. For example, to promote a low-quality skincare product, they aggressively associate it with terms like “whitening,” “safe,” and “expert-recommended.” When AI scans the web, it is misled by this overwhelming volume and mistakes it for genuine “social consensus.”
Second is “visual text deception.” They use AI to generate highly professional-looking laboratory comparison images, fake certificates, and even entirely fabricated research settings. In the logic of text-and-image studies, images are also a form of text. Once these “visual texts” are captured and converted into logical evidence, AI will confidently present these false proofs as facts when answering your questions.
Whoever succeeds in GEO poisoning gains control over the flow of traffic and influence. The mutual reinforcement of fake text and images traps large AI language models in carefully planted snares.
Two years ago, when AI technology was still immature, we laughed at it for “talking nonsense with a straight face.” Now, as AI grows more powerful, we have gradually lowered our guard. We begin to trust AI, believing it has no stance, no self-interest, no human tendency to lie or pursue material gain. Some even regard AI as an organizer of knowledge and a transmitter of truth.
Realizing that AI can be poisoned is an important wake-up call for us. Don’t assume AI reflects a clean mirror. What it shows may be a stage someone has spent heavily to construct, where every outcome is designed—guiding us step by step toward prearranged choices.
Whether searching the internet or asking questions in AI systems, if we simply pick the first few suggestions without thinking, the loss is not just being misled by nonsense—it is blindly and willingly consuming the poison.

No comments:
Post a Comment