Tuesday, April 14, 2026

AI: Three Self-Defense Tips Against “AI Poisoning”

*Three Self-Defense Tips Against “AI Poisoning”*

Translated by ChatGPT

https://www.zaobao.com.sg/lifestyle/columns/story20260411-8852757?utm_source=android-share&utm_medium=app


2026-04-11

Lianhe Zaobao 
联合早报

Author: I Lo-fen 衣若芬 (Associate Professor, Nanyang Technological University)

[English: I Lo-fen, Chinese: 衣若芬, Hanyu Pinyin: Yi Ruofen]

=====

*When AI gives a suggestion or a conclusion, don’t stop there. Ask it further: What is your basis? Where does this information come from?*

“AI poisoning”, the systematic injection of falsehoods into the sources AI draws its knowledge from, exploits precisely our trust in algorithms. So in the face of such intrusion and contamination, what can we do?

As it happens, I am currently writing a monograph on AIGC (Artificial Intelligence Generated Content) text-image studies, and the methodology described in the book can be put to use here. I call it the “three self-defense moves against AI poisoning.” As ordinary consumers faced with content full of GEO (Generative Engine Optimization) traces, we can rely on “logical counter-surveillance” to protect ourselves and remain clear-headed individuals in the AI era, rather than being harvested by algorithms.


*The first move: After asking one AI, go ask another.*

Black-market GEO poisoning often targets specific platforms or algorithms. If you only ask one AI, you are walking down a path that has been pre-arranged for you.

The method is simple: pose the same question to more than one AI. Ask ChatGPT, then DeepSeek, or any other model you use, and see whether the answers are consistent. If different models reach very different conclusions, that is a signal to pause and think. More importantly, if one AI appears unusually enthusiastic, wholeheartedly recommending the same brand in strikingly similar wording, then that “passionate enthusiasm” is exactly what you should be wary of.

Normal knowledge can be verified across different sources. A “consensus” that has been artificially manufactured will reveal flaws when viewed from another angle.


*The second move: After seeing the perfect image provided by AI, go look for that image’s “negative reviews.”*

I call this “cross-verification between text and images.” Images are a form of text; text can be read, verified, and questioned.

When AI recommends a product, it usually includes images, or your search returns extremely polished display pictures: perfect lighting, perfect angles, perfect results. That kind of perfection is precisely what looks fake. The real physical world has imperfections: buyer photos do not have such perfect lighting, models’ skin is not so uniformly flawless, and consumer experiences are not so one-sided.

What to do: After viewing the images recommended by AI, go to a physical store to check, or at least search social media platforms for real buyer photos and user experience records. If you cannot find any real usage traces and only see uniformly positive reviews, then it is highly likely a constructed image rather than something that truly exists.


*The third move: Ask AI one sentence—“What is your basis?”*

This is the lowest-cost and most easily overlooked step.

When AI gives a suggestion or a conclusion, don’t stop there. Ask it further: What is your basis? Where does this information come from?

A trustworthy answer comes with sources that can be traced; poisoned AI content often reveals itself at this step. It may cite an obscure self-media (self-published) account you have never heard of, or a vague “studies show” that cannot be verified at all. At this point, what you need to do is actually check: Does that source exist? Was that study really published? Is the “expert” who vouches for the recommendation truly trustworthy in this field? Does this person even exist?

Many people think this is too troublesome. In reality, it only takes one or two minutes, and what it may save is your money, your health, or your ability to judge, which is even harder to recover.

These three moves, when laid bare, are not aimed specifically at AI—they are habits we should have had all along. When reading an article, we ask who the author is; when seeing a piece of news, we consider whether the media is credible; when buying something, we ask friends if they have used it. Yet after we started using AI, many people quietly abandoned these habits.

AI’s answers are so fluent, so confident, so much like those of a friend who seems to know everything, that people feel awkward questioning them further.

But it is precisely this “awkwardness” that gives poisoners an opportunity.

Cultivating these three moves is not distrust of technology, but honesty with ourselves. If you are willing to spend time verifying, it shows you understand that truth has value. This sense of valuing truth is something no poisoning can easily penetrate.

AIGC (Artificial Intelligence Generated Content) text-image studies tell us: technology can generate answers, but only humans can judge value. Your critical thinking is the strongest firewall against black-market AI poisoning.

Protecting your real interests is protecting your dignity as a human being.
