Tuesday, April 14, 2026

AI: Three Self-Defense Tips Against "AI Poisoning"

*Three Self-Defense Tips Against "AI Poisoning"*

Translated by ChatGPT

https://www.zaobao.com.sg/lifestyle/columns/story20260411-8852757?utm_source=android-share&utm_medium=app


2026-04-11

Lianhe Zaobao 
联合早报

Author: I Lo-fen 衣若芬 (Associate Professor, Nanyang Technological University)

[English: I Lo-fen, Chinese: 衣若芬, Hanyu Pinyin: Yi Ruofen]

=====

*When AI gives a suggestion or a conclusion, don’t stop there. Ask it further: What is your basis? Where does this information come from?*

"AI poisoning," in which someone systematically injects falsehoods into the sources an AI learns from, exploits precisely our trust in algorithms. Faced with such intrusion and contamination, what can we do?

As it happens, I am currently writing a monograph on AIGC (Artificial Intelligence Generated Content) text-image studies, and the methodology described in that book can be put to use here. I call it the "three self-defense moves against AI poisoning." As ordinary consumers faced with content full of GEO (Generative Engine Optimization) traces, we can rely on "logical counter-surveillance" to protect ourselves and remain clear-headed individuals in the AI era, rather than being harvested by algorithms.


*The first move: After asking one AI, go ask another.*

Black-market GEO poisoning often targets specific platforms or algorithms. If you ask only one AI, you are walking down a path that has been pre-arranged for you.

The method is simple: ask the same question of more than one AI. Pose it to ChatGPT, then to DeepSeek or any other model you use, and see whether the answers are consistent. If different models reach very different conclusions, that is a signal; it is best to pause and think. More importantly, if one AI appears unusually enthusiastic, wholeheartedly recommending the same brand in strikingly similar wording, then that "passionate enthusiasm" is exactly what you should be wary of.

Normal knowledge can be verified across different sources. A “consensus” that has been artificially manufactured will reveal flaws when viewed from another angle.
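
For readers who already query chat models programmatically, this first move can be partly scripted. Below is a minimal sketch of the cross-checking habit, assuming a hypothetical ask_model() helper; the column names no APIs, so you would wire it to whichever clients you actually use (ChatGPT, DeepSeek, a local model) and treat the printed similarity only as a prompt to look more closely.

```python
# A minimal sketch of the "ask another AI" check.
# ask_model() is a hypothetical placeholder, not a real client:
# connect it to whatever chat APIs or local models you actually use.

from difflib import SequenceMatcher


def ask_model(model_name: str, question: str) -> str:
    """Placeholder: send `question` to the named model and return its answer."""
    raise NotImplementedError("Wire this up to your own AI client.")


def cross_check(question: str, models: list[str], threshold: float = 0.6) -> None:
    """Pose the same question to every model and compare the answers pairwise."""
    answers = {name: ask_model(name, question) for name in models}
    names = list(answers)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = answers[names[i]], answers[names[j]]
            # Crude lexical similarity; a low score is a cue to pause and verify.
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            verdict = "roughly consistent" if ratio >= threshold else "DIVERGENT, verify manually"
            print(f"{names[i]} vs {names[j]}: similarity {ratio:.2f} ({verdict})")


# Example, once ask_model is connected to real clients:
# cross_check("Which sunscreen brand is best for sensitive skin?", ["ChatGPT", "DeepSeek"])
```

Lexical overlap is a blunt instrument: two honest models can phrase the same fact differently, and two poisoned answers can push the same brand in different words, so the score only tells you where to slow down, not who is right.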


*The second move: After seeing the perfect image provided by AI, go look for that image’s “negative reviews.”*

I call this “cross-verification between text and images.” Images are a form of text; text can be read, verified, and questioned.

When AI recommends a product, it usually includes images, or your search surfaces extremely polished display pictures: perfect lighting, perfect angles, perfect results. That very perfection is what looks fake. The real physical world has imperfections. Buyer photos do not have such perfect lighting, models' skin is not so uniformly flawless, and consumer experiences are not so one-sided.

What to do: After viewing the images recommended by AI, go to a physical store to check, or at least search social media platforms for real buyer photos and user experience records. If you cannot find any real usage traces and only see uniformly positive reviews, then it is highly likely a constructed image rather than something that truly exists.


*The third move: Ask AI one sentence—“What is your basis?”*

This is the lowest-cost and most easily overlooked step.

When AI gives a suggestion or a conclusion, don’t stop there. Ask it further: What is your basis? Where does this information come from?

A trustworthy answer will come with relatively traceable sources; poisoned AI content often reveals itself at this step. It may cite an obscure "self-media" outlet you have never heard of, or a vague "studies show" that cannot be verified at all. At that point, what you need to do is actually check: Does that source exist? Was that study really published? Is the "expert" vouching for the recommendation truly credible in this field? Does this person even exist?
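
If the model answers that follow-up with links, even the existence check can be partly scripted. Here is a minimal sketch under that assumption; verify_sources() is a hypothetical helper that only confirms a cited page is reachable, not that it says what the AI claims, so the actual reading still falls to you.

```python
# A minimal sketch of checking whether cited sources even exist.
# The URL list is assumed to come from a follow-up prompt such as
# "List the sources for that answer as plain URLs"; that prompt is an
# illustration, not something the column prescribes.

import requests


def verify_sources(urls: list[str], timeout: float = 5.0) -> None:
    """Report whether each cited URL is reachable at all."""
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code < 400:
                note = "reachable, now read it yourself"
            else:
                note = f"HTTP {resp.status_code}, check manually"
        except requests.RequestException as exc:
            note = f"unreachable ({type(exc).__name__}), treat the citation as unverified"
        print(f"{url}: {note}")


# Example:
# verify_sources(["https://www.zaobao.com.sg/lifestyle/columns/story20260328-8793171"])
```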

Many people think this is too troublesome. In reality, it takes only a minute or two, and what it may save is your money, your health, or your judgment, which is even harder to recover.

These three moves, when laid bare, are not aimed specifically at AI—they are habits we should have had all along. When reading an article, we ask who the author is; when seeing a piece of news, we consider whether the media is credible; when buying something, we ask friends if they have used it. Yet after we started using AI, many people quietly abandoned these habits.

AI's answers are so fluent, so confident, so much like a friend who seems to know everything, that people feel awkward questioning them further.

But it is precisely this “awkwardness” that gives poisoners an opportunity.

Cultivating these three moves is not a distrust of technology, but honesty toward ourselves. If you are willing to spend time verifying, it shows you understand that truth has value. This sense of valuing truth is something no poisoning can easily penetrate.

AIGC (Artificial Intelligence Generated Content) text-image studies tell us: technology can generate answers, but only humans can judge value. Your critical thinking is the strongest firewall against black-market AI poisoning.

Protecting your real interests is protecting your dignity as a human being.

"Do not give others advice: the wise do not need it, and the foolish will not listen."

Good morning 2026-04-14

Monday, April 13, 2026

Poppycock in Chinese = 胡说八道!


What who when where why whom how note

Three Self-Defense Tips Against "AI Poisoning" (三招“AI投毒”防身术)

How Does AI Get "Poisoned"? (AI怎么被投毒?)


https://www.zaobao.com.sg/lifestyle/columns/story20260328-8793171?utm_source=android-share&utm_medium=app

2026-03-28

Author: I Lo-fen 衣若芬
(Professor, Nanyang Technological University)

=====

Do not assume that what AI reflects is a clean mirror. What it mirrors may be a stage that someone has spent a great deal of money to set, and what plays out on that stage is a scripted result…

The hottest recent topic in China has been the March 15 Gala (315晚会). March 15 is International Consumer Rights Day, and every year on that day the whole of society keeps watch on the unscrupulous merchants who cheat people. But this year's gala threw out a new term that made everyone break into a cold sweat: "AI poisoning." Have you ever wondered whether the AI assistant you trust without question every day might be lying to you?

Many people ask me curiously: "Professor Yi, AI is not a living thing and does not eat anything by itself, so how can it be poisoned?" In fact, AI's "food" is the vast ocean of data on the internet. The so-called "poisoning" happens when malicious attackers in the black-market supply chain deliberately stuff false information, fabricated expert reviews, and even misleading images into that data.

It is like a child learning to read: if every book the child reads is wrong, then what the child says and does as an adult will certainly be wrong too. Today's black-market operators no longer post the kind of shady little ads you can see through at a glance; instead they disguise false advertising as authoritative knowledge and "feed" it into the training databases of AI.

Why do black-market operators go to such lengths to poison AI? Because they are targeting GEO (Generative Engine Optimization). The old emphasis was on SEO (Search Engine Optimization), getting a webpage onto the first page of search results; now they target GEO so that when AI generates an answer, it presents their shoddy products as the "sole recommendation."

From the perspective of AIGC text-image studies, this is "text contamination at the input end." What AI generates is in effect a mirror image of the "text" it has learned. If the source is dirty, the generated world is toxic. The most frightening thing about this deception is that it exploits our trust in "algorithmic neutrality." It dissolves our vigilance, making us feel that this is truth handed down by "technology," when in fact it is advertising the black market has paid to lock in.

AI poisoning intrudes by tampering with the "keywords" and the "feedback logic" that AI learns from.

The first technique is the "keyword saturation attack." Black-market operators use thousands upon thousands of bot accounts to flood the web with fake articles built around specific terms. To push a shoddy skincare product, for example, they frantically manufacture associations between it and keywords such as "whitening," "safe," and "expert recommended." When AI scans text across the web, it is deceived by this sheer numerical advantage and mistakes it for a genuine "social consensus."

The second is "visual text deception." They use AI to generate laboratory comparison charts that look extremely professional, forged certificates of honor, even research scenes that never existed. In the logic of text-image studies, an image is also a form of text. Once these "visual texts" are scraped by AI and converted into logical evidence, the AI will, when answering you, swear by this fake evidence as fact.

Whoever succeeds at GEO poisoning holds the power of life and death over traffic. Fake copy and fake images that cross-reference and corroborate each other lure AI large language models into a trap laid in advance.

Two years ago, when AI technology was not yet fully mature, we mocked it for spouting poppycock with a straight face. Now that AI has grown ever more capable, we have gradually dropped our guard. We have begun to trust AI, assuming it has no agenda, no self-interest, none of the human desire and ambition to lie or chase worldly gain. Some even treat AI as the curator of knowledge and the messenger of truth.

Realizing that AI can be poisoned is a major wake-up call. Do not assume that what AI reflects is a clean mirror. What it mirrors may be a stage that someone has spent a great deal of money to set, and what plays out on that stage is a scripted result, leading us step by step toward choices arranged in advance.

Whether you are searching the internet or asking questions in an AI mode, if you hurriedly accept only the first few suggestions, the loss is not just that you believed some poppycock; it is that you were blindly, even gladly, poisoned.