Thursday, April 3, 2025


Social Media Is a Double-Edged Sword for Adolescent Mental Health

Zaobao
Penned by a Doctor

Associate Professor Lee Cheng (Senior Consultant Psychiatrist, Institute of Mental Health; Medical Director, Office of Population Mental Health Education)
Published 1 April 2025, 05:00

Mental health problems that social media can contribute to include anxiety, depression, poor sleep quality, loneliness, low self-esteem, narcissism, compulsive behavior, addiction, and body-image issues. When adolescents cannot connect to the internet or social networking sites, they feel worried or uncomfortable.
The mental health problems adolescents encounter when using social media depend on many factors, including their maturity, pre-existing mental health conditions, personal values, and life circumstances. (iStock photo)






Recently, there has been much discussion about how social media affects mental health. Even adults struggle to restrain their social media use, sometimes to the point of addiction; adolescents, with weaker self-control, struggle all the more. Some countries are therefore even considering a uniform minimum-age restriction.

Social media lets users share personal information, thoughts, and other content such as photos and videos through electronic communication. On platforms such as X, LinkedIn, Instagram, and YouTube, most users are active on their favorite platform every day. Facebook remains the most widely used social media platform, with three-quarters of its users logging on at least once a day.

The mental health problems adolescents encounter when using social media depend on many factors, including their maturity, pre-existing mental health conditions, personal values, and life circumstances. Problems that social media can contribute to include anxiety, depression, poor sleep quality, loneliness, low self-esteem, narcissism, compulsive behavior, addiction, and body-image issues. When adolescents cannot connect to the internet or social networking sites, they feel worried or uncomfortable.

Dr Lee Cheng says that besides mental health problems, adolescents may also develop other health problems from prolonged sitting and inactivity, such as the common obesity and hypertension. (Photo provided by interviewee)

Misled by Unrealistic Posts

Research shows that spending three hours a day on social media is associated with a higher risk of mental health problems, poorer mental health, and lower well-being. The negative effects of social media are usually attributed to the unrealistic portrayals in posts, which lead some adolescents to form distorted views of other people's lives or bodies: vanity, feelings of inadequacy, of not measuring up to others. In addition, certain risk-taking content and negative posts or interactions on social media may lead adolescents to harm themselves or others, or encourage habits associated with disordered eating, such as eating disorders.

Adolescents may also face the risks of online harassment and cyberbullying. They may share very personal matters on social media, or, in anger or frustration, post messages or intimate photos without thinking. Social media platforms attract all kinds of people, and online predators may try to exploit, harass, blackmail, or extort these adolescents. Adolescents may also be exposed to content related to discrimination, hate, or cyberbullying, increasing their risk of anxiety or depression.

Besides mental health problems, adolescents may also develop other health problems from prolonged sitting and inactivity, such as obesity, hypertension, and other features of metabolic syndrome.

Used Well, Social Media Can Have Positive Effects

Of course, social media can also have positive effects on adolescents' mental health. Through social networks, they can give and receive support among people with the same interests or experiences, especially when adolescents lack social support offline or feel lonely, are under great stress, or belong to groups that are often marginalized. In such situations, social media can be a valuable resource, offering self-expression, self-identity, learning opportunities, the sharing of creative projects, or reconnection and communication with long-lost friends. Adolescents can also use social media to view or join appropriate chat forums that encourage open discussion of topics such as mental health, and to seek help for symptoms of mental health conditions.

As technological innovation continues, the way people interact with social media is changing rapidly; schools, for example, also use digital products as teaching aids. To prevent social media from interfering with adolescents' activities, sleep, diet, or studies, parents can set daily time limits or ban social media during specific periods, such as family mealtimes and the hour before bed. Parents should communicate regularly with their children about social media and encourage them to speak up if they encounter trouble or distress online. Above all, parents should lead by example, set clear boundaries around social media, and even consider sacrificing some of their own use of it.



"AI Parroting Humans" Could Stagnate Scientific Progress*

*"AI Parroting Humans" Could Stagnate Scientific Progress*

(Translated from an article in Lianhe Zaobao by DeepSeek)

https://www.zaobao.com.sg/forum/views/story20250403-6108022  

2025-04-03  


Song Ming Jia
(The author holds a Ph.D. in molecular genetics and is an associate professor at Monash University Malaysia.)

=====

AI cannot challenge traditional knowledge or concepts—it can only reinforce mainstream human views by repeatedly "parroting what humans say" and having humans "parrot what AI says." AI is also incapable of identifying potential biases in the scientific community. In the long run, if most college students of varying skill levels rely on AI to complete assignments and papers, academic standards and students' critical thinking abilities will become limited, or even stagnate.  

Over the past three years, the emergence of generative AI tools like ChatGPT and DeepSeek has drawn widespread attention in education. The latter surpassed 100 million global downloads in its first week, far outpacing the roughly one million ChatGPT reached over the same period.

For university professors, these tools hold "limitless potential" in higher education. Within three months of ChatGPT's launch, I employed it as a "virtual teaching assistant" in my classes. At the same time, I pointed out that in many specialized fields, ChatGPT is a "charlatan-level teaching assistant" and encouraged students to challenge it—to identify errors or flaws in every answer it provides.  

However, as AI technology continues to advance and improve, many students have become highly dependent on it for their coursework. In November 2023, *China Youth Daily* published a survey of 7,055 college students, revealing that 85% of respondents had used AI tools to complete assignments, with about 16% and 58% saying they used it "frequently" or "occasionally," respectively.  

Among these students, around 46% used AI for "writing," while "information searching" (61%) and "translation" (58%) were the top two applications. Around the same time, BestColleges, a U.S.-based higher education information website, surveyed 1,000 undergraduate and graduate students, finding that 56% had used AI for assignments or exams.  

In August 2024, the Digital Education Council released a global AI survey (covering 3,839 undergraduate, master's, and doctoral students across 16 regions), showing that about 86% of students used AI in their studies, with 78% using it at least once a week, and many daily. "Drafting initial versions" accounted for 24% of AI usage among respondents, while "searching for information" was the most common function (69%). "Grammar checking" and "summarizing documents" accounted for 42% and 33%, respectively.

For learning tasks like searching for information, translation, grammar checking, coding, and drawing, AI undoubtedly enhances students' efficiency and outcomes. I also support students using AI for "finding/correcting errors," particularly in coding courses and grammar corrections. In the past, students often struggled with minor coding mistakes (like missing a comma or colon), causing programs to fail. Such errors typically required significant time and effort to debug, but AI tools can quickly pinpoint issues, enabling efficient problem-solving.  
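As a small illustration of how a tool can localize exactly this kind of slip (the student snippet here is invented for the example), Python's built-in `compile` function already reports the line where a missing colon breaks the parse:

```python
# A student's snippet with one common slip: the colon after "if" is missing.
broken_source = """
def grade(score):
    if score >= 50
        return "pass"
    return "fail"
"""

try:
    compile(broken_source, "<student-code>", "exec")
except SyntaxError as err:
    # The parser pinpoints the offending line, much as an AI assistant would.
    print(f"Syntax error on line {err.lineno}: {err.text.strip()}")
```

An AI assistant goes one step further than the parser: it not only locates the error but also suggests the corrected line.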

However, what troubles professors most is students using AI to write assignments or papers on their behalf.  

Current AI Is Still at a "Charlatan" Level
I firmly oppose undergraduate students using AI to write or complete assignments—even if only for generating outlines—because, in specialized fields, undergraduates lack the expertise to discern the accuracy or validity of concepts. Current AI lacks deep professional knowledge; without input from human experts, it tends to fabricate or force irrelevant content, behaving like a charlatan.  

At least for now, ChatGPT and DeepSeek fall into this "charlatan AI" category. Moreover, using these tools to "generate assignments/papers" or "summarize documents" completely undermines the foundational training of independent thinking for beginners and hinders their learning objectives.  

Most critically, for beginners in specific academic fields (especially undergraduates and below), generative AI often produces generic, unoriginal assignments or papers rather than creative content.  

Over the past two years, my observations of student submissions have revealed many limitations in AI that "only the human mind" can overcome.  

First, generative AI's "word prediction" and "fill-in-the-blank" training models can only predict, derive, combine, rephrase, or simulate possible answers based on existing online data and knowledge structures (not original discoveries). They cannot produce innovative content. AI cannot account for all possible conditions, hypotheses, limitations, influencing factors, or scenarios, nor can it propose new research questions or challenge existing scientific arguments. It is also incapable of analyzing and synthesizing new data or ideas to arrive at truly creative, original insights.
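The "word prediction" mechanism the author describes can be sketched with a deliberately oversimplified toy (a bigram frequency model, not how modern large language models are actually built): such a predictor can only ever emit continuations already present in its training text, and has nothing at all to say about a word it has never seen.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the training text.
training_text = (
    "new data supports the current theory . "
    "new data confirms the current theory ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

# The model can only parrot continuations found in its data:
print(predict_next("current"))   # -> theory
print(predict_next("paradigm"))  # -> None: never seen, nothing to offer
```

Real systems predict from vastly larger corpora and richer context, but the same constraint the author highlights applies in spirit: the output is recombined from what the training data already contains.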

In scientific research, challenging existing paradigms and conducting experiments to achieve a "paradigm shift" is paramount—breaking traditional concepts and theories to establish entirely new ways of thinking or approaches. However, constrained by its "word prediction" training model, AI cannot challenge conventional knowledge; it can only reinforce prevailing human views through endless cycles of "AI parroting humans and humans parroting AI."  

Furthermore, AI cannot identify potential biases in the scientific community—instead, it amplifies and perpetuates biases in existing data and knowledge, leading to misleading conclusions.  

In the long term, if most college students of varying levels rely on AI for assignments and papers, academic standards and students' thinking abilities will stagnate. Even if AI companies hire numerous professionals to input data for model training, the result will only match the current level of specialized knowledge among scientists—not groundbreaking original insights. True innovation and scientific progress still depend on human critical thinking, imagination, and creativity—something AI cannot replicate.  

Regardless, students must understand policies on generative AI usage, and universities have a responsibility to establish clear guidelines so students know when they "can" or "cannot" use AI to assist with coursework.  

