*"AI Parroting Humans" Could Stagnate Scientific Progress*
(Translated from an article in Lianhe Zaobao by DeepSeek)
https://www.zaobao.com.sg/forum/views/story20250403-6108022
2025-04-03
Song Ming Jia
(The author holds a Ph.D. in molecular genetics and is an associate professor at Monash University Malaysia.)
=====
AI cannot challenge traditional knowledge or concepts—it can only reinforce mainstream human views by repeatedly "parroting what humans say" and having humans "parrot what AI says." AI is also incapable of identifying potential biases in the scientific community. In the long run, if most college students of varying skill levels rely on AI to complete assignments and papers, academic standards and students' critical thinking abilities will become limited, or even stagnate.
Over the past three years, the emergence of generative AI tools like ChatGPT and DeepSeek has drawn widespread attention in education. DeepSeek surpassed 100 million global downloads in its first week, far outpacing the roughly one million that ChatGPT recorded over the same period.
For university professors, these tools hold "limitless potential" in higher education. Within three months of ChatGPT's launch, I employed it as a "virtual teaching assistant" in my classes. At the same time, I pointed out that in many specialized fields, ChatGPT is a "charlatan-level teaching assistant" and encouraged students to challenge it—to identify errors or flaws in every answer it provides.
However, as AI technology continues to advance and improve, many students have become highly dependent on it for their coursework. In November 2023, *China Youth Daily* published a survey of 7,055 college students, revealing that 85% of respondents had used AI tools to complete assignments, with about 16% and 58% saying they used it "frequently" or "occasionally," respectively.
Among these students, around 46% used AI for "writing," while "information searching" (61%) and "translation" (58%) were the top two applications. Around the same time, BestColleges, a U.S.-based higher education information website, surveyed 1,000 undergraduate and graduate students, finding that 56% had used AI for assignments or exams.
In August 2024, the Digital Education Council released a global AI survey (covering 3,839 undergraduate, master's, and doctoral students across 16 regions), showing that about 86% of students used AI in their studies, with 78% using it on a daily or weekly basis. "Searching for information" was the most common function (69%), while "drafting initial versions" accounted for 24% of AI usage among respondents. "Grammar checking" and "summarizing documents" accounted for 42% and 33%, respectively.
For learning tasks like searching for information, translation, grammar checking, coding, and drawing, AI undoubtedly enhances students' efficiency and outcomes. I also support students using AI for "finding/correcting errors," particularly in coding courses and grammar corrections. In the past, students often struggled with minor coding mistakes (like missing a comma or colon), causing programs to fail. Such errors typically required significant time and effort to debug, but AI tools can quickly pinpoint issues, enabling efficient problem-solving.
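A minimal sketch of the kind of one-character mistake described above: a missing colon after a `for` statement. (This example and its variable names are illustrative, not taken from the article.) Python's own compiler already reports the exact line and a short diagnosis, which is the same information an AI assistant surfaces when it "pinpoints" such errors for a student:

```python
# Illustrative only: a student submission with a missing colon on the
# for-statement, the classic one-character bug mentioned in the text.
broken_source = """
total = 0
for i in range(5)
    total += i
print(total)
"""

try:
    # Compiling (without running) is enough to surface syntax errors.
    compile(broken_source, "<student_submission>", "exec")
except SyntaxError as err:
    # err.lineno gives the offending line; err.msg describes the problem.
    print(f"line {err.lineno}: {err.msg}")
```

On recent Python versions this prints something like `line 3: expected ':'`, localizing the bug immediately instead of leaving the student to hunt for it by eye.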
However, what troubles professors most is students using AI to write assignments or papers on their behalf.
### Current AI is Still at a "Charlatan" Level
I firmly oppose undergraduate students using AI to write or complete assignments—even if only for generating outlines—because, in specialized fields, undergraduates lack the expertise to discern the accuracy or validity of concepts. Current AI lacks deep professional knowledge; without input from human experts, it tends to fabricate or force irrelevant content, behaving like a charlatan.
At least for now, ChatGPT and DeepSeek fall into this "charlatan AI" category. Moreover, using these tools to "generate assignments/papers" or "summarize documents" completely undermines the foundational training of independent thinking for beginners and hinders their learning objectives.
Most critically, for beginners in specific academic fields (especially undergraduates and below), generative AI often produces generic, unoriginal assignments or papers rather than creative content.
Over the past two years, my observations of student submissions have revealed many limitations in AI that "only the human mind" can overcome.
First, generative AI's "word prediction" and "fill-in-the-blank" training models can only predict, derive, rephrase, or simulate possible answers based on existing online data and knowledge structures (not original discoveries). They cannot produce innovative content. AI cannot account for all possible conditions, hypotheses, limitations, influencing factors, or scenarios, nor can it propose new research questions or challenge existing scientific arguments. It is also incapable of analyzing and synthesizing new data or ideas to arrive at truly creative, original insights.
In scientific research, challenging existing paradigms and conducting experiments to achieve a "paradigm shift" is paramount—breaking traditional concepts and theories to establish entirely new ways of thinking or approaches. However, constrained by its "word prediction" training model, AI cannot challenge conventional knowledge; it can only reinforce prevailing human views through endless cycles of "AI parroting humans and humans parroting AI."
Furthermore, AI cannot identify potential biases in the scientific community—instead, it amplifies and perpetuates biases in existing data and knowledge, leading to misleading conclusions.
In the long term, if most college students of varying levels rely on AI for assignments and papers, academic standards and students' thinking abilities will stagnate. Even if AI companies hire numerous professionals to input data for model training, the result will only match the current level of specialized knowledge among scientists—not groundbreaking original insights. True innovation and scientific progress still depend on human critical thinking, imagination, and creativity—something AI cannot replicate.
In any case, students must understand their institutions' policies on generative AI use, and universities have a responsibility to establish clear guidelines so students know when they may, and may not, use AI to assist with coursework.