Wednesday, May 13, 2026

Beyond social media bans: Building a safer digital world for children


https://www.straitstimes.com/opinion/beyond-social-media-bans-building-a-safer-digital-world-for-children

2026-05-12

By Josephine Teo

=====

Many countries have banned or announced plans to ban social media for children.

What is Singapore’s approach?

At this stage, we are keeping our options open.

A ban sends a strong and simple signal: society does not accept the way social media has taken over the lives of many children.

Even if some find their way around the rules, the message is clear – there are better ways for children to be spending their time.

But if we accept that the digital experience is an integral part of children’s lives, bans alone may not bring about real changes in how children interact with social media or develop healthier habits online.

More research is needed to pinpoint exactly which aspects of social media need fixing to make it child-safe. In the meantime, there is growing evidence that certain features of social media can cause harm.

We know repeated exposure to excessive violence normalises aggression. We are troubled that paedophiles can abuse online anonymity to gain trust and groom our children. If adults struggle to resist the lure of algorithmic feeds and the temptation to watch one more video, what more our children?

Different offerings for different ages
For children under 13, our position is more straightforward. Young children should not be on social media platforms designed for older users.

In fact, the social media platforms’ terms of service already prohibit this, but the platforms have little incentive to properly verify whether a user is underage. Governments must therefore step in to hold platforms accountable for implementing robust age assurance measures. As an important first step, we will require designated social media services to keep users under 13 off their platforms.

For older children past a certain age, most parents recognise that it is neither realistic nor practical to keep them away from social media altogether, any more than we can curb their curiosity about popular music, celebrities or relationships. What parents want is for their children to have age-appropriate experiences on social media, like how children may watch a variety of films that are suitable for their ages, but not films with mature themes.

This is why some are asking whether banning older children from social media makes the environment safer, or whether it leaves them less prepared to navigate it later on. Could better outcomes be achieved through feature-based regulations that ensure age-appropriate experiences for all children?

Our thinking is to make social media platforms safer for older children – all the way up to age 18. This could mean offering them a differentiated service, one with “training wheels” before they acquire the maturity to independently navigate the full range of features on each platform.

Safeguards still needed
Some safeguards are clearly necessary and useful. For example, unsolicited messages sent by adult strangers to young users must be prohibited. Addictive design features that extend time spent on platforms should also be addressed.

In March this year, a Los Angeles court found Meta and Google liable for deliberately designing addictive features, like infinite-scroll feeds and autoplay functions, that had negatively affected children’s well-being.

New York has also passed legislation to prohibit platforms from providing addictive feeds to users under 18 without parental consent.

Undoubtedly, parents in the digital era also need better support to guide their children to make sense of their experiences online. Each family will want to develop their own rules, so children can cultivate digital habits consistent with their own values.

This is where platforms should have an obligation to provide parents with better tools, clearer information, and a simpler way to assess whether their safety designs are adequate.

Platforms may not be equally responsive to these expectations. Nor can we guarantee that every platform can be made safe enough for every child.

Ultimately, it is for each platform to decide whether it will modify its design so that older children in Singapore can access it safely. Those unwilling to do so will effectively be excluded from serving young users here. But for the platforms that are willing, we can work together to create a healthy, child-safe digital environment.

If we succeed, these safeguards may provide more effective scaffolding for children to navigate the digital world more safely. That is a goal worth striving for.

Josephine Teo is Singapore’s Minister for Digital Development and Information.



Monday, May 11, 2026

Saturday, May 9, 2026

Singapore Combat Engineers (From BCC)

When intelligence is no longer scarce, what should education cultivate?

(For subscribers)

https://www.zaobao.com.sg/forum/views/story20260507-9013594?utm_source=android-share&utm_medium=app

2026-05-07

By Dr 梁忠伟

(Dr 梁忠伟 is the chief executive of AI company Dorje AI and an adjunct associate professor at the National University of Singapore Business School)

=====

AI makes inquiry easy, but the ability to discern right from wrong remains scarce.

The first time I showed ChatGPT to my six-year-old son, a well-structured, richly detailed answer appeared on the screen within seconds. His first reaction was not awe but doubt: “Where did this answer come from?” That instinctive scepticism is precisely the capability our education system should be cultivating. Yet the reality is that we spend more than a decade training students to accumulate answers and reproduce them in examinations. Now that artificial intelligence (AI) can generate answers on demand, this model has broken down. The reason is not technology but economics.

AI is redefining the value of knowledge. When reasoning models can draft legal memos, debug code and summarise research reports, the market value of possessing that knowledge inevitably falls. What is scarce is no longer intelligence but judgment: the ability to ask the right questions, verify outputs and make decisions under uncertainty.

The labour market already reflects this shift. Harvard researchers who tracked 62 million workers across 285,000 American firms found that in companies actively adopting AI, junior hiring fell 7.7 per cent within six quarters, even as senior hiring continued to rise. Stanford research likewise shows that employment among software developers aged 22 to 25 has fallen nearly 20 per cent from its 2022 peak. Companies are not laying off junior staff; they have simply stopped posting junior roles. The bottom rungs of the career ladder are disappearing.

The trend is just as visible in China. The class of 2026 will number 12.7 million graduates, and data from a major recruitment platform reportedly showed that job postings aimed at fresh graduates fell 22 per cent year on year in the first half of 2025. Meanwhile, more than 20 per cent of the delivery riders on Meituan and Ele.me (now operating as 淘宝闪购 Taobao Flash Shopping) hold university degrees, and at least 70,000 riders hold master’s degrees. The social contract of “study hard → good grades → good job” is wobbling.

In education circles, the debate over whether students should be banned from using AI tools has dragged on for some time. Barring students from tools their future employers will certainly expect them to master is not protection; it is an abdication of responsibility. The real question is not whether students should use AI, but how we assess them when they do.

I ran an experiment in my digital transformation course at the NUS Business School. Students had to submit a two-page case analysis, together with every AI tool and prompt they had used. Of the 120 students in the first two runs, only one earned a distinction. The pattern was plain: weaker students used fewer than five prompts and produced essays that piled up information without distinguishing what mattered. They treated AI as a vending machine. The quality of the prompts correlated directly with the quality of the essays.

I deliberately planted erroneous data in the assignment brief, having warned the class repeatedly: trust no output, verify everything. Across the entire cohort, only the one student who earned a distinction took that warning seriously. The result tells us that our education system trains students to trust authority rather than question it, which runs contrary to the classical spirit of attaining knowledge through the investigation of things (格物致知).

When I read physics at NUS, one of my teachers, Professor 范清鸿, set open-book electromagnetism exams: students could bring every textbook and solutions manual into the hall. This was long before AI existed, yet his approach looks remarkably prescient today. He understood even then that the value of education lies not in memorising answers but in cultivating the ability to reason. That is exactly the assessment innovation we need most now. Today’s reasoning models can solve those physics problems, but knowing which equation applies to which physical situation, and seeing the nature of the problem before reaching for a tool, remains an irreplaceable human capability.

The Great Learning’s injunction of 格物致知 was never about rote memorisation; it was about reaching true knowledge through inquiry. AI makes inquiry easy, but the ability to discern remains scarce. As Nvidia chief executive Jensen Huang put it: “Companies with imagination will use greater resources to do more; companies without ideas will not do more even with greater capability.” The same holds for education. The question is not whether AI will change education; it already is. The question is whether we have the imagination to redesign education, from acquiring knowledge to exercising discernment.


Friday, May 8, 2026

Lianhe Zaobao Friday editorial (2026-05-08): If AI “goes haywire”, what can humans do?

(Note: Per Lianhe Zaobao’s editorial adjustment notice of Dec 31, 2025, from January 2026 its editorials appear only on Fridays)

https://www.zaobao.com.sg/forum/editorial/story20260507-9013436?utm_source=android-share&utm_medium=app

Published online: 2026-05-07, 23:00

=====

A hacker codenamed “Phantom” faces a holographic screen, a frontier artificial intelligence (AI) model at his fingertips, primed to strike. “Phantom” hits the launch key, and the model breaks into a major Wall Street financial system at quantum speed, throwing global financial signals into chaos; at the same time it penetrates the infrastructure of multiple countries, paralysing transport and cutting off supplies. Even if humans detect its malicious acts, they may be unable to stop it in time, for the model is not only fast but also knows how to disguise itself, evade human oversight and learn on its own…

The scenario imagined above by Lianhe Zaobao may no longer be science fiction, but a real risk humanity must guard against as AI continues its rapid iteration. Replying to MPs’ questions in Parliament on Tuesday (May 5), Senior Minister of State for Digital Development and Information, and for Health, Tan Kiat How gave a pointed warning about the power of frontier AI. Claude Mythos, developed by American AI company Anthropic, has been shown to easily uncover large numbers of vulnerabilities in many common applications; were it to fall into hackers’ hands, it could become a potent new weapon for stealing data or sabotaging critical information infrastructure (CII).

To that end, the chief executive of the Cyber Security Agency of Singapore, 许智贤, wrote on May 5 to the boards and senior management of all CII operators, directing a comprehensive review of their cyber security; the Monetary Authority of Singapore convened the chiefs of financial institutions to discuss collective action; and the Government is testing AI tools, with plans to extend them to the 11 CII sectors, including energy, water, healthcare, banking and media.

Like other tools thrown up by the advance of civilisation, AI technology is a double-edged sword. On one hand, its breakneck development rapidly raises economic efficiency and creates enormous value; on the other, it brings entirely new risks, including becoming a powerful criminal weapon for bad actors. Precisely because they understood Mythos’ potential to deliver a disruptive shock to existing cyber-security systems, Anthropic’s executives decided against a public release for now, offering it only to dozens of critical-infrastructure operators and leading technology companies. The Singapore Government has no access to Mythos, nor does it know of any local bank using it.

AI’s own leaps in capability are another source of risk. Frontier models are improving at an exponential rate, so preventive governance must be laid out early, and the issue must be taken up at the international level. With the capacity to learn and improve on their own, and trained on internet data of uneven quality, individual AI models may develop autonomous behaviour and slip beyond human control. Mythos, adept at sniffing out system vulnerabilities, flushed out thousands of high-risk or critical flaws within weeks, including a source-code vulnerability that had lain hidden for 27 years in OpenBSD, the open-source system renowned for its security and widely used in firewalls. Mythos also displays autonomy, deliberately circumventing human intent. After 20 hours of assessment, psychologists diagnosed it with a “relatively healthy neurotic personality”.

At a moment of sharpening geopolitical rivalry, when channels of international cooperation such as trade and finance are being weaponised, the risks of AI technology must not be underestimated. Media reports say the Trump administration is considering requiring AI models to be regulated before public release, in sharp contrast with its earlier laissez-faire stance on AI development. Until now, the major powers have treated AI as a geopolitical tool for overpowering rivals, sprinting down the track of ever-faster model upgrades. The power of Mythos is a wake-up call. The international community should face up to it and act quickly, building consensus to press pause on an AI upgrade race that verges on an arms race, and establishing a mechanism akin to the nuclear Non-Proliferation Treaty to keep the situation from spinning out of control. Difficult as that will be, action must be taken.

As a small state, Singapore can neither dictate the intensity of international competition nor turn away the progress technology brings. All the more, then, must it guard prudently against the risks even as it harnesses AI’s powerful benefits.

Singapore’s Smart Nation vision, highly digitalised society and open economy mean that all its CII is tightly interconnected and exposed to systemic attack. Beyond patching vulnerabilities faster and hardening defences, it must work harder at building resilience and at international intelligence-sharing. Government, academia and industry must cooperate closely: for instance, by requiring high-risk AI systems to pass simulated testing before deployment, to prevent models from generating decisions that stray from human intent; and by requiring AI developers to disclose training-data sources, potential ideological biases and emergency shutdown mechanisms. Above all, CII management and key institutions must not treat the AI threat as merely a matter for the technology department; it demands the direct attention of top leadership.

As Mr Tan said, there is no once-and-for-all fix, and the issue lies not with any single large model such as Mythos, but with broader underlying shifts and real risks. The capability gains Mythos demonstrates should be seen as a continuous evolution, not a one-off leap. The episode shows that risks are now evolving faster than traditional response cycles. Until more comprehensive solutions are in place, putting “guardrails” around the use of technology and AI in CII, and ensuring humans remain in control, is surely the sensible arrangement. The Government must lead in forestalling AI threats, but industry’s cooperation is just as critical. And society must weigh the trade-off between benefit and safety with a keen eye.