Translated by ChatGPT
https://www.zaobao.com.sg/forum/views/story20260514-9048988?utm_source=android-share&utm_medium=app
2026-05-14
Lianhe Zaobao
Author: Ding Bo
========
At the beginning of 2024, a finance employee at a multinational corporation in Hong Kong joined a video conference. The “chief financial officer” on screen looked serious and demanded an urgent fund transfer, while other executives voiced their agreement one after another. Following instructions, the employee completed multiple remittances over several days, and around HK$200 million (approximately S$33 million) vanished. It later emerged that, apart from the employee, not a single person in that meeting was real: every face was an AI-generated deepfake.
This is not a science fiction plot, but a case publicly confirmed by the Hong Kong police.
It leaves behind a question that no one has yet been able to answer clearly: who should compensate for the money lost? The AI tool developer? The fraudsters? Or the employee who trusted what he saw with his own eyes? If the same thing happened in Singapore, could our laws provide justice for the victims? The answer is unsettling.
Perhaps you feel that deepfake scams are far removed from your own life. But AI is already shaping the fate of every ordinary person in more routine and more hidden ways: a bank’s AI system rejects your loan application without telling you why; an AI-assisted medical diagnosis recommends the wrong treatment plan, delaying care for your condition; a recruitment platform’s AI screening algorithm quietly filters out your résumé, and you never even find out it happened. All of these scenarios are possible.
Singapore already has a set of soft guidelines on AI governance: the Model AI Governance Framework issued by the Infocomm Media Development Authority, the Fairness, Ethics, Accountability and Transparency (FEAT) principles introduced by the Monetary Authority of Singapore for the financial sector, and the nationwide AI literacy initiatives promoted under the National AI Strategy 2.0. Together they have accumulated commendable experience in guiding corporate self-regulation.
But soft law can guide; it cannot provide a safety net. Once you step into a courtroom, the reality remains stark: Singapore currently has no legal provisions specifically targeting AI-related harm. According to a 2025 report by the international legal rankings publisher Chambers and Partners, Singapore’s courts have yet to see any publicly known claim arising from a malfunctioning AI system. One of Singapore’s most notable recent legislative actions on AI regulates election deepfakes and AI-manipulated content; it is not a comprehensive law protecting ordinary citizens from AI-related harm.
A Lawsuit Destined to Be Unfair
When ordinary people confront AI-related harm, they step into a lawsuit destined to be unfair, for three reasons.
First, algorithms are black boxes. Imagine this scenario: a bank rejects your housing loan application, and the only reply you receive is that “the system’s comprehensive assessment did not meet requirements.” What exactly was assessed, how the score was determined, and whether you can appeal all remain mysteries. In a 2025 academic paper, law professor Chen Guoyao of Singapore Management University pointed out: “How AI models learn, adapt, and generate recommendations and outputs may not be fully transparent and intuitive to users. Therefore, the opacity of AI makes it difficult for hospitals and doctors to evaluate and predict how medical AI will respond in specific medical situations.” When an AI’s decision-making process is opaque and the risk of harm is foreseeable to no one, the very basis for accountability disappears.
Second, no one claims responsibility. From creation to deployment, an AI system involves data collectors, algorithm trainers, system integrators, and end users. When something goes wrong, every link in the chain can shift blame to the next, forming a chain of responsibility with no endpoint; what victims end up chasing is, in the end, thin air.
Third, the burden of proof is crushing. The European Union once proposed a draft AI Liability Directive, intended to ease victims’ burden of proof through evidence disclosure obligations and limited presumptions of causation. The proposal was withdrawn in 2025, however, and the EU currently has no unified system of presumptions for AI-related tort claims. Singapore relies mainly on traditional tort law and product liability frameworks to handle AI-related harm, and has yet to establish a dedicated mechanism to ease the burden of proof in AI cases. Victims therefore still face high barriers in proving defects and causation.
Major jurisdictions around the world have realized that AI accountability can no longer remain at the level of “after-the-fact remedies.” After years of effort, the European Union established the world’s first comprehensive AI regulatory law, the AI Act. Its core logic is simple and powerful: the more an AI application affects the fate of ordinary people, the stricter the requirements should be. Any system classified as “high-risk AI” (such as in recruitment, credit, healthcare, and similar fields) must preserve complete decision records, undergo human review, and provide explanations to affected individuals.
In the United Kingdom, the autonomous driving sector established an insurance-centered liability mechanism: when an accident occurs while an autonomous driving system is operating, victims are generally compensated directly by insurers first, after which the insurance system seeks recovery from the responsible parties. This reduces the burden on victims to prove technical liability.
The Wuhan Autonomous Vehicle Incident Shocked the World
On the evening of March 31 this year, an incident occurred on the streets of Wuhan that shocked the global autonomous driving industry. Around 200 “Apollo Go” taxis operated by Baidu almost simultaneously came to a stop, causing multiple rear-end collisions and trapping passengers inside vehicles for up to two hours. The subsequent investigation pointed to an utterly ordinary cause: a system command issued by engineers to “stop and collect data” was pushed to all vehicles without sufficient verification.
At the end of April, China announced a suspension on issuing new autonomous vehicle permits and required eight leading companies to conduct “comprehensive self-inspections.” The real issue exposed by the incident was this: when AI system failures are “collective and instantaneous,” traditional after-the-fact accountability frameworks are fundamentally inadequate. China chose to freeze risks first through the administrative measure of “suspending permits,” but this is not a long-term solution. The real answer must still return to the legal level.
Singapore’s autonomous driving deployment has entered a new stage. The autonomous shuttle service in Punggol officially opened public trial rides on April 1. Among around 740 early participants, 99% indicated they would recommend the service to others. Every trial participant has, in fact, already become an “indirect party” to this legislative process.
On May 4, the Ministry of Transport launched a legislative consultation on autonomous vehicles, with the goal of submitting legislation to Parliament in 2027. For the first time, the consultation paper systematically clarified the responsibilities of four key actors in the autonomous driving ecosystem — autonomous vehicle technology responsible entities, fleet operators, onboard safety officers, and remote supervisors. It also covered vehicle approvals, licensing systems, penalties for serious violations, and liability rules during testing and commercial operations.
Even more noteworthy is a groundbreaking proposal: in autonomous vehicle accidents, the technology responsible entity would bear “advance compensation” liability, meaning insurers compensate victims in full first and then seek recovery from the parties actually at fault. This echoes the United Kingdom’s “pay first, recover later” mechanism, shifting the burden of proof away from ordinary victims and onto the parties with the greatest access to technical information. International experience has repeatedly validated this direction.
As a technology practitioner who has long focused on AI governance and accountability, the author could not remain silent during such a legislative window. On May 11, the author formally submitted written feedback through the autonomous vehicle legislative consultation platform, making five concrete recommendations, among them: bringing upstream suppliers of AI foundation models within the scope of regulation (to address the kind of risk seen in the Wuhan incident, where engineers’ commands were transmitted directly to every vehicle), mandating “decision log standards,” and creating “collective emergency response obligations” for simultaneous multi-vehicle failures.
The significance of this consultation goes far beyond autonomous driving. It is setting a template for how Singapore will handle AI accountability in the future. Three concrete developments are particularly worth anticipating.
The first is mandatory preservation of “decision logs.” The consultation paper has already proposed requiring autonomous vehicle operators to maintain complete accident records. If this principle can extend to other high-risk scenarios such as medical AI, financial AI, and recruitment AI, it could eliminate evidentiary difficulties at their source — without records, there can be no accountability; without accountability, there can be no justice.
The second is ordinary people’s “right to explanation.” Imagine an elderly patient who receives an AI-assisted diagnosis at a hospital and is told they are “high-risk,” yet no one can explain which data that judgment was based on. The right to explanation means the patient is entitled to an explanation a human can understand: not “the system indicates,” but “here is what the judgment rests on.” This right has long been established in European data protection regulations, and Singapore’s Personal Data Protection Act could be amended in the same direction.
The third is normalizing insurance mechanisms. The requirement that autonomous driving operators purchase compulsory third-party liability insurance rests on a simple logic: when proof is difficult, victims should not walk away empty-handed. Some may say this raises startup costs, but the author would rather regard it as an investment in social trust. Compared with the cost of public trust collapsing because of AI-related harm, a mandatory entry threshold is in fact the most effective protection for the entire industry.
The reason Wuhan’s autonomous vehicles could come to a halt without spiraling completely out of control is that they were still confined within the single domain of autonomous driving. But if AI systems in healthcare, finance, or law were to collectively malfunction, the consequences could be far more irreversible than a traffic jam.
A city can move very fast, but it must know how to brake. When AI makes mistakes, it is often not an accidental failure of an individual system, but an instantaneous breakdown at the system level. Today, Singapore has seized a window of opportunity to write “how to brake” into law. Establishing accountability mechanisms for AI mistakes is what allows innovation to go further and more steadily — and that is what a truly smart nation should look like.
The author is the chief technology officer of a local technology company
