*Anthropic’s Mythos is a warning shot. Singapore’s banking system needs to be ready*
https://www.straitstimes.com/opinion/anthropics-mythos-is-a-warning-shot-singapores-banking-system-needs-to-be-ready
2026-04-21
By Lin William Cong, President’s Chair Professor of Finance, Computing and Data Science at Nanyang Technological University, Singapore, where he serves as associate dean of Nanyang Business School and is the founding director of the Global Institute for Finance, Technology, and Society.
=====
When the US Treasury Secretary and the chair of the Federal Reserve convene an unscheduled meeting with Wall Street’s most senior executives, markets pay attention.
And when the catalyst is not a liquidity crisis or a sovereign default, but the capabilities of an artificial intelligence model that its own maker considers too dangerous to release publicly, the rest of the world’s financial centres should pay attention too.
On April 15, the Cyber Security Agency of Singapore issued an advisory to local organisations, urging them to strengthen their cybersecurity measures and patch critical vulnerabilities.
The model in question is Claude Mythos Preview, announced by Anthropic in early April. The company says Mythos has discovered vulnerabilities in major browsers and operating systems, including weaknesses in foundational digital infrastructure. Rather than release the model broadly, Anthropic is reportedly offering it first to major technology and infrastructure firms so they can patch their systems before adversaries acquire similar capabilities.
Reasonable people can debate whether Anthropic is overstating what Mythos can do. The company plainly has incentives to dramatise its own products. But for policymakers, the key issue is not whether every claim about this model is fully proven, but that the possibility was taken seriously by government officials and major financial institutions.
This tells us something important: frontier AI is no longer just a story about productivity tools or consumer applications. It is becoming a question of critical infrastructure, cyber resilience and, potentially, financial stability.
A different class of threat
As a major financial hub and a regional base for global banks, Singapore would not be insulated from a serious AI-driven cyber incident affecting international finance. It needs to act early.
If more powerful AI tools make it easier to find software weaknesses, automate attacks or exploit common digital systems used by many organisations, the effects will not stop at banks or regulators. They could reach the public in ordinary but increasingly costly ways.
In Singapore, phishing scams involving fake DBS and POSB e-mails were reported in 2026, with at least 72 cases and losses of some $484,000. Already, scams led to $913 million in losses in Singapore in 2025. AI could make such attacks even more convincing, allowing criminals to mimic bank alerts, tailor scam messages and imitate the authorities with far greater realism.
In a more serious scenario, a cyberattack on shared digital infrastructure could delay digital payments or disrupt access to banking services. Trust in finance is built in everyday transactions such as when a person expects a salary to arrive on time, a card payment to go through, or a banking app to open safely.
To its credit, the Monetary Authority of Singapore (MAS) has been among the more forward-looking regulators on AI governance. It has introduced frameworks to guide the responsible use of AI in finance, including the FEAT principles on fairness, ethics, accountability and transparency, and the Veritas initiative, which helps financial institutions test and assess their AI systems.
Recent efforts like Project MindForge show that Singapore is also beginning to grapple with newer and more complex AI risks, so the nation is not starting from scratch. But the Mythos episode suggests that the next gap may lie elsewhere.
Much of the existing policy framework, in Singapore and globally, has focused on how financial institutions use AI internally: model risk, fairness, explainability, and accountability. Those remain important concerns. Yet different threat vectors are now emerging: increasingly capable AI systems or AI agents developed outside the traditional financial sector, but potentially deployable against it.
Banks and regulators already invest heavily in cybersecurity, but much of their defensive architecture has been built around known vulnerabilities, known signatures and adversaries operating within relatively familiar bounds.
An AI system that can autonomously discover previously unknown weaknesses in widely used software represents a more demanding class of threat, especially in a financial system built on shared cloud, software and communications infrastructure.
The challenge becomes sharper as finance itself becomes more automated. Stablecoins, tokenised assets, digital payment rails and software-mediated financial intermediation are expanding the role of code, automation and machine-speed execution.
As autonomous AI agents increasingly participate in trading, treasury operations and on-chain finance, the speed of both innovation and disruption rises, while advances in quantum computing could over time threaten the cryptography that underpins digital finance.
In such an environment, a vulnerability may not remain an isolated technical flaw. It can become a system-level event. That is why the next stage of financial governance cannot rely only on more rules or better compliance. It also requires better ways to test what could happen before a real crisis occurs.
Beyond a siloed strategy
This is where what I call economic world models come in. These are simulation tools that go beyond testing a single bank’s defences. They model how markets, institutions and people actually behave – how a shock at one firm spreads to others, how customers react when a payment app goes down, how attackers and defenders change tactics as incentives shift.
Think of it as a flight simulator for the financial system: a safe environment to rehearse crises before they happen. This matters because financial shocks do not unfold like a machine part snapping without warning. They spread more like panic in a crowd, through watching, reacting and adjusting, and conventional cyber testing was not designed to capture that.
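The contagion mechanics described above can be sketched in a toy agent-based simulation. Everything here is an illustrative assumption, not a calibrated model of any real system: banks hold claims on a few random counterparties, a shock at one bank wipes out its capital buffer, and each failure imposes both a write-down and a panic-withdrawal loss on its creditors.

```python
import random

def simulate_contagion(n_banks=10, exposure=0.3, capital=1.0,
                       shock_bank=0, shock_size=2.0, panic_factor=0.5,
                       seed=42):
    """Toy contagion model: a loss at one bank propagates through
    interbank exposures, and depositors run on banks they see weakening.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    # Each bank holds claims on three random counterparties.
    links = {i: rng.sample([j for j in range(n_banks) if j != i], 3)
             for i in range(n_banks)}
    buffers = [capital] * n_banks
    buffers[shock_bank] -= shock_size      # the initial cyber/credit shock
    failed = set()
    frontier = [shock_bank]
    while frontier:
        nxt = []
        for b in frontier:
            if buffers[b] < 0 and b not in failed:
                failed.add(b)
                for c in range(n_banks):
                    if b in links[c]:      # bank c holds a claim on b
                        buffers[c] -= exposure                  # write-down
                        buffers[c] -= panic_factor * exposure   # deposit run
                        nxt.append(c)      # re-check c for failure
        frontier = nxt
    return failed

failures = simulate_contagion()
print(f"{len(failures)} of 10 banks fail after the initial shock")
```

Even a sketch this crude illustrates the point in the text: whether a single shock stays contained or cascades depends on behavioural parameters such as the panic factor, which conventional single-firm cyber tests do not vary.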
Prototypes of such tools have already been developed at Nanyang Technological University, and Singapore is well placed to develop them further.
A practical next step would be for MAS and its partners to use market-scale and agent-based simulations for risk monitoring and stress tests that go beyond today’s cyber exercises, which focus mainly on whether a single firm can recover from a defined attack.
The bigger question now is how disruption would ripple through payment rails, settlement systems such as MEPS+ and FAST, and the many regional banks and corporates that route transactions through Singapore.
That matters because Singapore is not just another domestic market. It is a regional treasury, payments and clearing hub.
MAS has described it as one of the world’s top offshore renminbi centres, and DBS joined ICBC Singapore as an RMB clearing bank in December 2025. A serious disruption here could therefore spread well beyond Singapore into the wider region’s trade and settlement flows.
AI-driven shocks will not stop at borders, and Singapore is ideally positioned to convene an open, cross-border simulation platform, bringing together banks, regulators, researchers and technology providers across the region to share scenarios and stress-test them together.
In an AI era, watching for system-wide risks can no longer be siloed within each country.
Even then, Singapore should build its own AI capability in this space rather than rely entirely on foreign-built systems.
Local universities and research institutes already have strong foundations in AI and financial modelling. Multilingual AI models, scenario sandboxes and digital twins of the financial system should become part of the country’s core governance infrastructure, as essential as its physical infrastructure.
None of this requires accepting the most alarmist reading of what Mythos can do today. Healthy scepticism is entirely appropriate. But prudent governance does not wait for the worst case to be conclusively established. It responds when the direction of risk becomes clear.
AI capability is beginning to intersect with financial infrastructure in ways that may be faster, more adaptive and harder to contain than before.
For Singapore, the question is not only how banks should use AI responsibly, but how the country should prepare for a world in which more powerful AI may be used to test, probe or disrupt the systems that people rely on every day.
That may sound abstract until something goes wrong. Then it becomes concrete very quickly. It could be a salary that does not arrive on time, a transfer that cannot be made, a bank account that has been compromised, or a customer who no longer trusts what appears on their mobile screen.
In that sense, preparing for AI-related financial risk is not just a technical exercise or a regulatory concern. It is part of protecting the reliability on which modern economic life depends.