By Lenny Mendonca and Martin Neil Baily*
What is the biggest risk posed by AI? While many would point to the financial system, our attention would be better directed toward labour markets.
Financial concerns are certainly understandable. Even in 2026, the specter of 2008 haunts every conversation about economic risk. When Lehman Brothers collapsed and the global banking system teetered, governments faced a momentous choice: bail out the banks with public money or watch the financial system implode. In the United States, policymakers chose a bailout, encouraging future risk-taking and enraging taxpayers who bore the cost.
But US regulators then spent the following decade building a new line of defense, which is now embedded in the global banking architecture. In the process, they offered a roadmap for addressing any systemic risks now accumulating within the AI industry.
To be sure, the Financial Stability Board (FSB) warns that regulatory frameworks designed to monitor AI are still in their early stages. But the risks remain manageable. The AI industry has arrived at a juncture that should look familiar to anyone who remembers the pre‑2008 financial system. Market concentration is extreme, the interconnections between major players are deep, and the industry’s critical infrastructure runs through single points of failure.
Before 2008, risk in the financial system was assumed to be widely distributed. It was not. Leverage was hidden in off-balance-sheet vehicles, counterparty exposures were opaque, and the failure of a single institution could cascade unpredictably through the entire system. Regulation was scattered across numerous entities, none of which had a complete picture of what was happening. Regulators had no framework for thinking about systemic risks, and no way to designate which firms would bring down others if they failed.
The AI industry has a similar concentration problem. According to Menlo Ventures, just three companies—Anthropic, OpenAI, and Google—control roughly 88% of the enterprise large‑language‑model market. And the hardware layer is even more concentrated, with TSMC completely dominating advanced-node semiconductor manufacturing, raising concerns about a potential global compute bottleneck. When a 7.4-magnitude earthquake struck Taiwan in April 2024, it temporarily disrupted semiconductor production and reminded the world how geographically concentrated this infrastructure has become.
Fortunately, the central innovation of post‑2008 financial regulation has proven effective: identify the institutions whose failure would be catastrophic, and mandate that they hold sufficient total loss‑absorbing capacity (TLAC)—equity and long-term debt that can be written down—to fail safely. The results are clear. A Congressional Research Service analysis of US bank failures shows a sharp decline in failures following the post‑crisis regulatory reforms.
Although none of the tools introduced after the financial crisis translates directly to AI (banks hold financial assets that can be valued and stress‑tested, whereas AI systems rely on training data, model weights, and compute capacity), the underlying regulatory logic still applies. Regulators need only consider three adaptations.
The first is systemic designation and disclosure. Regulators and standard setters should identify which AI providers, cloud platforms, and chip manufacturers have become critical infrastructure for the financial system. The FSB’s October 2025 report on AI monitoring acknowledged that financial institutions are increasingly dependent on a small number of major technology providers for AI capabilities, but that monitoring efforts remain at an “early stage,” owing to data gaps and a lack of standardized taxonomies. Fixing that is the first step.
Second, operational resilience requirements should serve as a proxy for capital buffers. Instead of adhering to TLAC capital requirements, systemically important AI providers would have to demonstrate redundancy, failover capacity, and genuine substitutability. Financial firms relying on a single AI provider should face concentration limits analogous to the exposure rules that prevent banks from lending too much to a single counterparty. The FSB’s Third-Party Risk Management and Oversight Toolkit already provides a framework; regulators should use it more aggressively.
Third, we need stress testing for AI-driven correlated failures. The European Systemic Risk Board warns that because AI models are “history-constrained”—trained on past data—they are inherently poor at predicting tail events outside their training distribution. This is precisely the kind of model risk that stress tests are designed to reveal. Regulators should develop AI-specific stress scenarios—the failure of a major cloud provider, a regulatory shock to a dominant model, or a geopolitical disruption to chip supply chains—and require financial institutions using AI in critical functions to demonstrate that they can absorb the shock.
The good news is that AI-related failures would not necessarily trigger a 2008-style financial crisis. If regulators act with the appropriate urgency, the potential systemic financial risk from AI is manageable.
But what AI may do to people who work for a living is a deeper and far more consequential challenge. The scale of potential labour displacement from AI is no longer hypothetical. IBM has replaced hundreds of people in its HR department, where AI now handles 94% of routine tasks; Salesforce has reduced hiring for engineering and customer service roles; and Shopify’s CEO has said that new hires will not be approved unless hiring managers can demonstrate that AI cannot do the job. The World Economic Forum’s Future of Jobs Report 2025 projects that 39% of workers’ core skills will be disrupted by 2030, and McKinsey & Company estimates that half of today’s work activities could be automated between 2030 and 2060.
What is to be done? Although research from the Harvard Kennedy School suggests that retraining workers for AI-exposed occupations can involve substantial earnings penalties, that is no reason to abandon retraining. Countries such as Denmark and Singapore have invested heavily in training, and their programs work well.
In any case, getting employers involved is essential, because training programs must equip workers with skills that are in demand now and will be needed in the future. Ensuring access to high-speed broadband and digital-literacy training is also crucial.
Regulators have the tools to prevent an AI-driven financial crisis. But we still need policymakers to get serious about ensuring that the AI revolution works for everyone, not just the few who own the underlying technologies.
* Lenny Mendonca, former chief economic and business adviser to Governor Gavin Newsom of California, is Senior Partner Emeritus at McKinsey & Company. Martin Neil Baily is Senior Fellow Emeritus in Economic Studies at the Brookings Institution and a former chairman of US President Bill Clinton’s Council of Economic Advisers (1999-2001). Copyright 2026 Project Syndicate. Used with permission.