
Charles Ferguson worries that our policymaking infrastructure is incapable of comprehending the new AI technology


There are times when a major global development demands a special response from many academic disciplines, industries, and departments of government. This was the case with World War II, nuclear weapons, and the Cold War, and it is the case again with generative AI.

Yet too often, discussions about AI are overly specialized or siloed between technologists, economists, and other disciplines – from political science, psychology, and sociology to law and military studies. This is a problem because the technologists are certainly right that AI will change everything, fast, and that the conventional policy world isn’t keeping up. But just as war is too important to be left to the generals, AI is too important to be controlled solely by those inventing it, no matter how brilliant they are.

Most AI technologists and entrepreneurs are wildly optimistic. They anticipate revolutionary advances in medicine, the elimination of hard physical labor, radically accelerated productivity growth, and universal abundance. They expect such outcomes partly because there is money to be made, but also because their belief in the technology’s potential is sincere.

But sincerity often accompanies naivete, as I know all too well. Thirty years ago, I founded the startup that developed the first software tool enabling anyone to build a website, and I totally drank the Kool-Aid. We told ourselves that our product would allow truth-tellers and innovators to bypass gatekeepers, liberating and enlightening everyone. Social networks would, of course, do the same, and together we would create a decentralized, egalitarian paradise of unfiltered truth. How wrong we were.

When I look at the AI landscape, heavily populated by extremely young founders, I see the same naivete. I recently spoke with a brilliant young CEO whose AI startup is already valued at several billion dollars. When asked whether the problem of AI deepfakes and disinformation worried him, he replied (to paraphrase): Of course not. All you need to do is verify that something comes from a trustworthy source. Easy.

Really? How will these trustworthy sources know what is real when someone sends them a photograph, document, audio recording, or video? What will they do when thousands of images or videos come in, each contradicting the others? How will we know whether something posted on social media is real? How can news sources remain current and profitable if they must laboriously verify the reality of absolutely everything?

Still, if the technologists are overly optimistic, the economists suffer from a different sort of tunnel vision. They tend to see everything as a smooth equilibrium of self-adjusting markets. They predict substantial but gradual productivity improvements, dismissing extreme scenarios and neglecting both radical opportunities and potentially grave problems alike.

“Calm down, we’re the adults in the room,” economists intone. In fact, contemporary economics, obsessed with its models, has too often been wrong, divorced from reality, or even compromised by corruption.

Consider Larry Summers, who recently became a pariah over his correspondence with the convicted sex offender Jeffrey Epstein. The outcry against him was certainly justified, but he deserved exile much earlier for a career’s worth of disastrous economic policies that devastated the lives of millions.

Recall Summers’ leading role in the deregulation of finance while at the Treasury Department during President Bill Clinton’s administration. Even when confronted in the late 1990s with the Asian financial crisis and the dot-com bubble, Summers, along with Robert Rubin, who preceded him as Treasury secretary, and Federal Reserve Chair Alan Greenspan, gleefully pushed through the repeal of the Glass-Steagall Act (which separated investment banking from retail banking). They also banned the regulation of derivatives, which would become a major cause of the 2008 financial crisis.

Later, while serving in the Obama administration, Summers advocated bailing out the banks without insisting on any penalties or prosecution of bankers, despite clear evidence of massive fraud. He then made millions giving speeches to banks and banking conferences. But more to the point, Summers wasn’t exceptional. Mainstream economics has an appalling track record, having told us that globalization would lift all boats, that industrial policy never works, that deregulation could not cause a financial crisis, that development economics would solve Africa’s problems, and that we need not worry about monopolies.

Then there is the discipline’s corruption problem. Many prominent economists’ incomes during these years were dominated by corporate payments. In 2004, Goldman Sachs persuaded Glenn Hubbard, then the dean of Columbia Business School, to co-author an article with William Dudley, then Goldman’s chief economist, arguing that unregulated derivatives made the financial system safer. Four years later, the 2008 crisis revealed those derivatives to be extremely dangerous. The next year, Dudley became president of the Federal Reserve Bank of New York. Rarely has failing upward been so stark.

Of course, there are exceptions. Among technologists, Anthropic’s CEO Dario Amodei has been notably perceptive and honest both about AI’s opportunities and its dangers. In economics, the Nobel laureate Simon Johnson wrote perhaps the single best article about how the US financial industry captured federal policy and caused the 2008 crisis. But the overall record of the economics discipline does not inspire confidence, and now too many economists (most of whom know very little about AI) seem to be underestimating the technology – both its potential benefits and dangers.

My sense is that political science, psychology, law, education, sociology, and military studies get a better score. They tend to put reality before models, and they consider issues that both the technologists and economists often dismiss. But they, too, are embedded in an institutional matrix – of universities, think tanks, and government agencies – that is no longer fit for purpose. To govern AI, we need cooperation across all these disciplines, and we need it fast.

I find it striking that many of the best AI founders I meet are graduate or even undergraduate dropouts (in fact, the $200,000 Thiel Fellowship requires recipients not to have a university degree). By contrast, conventional policymaking (in both the US and Europe) relies on a staid, bureaucratic system whose creaking machinery is no match for AI.

Of course, we should not eliminate universities, think tanks, or government policymaking. But extraordinary circumstances demand extraordinary responses. For many policy issues, slow and conventional is probably okay. Not for AI.

Charles Ferguson is an angel investor, a limited partner in six AI venture capital funds, and a nonexclusive partner in Davidovs Venture Collective. His direct investment positions include three technology incumbents (Apple, Microsoft, and Nvidia) and many AI startups, including Perplexity, Etched, CopilotKit, Paradigm, Browser Use, FuseAI, and Pally.


Charles Ferguson is a technology investor, policy analyst, and the director of many documentary films, including the Oscar-winning Inside Job. Copyright: Project Syndicate, 2026. It is here with permission.
