Eighteen months ago I wrote about the potential economic effects of artificial intelligence – drawing attention to a “new Solow paradox”, in which rapid technological progress in machine learning and artificial intelligence has coexisted with continuing stagnation in productivity. What has happened since then?
Technical progress has continued, driven by massive investments, both in the companies developing the technologies themselves and in the huge server farms needed to provide the computing power to train and run the models. Interestingly, there has been a significant upturn in productivity in the USA since 2023 (analysed in this recent Resolution Foundation report, PDF). Some of this is due to the USA’s increasing production of oil and gas, and some comes from the tech sector itself. But there has also been significant productivity growth in those service sectors that use, rather than develop, technology. It’s at least plausible that some of this is being driven by the adoption of AI. Perhaps we’re seeing the beginnings of a resolution of the new Solow paradox.
What’s going to happen next? To try to clarify some of the widely divergent assumptions, it’s perhaps helpful to sketch out some scenarios. My primary purpose here isn’t to guess the most likely outcome – personally, I’m deeply uncertain about what’s going to happen. Instead, I think it’s more interesting to ask what assumptions various actors are operating under, and how those assumptions themselves constrain and influence the future.
1. Intelligence explosion
In this scenario, two dynamics lead to the transformation of economy and society through AI. The first is a process of recursive self-improvement, in which the application of AI technologies to the development of AI methods themselves leads to runaway growth in the power and effectiveness of those methods. The second is the increasing application of AI to the physical world, leading to rapid technological progress in all fields. The outcome is a winner-takes-all economy, in which the controllers of the new technologies enjoy unprecedented political and economic power.
One of the most compelling early use cases of LLMs has been writing code, so it’s a natural extension to think that AI systems can be used to run systematic computer experiments to find and optimise algorithms for machine learning – it seems a fair assumption that this is already happening, contributing to the progress we’re seeing in the development of LLMs and reasoning systems.
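To make the idea of “systematic computer experiments” concrete, here is a minimal sketch of the kind of search loop involved – a toy random search over training hyperparameters. Everything in it (the search space, the synthetic scoring function) is invented for illustration; real pipelines launch actual training runs and use far more sophisticated search strategies.

```python
import random

def evaluate(config):
    """Toy stand-in for 'train a model and measure validation loss'.
    A real loop would launch an actual training run; here the loss is
    a synthetic function with a known optimum, purely for illustration."""
    lr, width, depth = config["lr"], config["width"], config["depth"]
    # Pretend the best settings are lr ~ 1e-3, width ~ 512, depth ~ 12.
    return (
        ((lr - 1e-3) * 1e3) ** 2
        + ((width - 512) / 512) ** 2
        + ((depth - 12) / 12) ** 2
    )

def random_search(n_trials=200, seed=0):
    """Run n_trials random experiments, keeping the best configuration."""
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_trials):
        config = {
            "lr": 10 ** rng.uniform(-5, -1),       # log-uniform learning rate
            "width": rng.choice([128, 256, 512, 1024]),
            "depth": rng.randint(2, 24),
        }
        loss = evaluate(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

if __name__ == "__main__":
    config, loss = random_search()
    print(f"best config found: {config} (loss {loss:.4f})")
```

The point is that once the evaluation step is itself automated, the loop runs unattended at machine speed – which is exactly the dynamic the intelligence explosion scenario extrapolates.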
But to make the hoped-for transformational impact on the economy and society, artificial intelligence needs a more direct interaction with the physical world than existing corpora of text can provide. This needs the incorporation of real-time data of all kinds, together with improvements in robotics, to intervene in the world. Self-driving cars are the prototype system here, but if AI is going to accelerate technological progress itself, we need to move to self-driving laboratories.
Fully automated scientific discovery leads to rapid progress in medicine, but the biggest impact comes in developing the hardware infrastructure of computing itself. The planar CMOS integrated circuits that computing now depends on are replaced by new 3D assemblies of nanoscale electronic components, brought together in systems of ungraspable complexity. And so increasing computing power in turn feeds the acceleration of this intelligence explosion.
What does the economy look like in this scenario? It’s winner-takes-all – the firm, organisation or nation which achieves this goal first accumulates an unprecedented degree of economic and political power. On the other hand, technological progress becomes so rapid that there’s a hope that everyone benefits.
Who believes in this scenario? My impression is that this is a relatively conservative version of the Silicon Valley consensus, summarised in the near-universal SV opinion that artificial general intelligence (a term usually left poorly defined) is imminent. If this is what one believes, then the logical course of action is to devote all possible resources to achieving this goal as soon as possible. It’s obviously a matter of self-interest to be one of the controllers of such an all-powerful technology.
But altruists can also reassure themselves that entirely focusing on this goal is the most effective way of solving any global problem. The corollary is that normal approaches to scientific and technological progress will soon become obsolete, so the existing scientific enterprise becomes less and less relevant and doesn’t need to be sustained.
2. Excel in prose
In this scenario, the development of large language models has essentially solved the problem of automating verbal reasoning, in the same way that spreadsheets automated arithmetic and bookkeeping. As happened with spreadsheets before them, this leads to the quiet transformation of most business processes. Productivity growth recovers, perhaps even doubling, to return to levels seen in the 1990s. Beyond LLMs, the application of machine learning and artificial intelligence to the physical world continues to make incremental progress, though it remains markedly slower than progress in the digital world.
The first killer application for LLMs was machine translation, now a solved problem. Many of the problems of information flow in big organisations are susceptible to automation by LLMs – producing meeting notes, summarising documents, generating routine communications. As with previous technologies, the key factor limiting the speed of uptake is the need to adapt existing processes to create places where LLMs can contribute, but the number of compelling use cases steadily expands.
In software engineering, LLM assistants dramatically speed up the process of writing code, leaving more time for higher-order tasks such as designing system architectures. The effectiveness and efficiency of LLMs themselves are significantly improved; a focus emerges on fine-tuning LLMs on custom datasets to improve their reliability and accuracy. More generally, LLMs enable natural language interfaces to computer systems of all kinds, potentially reducing the barriers to their widespread adoption.
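As an illustration of what such a natural language interface might look like, here is a minimal sketch of a common pattern: translating a user’s question into a database query. The `complete()` function is a hypothetical placeholder for a call to whatever LLM is available, and the schema is invented for the example; the guard against non-SELECT statements hints at the reliability work these interfaces require.

```python
import sqlite3

# Invented example schema, purely for illustration.
SCHEMA = "CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);"

def complete(prompt: str) -> str:
    """Hypothetical placeholder for a call to an LLM completion API."""
    raise NotImplementedError("wire this up to the model of your choice")

def ask(question: str, db: sqlite3.Connection):
    """Translate a natural language question into SQL and run it."""
    prompt = (
        f"Schema: {SCHEMA}\n"
        f"Write a single read-only SQLite query answering: {question}\n"
        "Return only the SQL."
    )
    sql = complete(prompt).strip()
    # A minimal reliability guard: refuse anything that isn't a SELECT.
    if not sql.lower().startswith("select"):
        raise ValueError(f"refusing to run non-SELECT statement: {sql!r}")
    return db.execute(sql).fetchall()
```

In a production system the validation layer would be far more thorough – which is exactly where much of the “adapting existing processes” effort in this scenario goes.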
On the other hand, early enthusiasm for artificial intelligence as a transformational technology in areas such as biotechnology and healthcare leads to some disappointment, despite early successes like AlphaFold. It turns out that the limiting factors here are fundamental shortcomings in our understanding of how biology works, and while machine learning and laboratory automation provide useful new tools, LLMs, as a technology fundamentally based on manipulating language, provide no dramatic shortcuts to developing scientific understanding of the natural world.
What are the economic implications of this scenario? One should expect significant productivity gains, but with a delay, as business processes have to be adapted to make the most of the new technology. As with any new technology, many of the early movers go bankrupt, but the firms that survive and make the most money are those that successfully integrate the technology into wider suites of business-oriented software.
Who believes in this scenario? I think this is close to a consensus view among those economists and policy makers not directly involved in the AI business. For example, a recent US National Academies consensus report, Artificial Intelligence and the Future of Work, from a blue-ribbon committee of economists and computer scientists co-chaired by Erik Brynjolfsson and Tom Mitchell, identifies AI as a general purpose technology with significant potential to drive productivity improvements. However, it observes that “achieving the full benefits of AI will likely require complementary investments in new skills and new organizational processes and structures.”
In this scenario, the most effective focus will probably be on technology diffusion and skills development. Technological advances in other areas will depend on continued research and development spending, with the existing scientific enterprise benefitting incrementally from machine learning techniques.
3. Crash and burn
History has featured a number of financial bubbles, in which asset prices rise beyond any seeming connection to their underlying value. Many of these are related to technological advances, in which the advent of new technologies encourages an irrational exuberance based on overoptimism, both about the speed with which the new technologies will have an impact and about the ultimate scale of that impact. The classic recent example is the dot-com bubble of the late nineties, while one can go back in history to episodes such as the Railway Mania of the 1840s in the UK.
In this scenario the current enthusiasm for AI is revealed as one such bubble – in scale, one of the biggest in history. The bursting of that bubble exposes a scale of over-investment so large as to risk the stability of the whole financial system, while the technology itself disappoints. Ultimately, rationality returns, and the technology finds useful applications. As in the case of the dot-com bubble, the capital infrastructure installed may ultimately yield useful returns, though not to the original investors.
The initial danger signals are continuing technical difficulties limiting the reliability of large language models, and fundamental issues in the business models underpinning the very large investments being made in the computing infrastructure to support LLMs. Larger models, trained on more data, still suffer from “hallucinations” – statements delivered with great confidence and plausibility that turn out to be factually incorrect. The perception that LLMs are demonstrating any kind of real intelligence turns out to be largely a case of anthropomorphism. In fact, what LLMs tell us is not how intelligent and original computers have become, but how unoriginal and derivative most human interactions are. Rather than being “stochastic parrots”, LLMs have turned out to be automated “catechisms of cliché”.
Meanwhile, for those use cases that do turn out to have some value, a panoply of rival models – many open source – destroys the pricing power of the tech giants. It turns out that there is no “moat”, no way of achieving and protecting the monopoly position that Silicon Valley tech firms aspire to. Lacking the financial returns that would justify the huge cost of building out the infrastructure for artificial intelligence, those investments are written off, and the valuations of the tech companies – especially the “Magnificent Seven” giants, which saw such huge increases while the enthusiasm for AI persisted – collapse.
To get a sense of the scale of the bubble: the market capitalisation of the Magnificent Seven increased by about $7 trillion between January 2023 and May 2025. Microsoft, Alphabet, Amazon and Meta have reported combined capital expenditure on AI of $246bn in 2024, up from $151bn in 2023. OpenAI’s “Stargate” project to develop AI computing infrastructure aims to raise $100bn now, with a total target investment of $500bn. Substantial additional investments will have come from private equity and venture capital.
The overvaluation of the Magnificent Seven, together with the excess capital investments in the private market, can be thought of as a “bezzle”, in the sense discussed by Michael Pettis here. While the bubble persists, the owners of those overpriced assets are in possession of apparent wealth that doesn’t reflect the real productive capacity of the economy, but does lead to increases in GDP through a number of channels. But the bezzle always reverses, in turn depressing GDP, as the loss of apparent wealth is distributed across the economy. The outcomes include the failure of financial institutions (sometimes subsequently bailed out at the expense of the taxpayer), people across the world losing their savings, and a freezing of investments in other areas of technology as the venture capital industry retrenches.
As the situation stabilises, useful, but not transformative, applications are found for large language models, and the massive installed infrastructure of high performance computing finds new applications in science and engineering.
This is definitely a contrarian scenario – but contrarians are not always wrong. They will be bracing themselves for financial turbulence.
Last words
These are scenarios, not predictions, and it is possible to imagine many other possibilities. But given the constant tendency to treat technological progress as unfolding along a single fixed track, it’s important to hold on to the fact that the future is open, especially when the range of plausible outcomes seems so large.