Implications of Rachel Reeves’s Mais Lecture for Science & Innovation Policy

There will be a general election in the UK this year, and it is not impossible (to say the least) that the Labour opposition will form the next government. What might such a government’s policies imply for science and innovation policy? There are some important clues in a recent, lengthy speech – the 2024 Mais Lecture – given by the Shadow Chancellor of the Exchequer, Rachel Reeves, in which she sets out her economic priors.

In the speech, Reeves sets out what she sees as the underlying problems of the UK economy – slow productivity growth leading to wage stagnation, low investment levels, poor skills (especially intermediate and technical) and “vast regional disparities, with all of England’s biggest cities outside London having productivity levels below the national average”. I think this analysis is now approaching a consensus view – see, for example, this recent publication – The Productivity Agenda – from The Productivity Institute.

Interestingly, Reeves resists the temptation to blame everything on the current government, stressing that this situation reflects long-standing weaknesses that began in the early 1990s, were not sufficiently challenged by the Labour governments of the late 1990s and 2000s, and were then made much worse in the 2010s by austerity, Brexit, and post-pandemic policy instability. Singling out the Conservative Chancellor of the Exchequer Nigel Lawson as the author of policies that were both wrong in principle and badly executed, she identifies this period as the root of “an unprecedented surge in inequality between places and people which endures today. The decline or disappearance of whole industries, leaving enduring social and economic costs and hollowing out our industrial strength. And – crucially – diminishing returns for growth and productivity.”

To add to our problems, Reeves stresses that the external environment the UK now faces is much more challenging than in previous decades, with geopolitical instability reviving basic questions of national security, uncertainties from new technologies like AI, and the challenges of climate instability and the net zero energy transition. She is blunt in saying that “globalisation, as we once knew it, is dead”, and that “a growth model reliant on geopolitical stability is a growth model resting on increasingly shallow foundations.”

What comes next? For Reeves, the new questions are “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”, and the answers are to be found in what the economist Dani Rodrik calls “productivism”.

In practice, this means an industrial strategy which, recognising the limits of central government’s information and capacity to act, works in partnership. This needs to have both a sector focus – building on the UK’s existing areas of comparative advantage and its strategic needs – and a regional focus, working with local and regional government to support the development of clusters and the realisation of agglomeration benefits.

In terms of the mechanics of the approach, Reeves anticipates that this central mission of government – restoring economic growth – will be driven from the Treasury, through a beefed-up “Enterprise and Growth” unit. To realise these ambitions, she identifies three areas of focus: recreating macroeconomic stability; investment, particularly in partnership with the private sector; and reform of the planning system, housing, skills, the labour market and regional governance.

Innovation is a central part of Reeves’s vision for increased investment, partly through the familiar call for more capital to flow to university spin-outs. But there is also a call for more focus on the diffusion of new technologies across the whole economy, including what Reeves has long called the “everyday economy”. In my view, this is correct, but it will need new institutions, or the adaptation of existing ones (as I argued, with Eoin O’Sullivan, in “What’s missing in the UK’s R&D landscape – institutions to build innovation capacity”). There is also a very sensible commitment to a ten-year funding cycle for R&D institutions – important not least because confidence in the longevity of programmes is essential if the private sector is to be persuaded to co-invest.

This was quite a dense speech, and the commentary around it – including the pre-briefing from Labour – was particularly misleading. I think it would be a mistake to underestimate how much of a break it represents from the conventional economic wisdom of the past three decades, though the details of the policy programme remain to be filled in, and, as many have commented, its implementation in a very tough fiscal environment is going to be challenging. Our current R&D landscape isn’t ideally configured to support these aspirations and the UK’s current challenges (as I argue in my long piece “Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape”); I’d anticipate some reshaping to support the “missions” that are intended to give some structure to the Labour programme. And, as Reeves says unequivocally, of these missions, the goal of restoring productivity and economic growth is foundational.

Optical fibres and the paradox of innovation

Here is one of the foundational papers of the modern world – in effect, reporting the invention of optical fibres. Without optical fibres, there would be no internet, no on-demand video – and no globalisation, in the form we know it, with the highly dispersed supply chains made possible by the cheap and reliable transmission of information between nations and continents. This work won a Nobel Prize for Charles Kao, a Hong Kong Chinese scientist then working at STL in Essex, a now-defunct corporate laboratory.

Optical fibres are made of glass – so, ultimately, they come from sand – as Ed Conway’s excellent recent book, “Material World” explains. To make optical fibres a practical proposition needed lots of materials science to make glass pure enough to be transparent over huge distances. Much of this was done by Corning in the USA.

Who benefitted from optical fibres? The value of optical fibres to the world economy isn’t fully captured by their monetary value. As with all manufactured goods, productivity gains have driven their price down to almost negligible levels.

At the moment, the whole world is being wired with optical fibres, connecting people, offices and factories to superfast broadband. Yet world trade in optical fibres is worth just $11 bn, less than 0.05% of total world trade. This is characteristic of that most misunderstood phenomenon in economics, Baumol’s so-called “cost disease”.

New inventions successively transform the economy, while innovation makes their price fall so far that, ultimately, in money terms they are barely detectable in GDP figures. Nonetheless, society benefits from these innovations, taken for granted through ubiquity and low cost. (An earlier blog post of mine illustrates how Baumol’s “cost disease” works through a toy model.)
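The mechanics can be sketched in a few lines of Python. This is my own minimal two-sector illustration, with invented numbers: a “progressive” sector whose productivity grows 5% a year while demand for its output grows only 2%, a “stagnant” sector with neither, and a single economy-wide wage that tracks overall productivity growth.

```python
# Minimal two-sector illustration of Baumol's "cost disease" (invented numbers).
# Sector A ("progressive", e.g. telecoms hardware): productivity +5%/yr, demand +2%/yr.
# Sector B ("stagnant", e.g. personal services): no productivity growth, flat demand.

wage, prod_a, prod_b = 1.0, 1.0, 1.0
qty_a, qty_b = 100.0, 100.0

for year in range(51):
    price_a = wage / prod_a            # competitive price = unit labour cost
    price_b = wage / prod_b
    nominal_a = price_a * qty_a        # sector A's contribution to nominal GDP
    nominal_b = price_b * qty_b
    share_a = nominal_a / (nominal_a + nominal_b)
    if year % 10 == 0:
        print(f"year {year:2d}: real output A = {qty_a:6.1f}, "
              f"relative price A = {price_a / price_b:.3f}, "
              f"nominal GDP share A = {share_a:.1%}")
    prod_a *= 1.05                     # productivity growth in A only
    qty_a *= 1.02                      # demand for A grows more slowly
    wage *= 1.05                       # wages track the progressive sector
```

Over fifty years sector A’s real output nearly triples and its relative price falls more than tenfold, so its share of nominal GDP shrinks from 50% to under 20% – the same pattern as the vanishingly small money value of world trade in optical fibres.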

To have continued economic growth, we need to have repeated cycles of invention & innovation like this. 30 years ago, corporate labs like STL were the driving force behind innovations like these. What happened to them?

Standard Telecommunication Laboratories in Harlow was the corporate lab of STC (Standard Telephones and Cables), a subsidiary of ITT with a long history of innovation in electronics, telephony, radio communications and TV broadcasting in the UK. After a brief period of independence from 1982, STC was bought by Nortel, the Canadian descendant of the North American Bell System. Nortel needed a massive restructuring after the late-1990s internet bubble, and went bankrupt in 2009. The STL labs were demolished and are now a business park.

The demise of Standard Telecommunication Laboratories was just one example of the slow death of UK corporate laboratories through the 1990s and 2000s, driven by changing norms in corporate governance and growing short-termism. These were well described in the 2012 Kay Review of UK Equity Markets and Long-Term Decision Making. This has led, in my opinion, to a huge weakening of the UK’s innovation capacity, whose economic effects are now becoming apparent.

Science and Innovation in the 2023 Autumn Statement

On the 22nd November, the Government published its Autumn Statement. This piece, published in Research Professional under the title Economic clouds cast gloom over the UK’s ambitions for R&D, offers my somewhat gloomy perspective on the implications of the statement for science and innovation.

This government has always placed a strong rhetorical emphasis on the centrality of science and innovation in its plans for the nation, though with three different Prime Ministers, there’ve been some changes in emphasis.

This continues in the Autumn Statement: a whole section is devoted to “Supporting the UK’s scientists and innovators”, building on the March 2023 publication of a “UK Science and Technology Framework”, which recommitted to increasing total public spending on research to £20 billion in FY 2024/25. But before going into detail on the new science-related announcements in the Autumn Statement, let’s step back to look at the wider economic context in which innovation strategy is being made.

There are two giant clouds in the economic backdrop to the Autumn Statement. One is inflation; the other is economic growth – or, to be more precise, the lack of it.

Inflation, in some senses, is good for governments. It allows them to raise taxes without the need for embarrassing announcements, as people’s cost-of-living wage rises take them into higher tax brackets. And by simply failing to raise budgets in line with inflation, public spending cuts can be imposed by default. But if it’s good for governments, it’s bad for politicians, because people notice rising prices, and they don’t like it. And the real effects of stealth public spending cuts do, nonetheless, materialise.

The effect of the inflation we’ve seen since 2021 is a rise in price levels of around 20%; while the inflation rate peak has surely passed, prices will continue to rise. We can already see the effect on the science budget. Back in 2021, the Comprehensive Spending Review announced a significant increase in the overall government research budget, from £15 billion to £20 billion in 24/25. By next year, though, the effect of inflation will have been to erode that increase in real terms, from £5 billion to less than £2 billion in 2021 money. The effect on Core Research is even more dramatic; in effect inflation will have almost totally wiped out the increase promised in 2021.
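The deflation arithmetic behind those figures can be checked directly. A simplified sketch, taking the roughly 20% cumulative price-level rise quoted above as a single deflator:

```python
# Deflating the promised science budget back into 2021 money (a rough sketch).
baseline_2021 = 15.0     # £bn: total government research budget, 2021
promised_2425 = 20.0     # £bn: cash figure promised for FY 2024/25
price_rise = 0.20        # ~20% cumulative price-level rise since 2021

real_2425 = promised_2425 / (1 + price_rise)      # 24/25 budget in 2021 money
nominal_increase = promised_2425 - baseline_2021  # £5bn on paper
real_increase = real_2425 - baseline_2021         # under £2bn in real terms

print(f"24/25 budget in 2021 money: £{real_2425:.1f}bn")
print(f"nominal increase £{nominal_increase:.0f}bn -> real increase £{real_increase:.1f}bn")
```

The promised £5 billion cash increase deflates to roughly £1.7 billion in 2021 money – the erosion described above.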

Our other problem is persistent slow economic growth, as I discussed here. The underlying cause of this is the dramatic decrease in productivity growth since the financial crisis of 2008. The consequence is the prospect of two full decades without any real growth in wages, and, for the government, the need to simultaneously increase the tax burden and squeeze public services in an attempt to stabilise public debt.

The detailed causes of the productivity slowdown are much debated, but the root of it seems to be the UK’s persistent lack of investment, both public and private (see The Productivity Agenda for a broad discussion). Relatively low levels of R&D are part of this. The most significant policy change in the Autumn Statement does recognise this – it is a tax break allowing companies to set the full cost of new plant and machinery against corporation tax. On the government side, though, the plans are essentially for overall flat capital spending – i.e., taking into account inflation, a real terms cut. Government R&D spending falls in this overall envelope, so is likely to be under pressure.

Instead, the government is putting their hopes on the private sector stepping up to fill the gap, with a continuing emphasis on measures such as R&D tax credits to incentivise private sector R&D, and reforms to the pension system – including the “Long-term Investment for Technology and Science (LIFTS)” initiative – to bring more private money into the research system. The ambition for the UK to be a “Science Superpower” remains, but the government would prefer not to have to pay for it.

One significant set of announcements – on the “Advanced Manufacturing Plan” – marks the next phase in the Conservatives’ off-again, on-again relationship with industrial strategy. Commitments to support advanced manufacturing sectors such as aerospace, automobiles and pharmaceuticals, as well as the “Made Smarter” programme for innovation diffusion, are very welcome. The sums themselves perhaps shouldn’t be taken too seriously; the current government can’t bind its successor, whatever its colour, and in any case this money will have to be found within the overall spending envelope produced by the next Comprehensive Spending Review. But it is also welcome that, after the break-up of the Department for Business, Energy and Industrial Strategy, the successor Department for Business and Trade still maintains an interest in research and innovation in support of mainstream business sectors, rather than assuming that this is all now to be left to its sister Department for Science, Innovation and Technology.

For all the efforts to create a tax-cutting headline, the economic backdrop for this Autumn Statement is truly grim. There is no rosy scenario for the research community to benefit from; the question we face instead is how to fulfil the promises we have been making that R&D can indeed lead to productivity growth and economic benefit.

Productivity and artificial intelligence

To scientists, machine learning is a relatively old technology. The last decade has seen considerable progress, both as a result of new techniques – backpropagation, deep learning, and the transformer algorithm – and a massive investment of private sector resources, especially computing power. The result has been the striking and hugely publicised success of large language models.

But this rapid progress poses a paradox – for all the technical advances over the last decade, the impact on productivity growth has been undetectable. The productivity stagnation that has been such a feature of the last decade and a half continues, with all the deleterious effects that produces in flat-lining living standards and challenging public finances. The situation is reminiscent of the economist Robert Solow’s famous 1987 remark: “You can see the computer age everywhere but in the productivity statistics.”

There are two possible resolutions of this new Solow paradox – one optimistic, one pessimistic. The pessimist’s view is that, in terms of innovation, the low-hanging fruit has already been taken. In this perspective – most famously stated by Robert Gordon – today’s innovations are actually less economically significant than innovations of previous eras. Compared to electricity, Fordist manufacturing systems, mass personal mobility, antibiotics, and telecoms, to give just a few examples, even artificial intelligence is only of second order significance.

To add further to the pessimism, there is a growing sense that the process of innovation itself is suffering from diminishing returns – in the words of a famous recent paper: “Are ideas getting harder to find?”.

The optimistic view, by contrast, is that the productivity gains will come, but they will take time. History tells us that economies need time to adapt to new general purpose technologies – infrastructures and business models need to be adapted, and the skills to use them need to be spread through the working population. This was the experience with the introduction of electricity to industrial processes – factories had been configured around the need to transmit mechanical power from central steam engines through elaborate systems of belts and pulleys to the individual machines, so it took time to introduce systems where each machine had its own electric motor, and the period of adaptation might even involve a temporary reduction in productivity. Hence one might expect productivity, following the introduction of a new general purpose technology, to trace out a J-shaped curve.

Whether one is an optimist or a pessimist, there are a number of common research questions that the rise of artificial intelligence raises:

  • Are we measuring productivity right? How do we measure value in a world of fast moving technologies?
  • How do firms of different sizes adapt to new technologies like AI?
  • How important – and how rate-limiting – is the development of new business models in reaping the benefits of AI?
  • How do we drive productivity improvements in the public sector?
  • What will be the role of AI in health and social care?
  • How do national economies make system-wide transitions? When economies need to make simultaneous transitions – for example net zero and digitalisation – how do they interact?
  • What institutions are needed to support the faster and wider diffusion of new technologies like AI, & the development of the skills needed to implement them?
  • Given the UK’s economic imbalances, how can regional innovation systems be developed to increase absorptive capacity for new technologies like AI?

A finer-grained analysis of the origins of our productivity slowdown actually deepens the new Solow paradox. It turns out that the productivity slowdown has been most marked in the most tech-intensive sectors. In the UK, the most careful decomposition similarly finds that it’s the sectors normally thought of as most tech intensive that have contributed to the slowdown – transport equipment (i.e., automobiles and aerospace), pharmaceuticals, computer software and telecoms.

It’s worth looking in more detail at the case of pharmaceuticals to see how the promise of AI might play out. The decline in productivity of the pharmaceutical industry follows several decades in which, globally, the productivity of R&D – expressed as the number of new drugs brought to market per $billion of R&D – has been falling exponentially.

There’s no clearer signal of the promise of AI in the life sciences than the effective solution of one of the most important fundamental problems in biology – the protein folding problem – by DeepMind’s program AlphaFold. Many proteins fold into a unique three-dimensional structure, whose precise details determine the protein’s function – for example, in catalysing chemical reactions. This three-dimensional structure is determined by the (one-dimensional) sequence of amino acids along the protein chain. Given the sequence, can one predict the structure? The problem had resisted theoretical solution for decades, but AlphaFold, using deep learning to establish the correlations between sequences and many experimentally determined structures, can now predict unknown structures from sequence data with great accuracy and reliability.

Given this success in an important problem from biology, it’s natural to ask whether AI can be used to speed up the process of developing new drugs – and not surprising that this has prompted a rush of money from venture capitalists. One of the most high-profile start-ups in the UK pursuing this is BenevolentAI, floated on the Amsterdam Euronext market in 2021 with a €1.5 billion valuation.

Earlier this year, it was reported that BenevolentAI was laying off 180 staff after one of its drug candidates failed in phase 2 clinical trials. Its share price has plunged, and its market cap now stands at €90 million. I’ve no reason to think that BenevolentAI is anything but a well run company employing many excellent scientists, and I hope it recovers from these setbacks. But what lessons can be learnt from this disappointment? Given that AlphaFold was so successful, why has it been harder than expected to use AI to boost R&D productivity in the pharma industry?

Two factors made the success of AlphaFold possible. Firstly, the problem it was trying to solve was very well defined – given a certain linear sequence of amino acids, what is the three dimensional structure of the folded protein? Secondly, it had a huge corpus of well-curated public domain data to work on, in the form of experimentally determined protein structures, generated through decades of work in academia using x-ray diffraction and other techniques.

What’s been the problem in pharma? AI has been valuable in generating new drug candidates – for example, by identifying molecules that will fit into particular parts of a target protein molecule. But, according to pharma analyst Jack Scannell [1], it isn’t identifying candidate molecules that is the rate limiting step in drug development. Instead, the problem is the lack of screening techniques and disease models that have good predictive power.

The lesson here, then, is that AI is very good at solving the problems it is well adapted for – well-posed problems, where there exist big, well-curated datasets that span the problem space. Its contribution to overall productivity growth, though, will depend on whether those AI-susceptible parts of the overall problem are in fact the rate-limiting steps.

So how is the situation changed by the massive impact of large language models? This new technology – “generative pre-trained transformers” – consists of text prediction models that establish statistical relationships between words through a massively multi-parameter regression over a very large corpus of text [3]. This has, in effect, automated the production of plausible, though derivative and not wholly reliable, prose.

Naturally, sectors for which this is the stock-in-trade feel threatened by this development. What’s absolutely clear is that this technology has essentially solved the problem of machine translation; it also raises some fascinating fundamental issues about the deep structure of language.

What areas of economic life will be most affected by large language models? It’s already clear that these tools can significantly speed up the writing of computer code. Any sector in which generating boilerplate prose is the stock-in-trade – marketing, routine legal services, management consultancy – is likely to be affected. Similarly, the assimilation of large documents will be assisted by the ability of LLMs to provide synopses of complex texts.

What does the future hold? There is a very interesting discussion to be had, at the intersection of technology, biology and eschatology, about the prospects for “artificial general intelligence”, but I’m not going to take that on here, so I will focus on the near term.

We can expect further improvements in large language models. There will undoubtedly be improvements in efficiency as techniques are refined and the fundamental understanding of how they work improves. We’ll see more specialised training sets that might improve the (currently somewhat shaky) reliability of the outputs.

There is one issue that might prove limiting. The rapid improvement we’ve seen in the performance of large language models has been driven by exponential increases in the amount of computer resource used to train the models, with empirical scaling laws emerging to allow extrapolations. The cost of training these models is now measured in the hundreds of millions of dollars, with the associated energy consumption starting to make a significant contribution to global carbon emissions. So it’s important to understand the extent to which the cost of computer resources will be a limiting factor on the further development of this technology.
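To illustrate what an empirical scaling law used for extrapolation looks like, here is a minimal sketch: fit a power law, loss ≈ a·C^(−b), to (compute, loss) pairs on log-log axes, then extrapolate an order of magnitude further in compute. All the data points below are invented for illustration – they are not taken from any published scaling study.

```python
import math

# Fit a power law, loss ~ a * C**(-b), to (compute, loss) pairs on log-log axes,
# then extrapolate one order of magnitude further in compute.
compute = [1e20, 1e21, 1e22, 1e23]   # training FLOP (synthetic points)
loss = [3.2, 2.7, 2.3, 1.95]         # held-out loss (synthetic points)

xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Ordinary least squares on the logs: slope = -b, intercept = log(a).
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
b = -slope
a = math.exp(ybar + b * xbar)

pred = a * 1e24 ** (-b)              # extrapolated loss at 10x more compute
print(f"fitted exponent b = {b:.3f}; extrapolated loss at 1e24 FLOP = {pred:.2f}")
```

The point of the sketch is the economics, not the numbers: each further decrement in loss along such a curve costs an order of magnitude more compute, which is why training costs have grown so fast.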

As I’ve discussed before, the exponential increases in computer power given to us by Moore’s law, and the corresponding decreases in cost, began to slow in the mid-2000s. A recent comprehensive study of the cost of computing by Diane Coyle and Lucy Hampton puts this in context [2]. This is summarised in the figure below:

The cost of computing with time. The solid lines represent best fits to a very extensive data set collected by Diane Coyle and Lucy Hampton; the figure is taken from their paper [2]; the annotations are my own.

The highly specialised integrated circuits that are used in huge numbers to train LLMs – such as the H100 graphics processing units designed by NVIDIA and manufactured by TSMC that are the mainstay of the AI industry – are in a regime where performance improvements come less from the increasing transistor densities that gave us the golden age of Moore’s law, and more from incremental improvements in task-specific architecture design, together with simply multiplying the number of units.

For more than two millennia, human cultures in both east and west have used capabilities in language as a signal for wider abilities. So it’s not surprising that large language models have seized the imagination. But it’s important not to mistake the map for the territory.

Language and text are hugely important for how we organise and collaborate to collectively achieve common goals, and for the way we preserve, transmit and build on the sum of human knowledge and culture. So we shouldn’t underestimate the power of tools which facilitate that. But equally, many of the constraints we face require direct engagement with the physical world – whether that is through the need to get the better understanding of biology that will allow us to develop new medicines more effectively, or the ability to generate abundant zero carbon energy. This is where those other areas of machine learning – pattern recognition, finding relationships within large data sets – may have a bigger contribution.

Fluency with the written word is an important skill in itself, so the improvements in productivity that will come from the new technology of large language models will arise in places where speed in generating and assimilating prose are the rate limiting step in the process of producing economic value. For machine learning and artificial intelligence more widely, the rate at which productivity growth will be boosted will depend, not just on developments in the technology itself, but on the rate at which other technologies and other business processes are adapted to take advantage of AI.

I don’t think we can expect large language models, or AI in general, to be a magic bullet to instantly solve our productivity malaise. It’s a powerful new technology, but as for all new technologies, we have to find the places in our economic system where they can add the most value, and the system itself will take time to adapt, to take advantage of the possibilities the new technologies offer.

These notes are based on an informal talk I gave on behalf of the Productivity Institute. It benefitted a lot from discussions with Bart van Ark. The opinions, though, are entirely my own and I wouldn’t necessarily expect him to agree with me.

[1] J.W. Scannell, “Eroom’s Law and the decline in the productivity of biopharmaceutical R&D”, in Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research.

[2] Diane Coyle & Lucy Hampton, Twenty-first century progress in computing.

[3] For a semi-technical account of how large language models work, I found this piece by Stephen Wolfram very helpful: What is ChatGPT doing … and why does it work?

Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try and make the small, successful, cities, bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research has resulted in an exemplary knowledge-based economy, where that investment in public R&D attracts in private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus around the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added (GVA) per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy. In the UK, by contrast, big second-tier cities like Manchester, Birmingham, Leeds and Glasgow underperform economically and in effect drag the economy down.

Let’s do the thought experiment where we imagine Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since the GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how successful economically they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by just over 4.2%, to a bit more than £26,000 – still below the UK average.
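Both arms of the thought experiment reduce to back-of-the-envelope arithmetic, using the population and GVA-per-head figures quoted above:

```python
# Arm 1: double Cambridge's population with people drawn from Greater Manchester.
camb_pop, camb_gva_pp = 126_000, 49_000      # Cambridge: population, GVA/person (£, 2018)
gm_pop, gm_gva_pp = 2_800_000, 25_000        # Greater Manchester

movers = camb_pop                            # doubling Cambridge's population
uplift = movers * (camb_gva_pp - gm_gva_pp)  # each mover adds the GVA-per-head gap
print(f"doubling Cambridge adds about £{uplift / 1e9:.1f}bn to national GVA")

# Arm 2: the Greater Manchester productivity rise that delivers the same uplift.
gm_gva_total = gm_pop * gm_gva_pp            # ~£70bn
needed_rise = uplift / gm_gva_total
print(f"equivalent GM productivity rise: {needed_rise:.1%}, "
      f"to £{gm_gva_pp * (1 + needed_rise):,.0f} per person")
```

The assumption doing the work is that each mover’s contribution jumps from the GM average to the Cambridge average; relaxing it (for instance, if newcomers are less productive than incumbents) only shrinks the £3 billion figure further.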

That’s the importance of trying to raise the productivity of big cities – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – better public transport, more R&D and support for innovative businesses, the skills that innovative businesses need, and action on poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people they need to work for them somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here is taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The data for local authority areas is on a workplace basis, but populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At another limit, one could ask what would happen if you doubled the population of the whole county of Cambridgeshire (650,000). As GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result – it would increase GVA by £3.15 bn, the same as a 4.2% increase in Greater Manchester’s productivity.

Of course, this poses another question – why the prosperity of Cambridge city doesn’t spill over very far into the rest of the county. Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

The UK’s crisis of economic growth

Everyone now agrees that the UK has a serious problem of economic growth – or lack of it – even if opinions differ about its causes, and what we should do about it. Here I’d like to set out the scale of the problem with plots of the key data.

My first plot shows real GDP since 1955. The break in the curve at the global financial crisis around 2007 is obvious. Before 2007 there were booms and busts – but the whole curve is well fit by a trend line representing 2.4% a year real growth. But after the 2008 recession, there was no return to the trend line. Growth was further interrupted by the covid pandemic, and the recovery from the pandemic has been slow. The UK’s GDP is now about 18% lower than it would have been if the economy had returned to its pre-recession trend line.


UK real GDP. Chained volume measure, base year 2019. ONS: 30 June 2023 release.

Total GDP is of particular interest to HM Treasury, as it is the overall size of the economy that determines the sustainability of the national debt. But you can grow an economy by increasing the size of the population, and, from the point of view of the sustainability of public services and a wider sense of prosperity, GDP per capita is a better measure.

My second plot shows real GDP per capita. GDP per person has risen less fast than total GDP, both before and after the global financial crisis, reflecting the fact that the UK’s population has been growing. Trend growth before the break was 2.1% per annum; once again, contrary to all previous experience in the post-war period, per capita GDP growth has never recovered to the pre-crisis trend line. The gap with the previous trend, 25%, or £10,900 per person, is perhaps the best measure of the UK’s lost prosperity.


UK real GDP per capita. Chained volume measure, base year 2019. ONS: 12 May 2023 release.

The most fundamental measure of the productive capacity of the economy is, perhaps, labour productivity, defined as GDP per hour worked. One can make GDP per capita grow by people working more hours, or by having more people enter the labour market. In the late 2010s, this was a significant factor in the growth of GDP per capita, but since the pandemic this effect has gone into reverse, with more people leaving the labour market, often due to long-term ill-health.

My third plot shows UK labour productivity. This shows the fundamental and obvious break in productivity performance that, in my view, underlies pretty much everything that’s wrong with the UK’s economy – and indeed its politics. As I discussed in more detail in my previous post, “When did the UK’s productivity slowdown begin?”, I increasingly suspect that this break predates the financial crisis – and indeed that the crisis is probably better thought of as an effect, rather than a cause, of a more fundamental downward shift in the UK’s capacity to generate economic growth.


UK labour productivity, whole economy. Chained volume measure, index (2019=100). ONS: 7 July 2023 release.

Talk of GDP growth and labour productivity may seem remote to many voters, but this economic stagnation has direct effects, not just on the affordability of public services, but on people’s wages. My final plot shows average weekly earnings, corrected for inflation. The picture is dismal – there has essentially been no rise in real wages for more than a decade. This, at root, is why the UK’s lack of economic growth is only going to grow in political salience.


UK Average weekly earnings, 2015 £s, corrected for inflation with CPI. ONS: 11 July 2023 release.

I’ve written a lot about the causes of the productivity slowdown and possible policy options to address it, reflecting my own perspectives on the importance of innovation and on redressing the UK’s regional economic imbalances. Here I just make two points.

On diagnosis, I think it’s really important to note the mid-2000s timing of the break in the productivity curve. Undoubtedly subsequent policy mistakes have made things worse, but I believe a fundamental analysis of the UK’s problems must recognise that the roots of the crisis go back a couple of decades.

On remedies, I think it should be obvious that if we carry on doing the same sorts of things in the same way, we can expect the same results. Token, sub-scale interventions will make no difference without a serious rethinking of the UK’s fundamental economic model.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
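As a sketch of this kind of fit – on synthetic data standing in for the ONS series, where the growth rates, break year and noise level below are illustrative assumptions rather than the actual analysis – the four-parameter model can be fitted with a standard nonlinear least-squares routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_regime_growth(t, level, g1, g2, t_break):
    # Two periods of exponential growth, constrained to be continuous
    # at the break: 'level' is the index value at t_break; g1 and g2
    # are the annual growth rates before and after it.
    return level * np.exp(np.where(t < t_break, g1, g2) * (t - t_break))

# Synthetic quarterly series standing in for the ONS data:
# ~2.3% annual growth up to 2005, ~0.5% after, plus a little noise.
rng = np.random.default_rng(0)
t = np.arange(1971.0, 2023.0, 0.25)
y = two_regime_growth(t, 100.0, 0.023, 0.005, 2005.0)
y = y * (1.0 + 0.005 * rng.standard_normal(t.size))

p0 = [100.0, 0.02, 0.01, 2000.0]  # rough starting guesses
(level, g1, g2, t_break), _ = curve_fit(two_regime_growth, t, y, p0=p0)
print(round(t_break, 1))  # break year recovered, close to 2005
```

The model is continuous in all four parameters (both branches pass through `level` at `t_break`), which is what lets a gradient-based least-squares solver locate the break directly.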


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals to the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits constraining the year of the break point, and calculated at each point the normalised chi-squared (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.
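This scan can be sketched as follows: once the break year is fixed, fitting log-productivity as two linear growth regimes joined at the break is an ordinary linear least-squares problem with three parameters, so the normalised chi-squared can be profiled over candidate break years. The data here are again synthetic, with an assumed break at 2005:

```python
import numpy as np

def profile_chi2(t, y, t_break):
    # With the break year fixed, log(y) is piecewise linear and
    # continuous at the break, so fit intercept + two growth rates
    # by linear least squares and return the normalised chi-squared.
    X = np.column_stack([
        np.ones_like(t),
        np.minimum(t - t_break, 0.0),   # pre-break growth term
        np.maximum(t - t_break, 0.0),   # post-break growth term
    ])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    resid = np.log(y) - X @ coef
    return np.mean(resid**2)

# Synthetic quarterly series with a known break at 2005
t = np.arange(1971.0, 2023.0, 0.25)
y = 100.0 * np.exp(np.where(t < 2005.0, 0.023, 0.005) * (t - 2005.0))

years = np.arange(1995.0, 2015.0, 0.25)
chi2 = [profile_chi2(t, y, tb) for tb in years]
best = years[int(np.argmin(chi2))]
print(best)  # minimum of the chi-squared profile: 2005.0
```

On noisy data the resulting chi-squared curve varies smoothly around its minimum, which is the behaviour the figure below shows for the real series.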


Normalised chi-squared – i.e. sum of the squares of the differences between productivity data and the two exponential model, for fits where the time of break is constrained.

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.

Can we quantify this further and attach a probability distribution to the year of break? I don’t think we can using this approach – we have no reason to suppose that the deviations between model and data are drawn from a Gaussian, which is the assumption underlying traditional approaches to ascribing confidence limits to fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those for further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000’s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.

As times change, the UK’s R&D landscape needs to change too

I took part in a panel discussion last Thursday at the Royal Society, about the UK’s R&D landscape. The other panelists were Anna Dickinson from the think tank Onward, and Ben Johnson, Policy Advisor at the Department of Science, Innovation and Technology, and our chair was Athene Donald. This is a much expanded and tidied version of my opening remarks.

What is the optimum shape of the research and development landscape for the UK? The interesting and important questions here are:

  • What kind of R&D is being done?
  • In what kind of institution is R&D being done?
  • What kind of people do R&D?
  • Who sets the priorities?
  • Who pays for it?

I’m a physicist, but I want to start with lessons from history and geography.

If there’s one lesson we should learn from history, it’s that the way things are now isn’t the way they always have been. And we should be curious about how different countries arrange their R&D landscapes – not just in the Anglophone countries and our European partners and neighbours that we are so familiar with, but in the East Asian countries that have been so economically successful recently.

The particular form that a nation’s R&D landscape takes arises from a particular set of political and economic circumstances, influenced by the outcome of ideological arguments that take place both within the science community and in wider society.

I’ve just read Iwan Rhys Morus’s fascinating and engaging book on 19th century science and technology: “How the Victorians took us to the Moon”. It’s fitting that the book begins with a discussion of just such an ideological debate – about the future orientation of the Royal Society after the death of Sir Joseph Banks in 1820, at the end of his autocratic – and aristocratic – 41-year rule over the Society. The R&D landscape that emerged from these struggles was the one appropriate for the United Kingdom in the Victorian era – a nation going through an industrial revolution, and acquiring a world empire. That landscape was dominated by men of science (and they were men), who believed, above all, in the idea of progress. They valued self-discipline, self-confidence, precision and systematic thinking, while sharing assumptions about gender, class and race that would no longer be acceptable in today’s world.

Morus argues that many of the attitudes, assumptions and institutions of the science community that led to the great technological advances of the 20th century were laid down in the Victorian period. As someone who received their training in one of those great Victorian institutions – Cambridge’s Cavendish Laboratory – that rings true to me. I vividly remember as a graduate student that the great physicist Sir Sam Edwards had a habit of dismissing some rival theorist with the words “it was all done by Lord Rayleigh”. Lots of it probably was.

But also, learning how to do science in the mid-1980’s, I was just at the tail end of another era – what David Edgerton calls the Warfare State. The UK was a nation in which science had been subservient to the defence needs of two world wars, and a Cold War in which technology was the front line. The state ran a huge defence research establishment, and an associated nuclear complex where the lines between civil nuclear power and the nuclear weapons programme were blurred. This was a corporatist world, in which the boundaries between big, national companies like GEC and ICI and the state were themselves not clear. And there was a lot of R&D being done – in 1980, the UK was one of the most R&D intensive countries in the world.

We live in a very different world now. R&D in the private sector still dominates, but now pretty much half of it is done in the labs of overseas owned multinationals. In a world in which R&D is truly globalised, it doesn’t make a lot of sense to talk about UK plc. There’s much more emphasis on the role of spin-outs and start-ups – venture capital supported companies based on protected intellectual property. This too is globalised – we agonise about how few of these companies, even when they are successful, stay to scale up in the UK rather than moving to Germany or the USA. The big corporate laboratories of the past, where use-inspired basic research co-existed with more applied work, are a shadow of their former selves or gone entirely, eroded by a new focus on shareholder value.

Meanwhile, we have seen UK governments systematically withdraw support from applied research, as Jon Agar’s work has documented. After a couple of decades in which university research had been squeezed, the 2000’s saw a significant increase in support through the research councils, but this came at the cost of continual erosion of public sector research establishments. This has left the research councils in a much more dominant position in the government funding landscape – the fraction of government R&D funding allocated through research councils has increased from about 12% in the mid-1980’s to around 30% now. But the biggest rise in government support for R&D has come through the non-specific subsidy for the private sector – the R&D tax credit – whose cost rose from just over £1 billion in 2010 to more than £7 billion in 2019.

These dramatic changes in the R&D landscape that have unfolded between the 1980s and now should be understood in the context of the wider changes in the UK’s political economy over that period, often characterised as the dominance of neoliberalism and globalisation. There has been an insistence on the primacy of market mechanisms, and the full integration of the UK in a global free-trading environment, together with a rejection of any idea of state planning or industrial strategy. The shape of the UK economy changed profoundly, with a dramatic shrinking of the manufacturing sector, the exacerbation of regional economic imbalances, and a persistent trade deficit with the rest of the world. The rise and fall of North Sea Oil and the development of a bubble in financial services have contributed to these trends.

The world looks very different now. The pandemic taught us that global supply chains can be very fragile in a crisis, while the Ukraine war reminded us that state security still, ultimately, depends on high technology and productive capacity. The slower crisis of climate change continues – we face a wrenching economic transition to move our energy economy to a zero carbon basis, while the already emerging effects of climate disruption will be challenging. In the UK, we have a failing economy, where productivity growth has flat-lined since 2008; the consequences are that wages have stagnated to a degree unprecedented in living memory and public services have deteriorated to politically unacceptable levels.

In place of globalisation, we see a retreat to trading blocs. Industrial strategy has returned to the USA at scale – $50 bn from the CHIPS Act to rebuild its semiconductor industry, and $370 bn for a green transition. The EU is responding in kind. In East Asia and China, of course, industrial strategy never went away.

So the question we face is whether our R&D landscape is the right one for the times we live in now. I don’t think so. Of course, our values are different from those of both the Victorians and the mid-20th century technocrats, and our circumstances are different too. Many of the assumptions of the post-1980’s political settlement are now in question. So how must the landscape evolve?

The new R&D landscape needs to be more focused on the pressing problems we face: the net zero transition, the productivity slowdown, poor health outcomes, the security of the state. Here in the Royal Society, I don’t need to make the case for the importance of basic science, exploratory research, and the unfettered inquiries of our most creative scientists. But in addition, we need more applied R&D, and it needs to be more geographically dispersed and more inclusive. It has to build on the existing strengths of the country – but by itself that is not enough, and we will have to rebuild some of the innovation and manufacturing capacity that we have lost. And I think this manufacturing and innovation capacity is important for basic science too, because it is this technological capacity that allows us to implement and benefit from the basic science. For example, one can be excited by the opportunities of quantum computing, but making it work will probably rely on manufacturing technologies already developed for semiconductors.

The national R&D landscape we have has evolved as the material conditions and ideological assumptions of the nation have changed, and as those conditions and assumptions change, so must the national R&D landscape change in response.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £700m on an exascale computer. It should specify that the processor design comes from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits the gains in computing power available from hardware improvements, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. But it’s worth comparing this with the precedent of the post-GFC bank nationalisations, which were at a similar scale.

3. The UK should not… (& it’s almost certainly not possible in any case)

The UK should not attempt to create a UK based manufacturing capability in leading edge logic chips. This would need to be done by one of the 3 international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a multi-billion dollar subsidy race.

Moreover, decades of neglect of semiconductor manufacturing probably means the UK doesn’t, in any case, have the skills to operate a leading edge fab.

4. The UK should not…

The UK should not attempt to create UK based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has now surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy, applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?