Science and Innovation in the 2023 Autumn Statement

On 22 November, the Government published its Autumn Statement. This piece, published in Research Professional under the title Economic clouds cast gloom over the UK’s ambitions for R&D, offers my somewhat gloomy perspective on the implications of the statement for science and innovation.

This government has always placed a strong rhetorical emphasis on the centrality of science and innovation in its plans for the nation, though with three different Prime Ministers, there’ve been some changes in emphasis.

This continues in the Autumn Statement: a whole section is devoted to “Supporting the UK’s scientists and innovators”, building on the March 2023 publication of a “UK Science and Technology Framework”, which recommitted to increasing total public spending on research to £20 billion in FY 2024/25. But before going into detail on the new science-related announcements in the Autumn Statement, let’s step back to look at the wider economic context in which innovation strategy is being made.

There are two giant clouds in the economic backdrop to the Autumn Statement. One is inflation; the other is economic growth – or, to be more precise, the lack of it.

Inflation, in some senses, is good for governments. It allows them to raise taxes without the need for embarrassing announcements, as people’s cost-of-living wage rises take them into higher tax brackets. And by simply failing to raise budgets in line with inflation, public spending cuts can be imposed by default. But if inflation is good for governments, it’s bad for politicians, because people notice rising prices, and they don’t like it. And the real effects of stealth public spending cuts do, nonetheless, materialise.

The effect of the inflation we’ve seen since 2021 is a rise in price levels of around 20%; while the inflation rate has surely passed its peak, prices will continue to rise. We can already see the effect on the science budget. Back in 2021, the Comprehensive Spending Review announced a significant increase in the overall government research budget, from £15 billion to £20 billion in FY 2024/25. By next year, though, inflation will have eroded that increase in real terms, from £5 billion to less than £2 billion in 2021 money. The effect on core research funding is even more dramatic: inflation will have almost totally wiped out the increase promised in 2021.
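As a rough sketch of this real-terms arithmetic, here is the calculation in Python, using the round numbers quoted above (the exact answer depends on which price deflator you use):

```python
# Real-terms erosion of the research budget, using round numbers from the
# text: a roughly 20% rise in the price level between 2021 and 2024/25.
price_level_rise = 0.20

budget_2021 = 15.0   # £bn research budget at the 2021 Spending Review
budget_2425 = 20.0   # £bn promised for FY 2024/25, in cash terms

# Deflate the 2024/25 budget back into 2021 money
budget_2425_real = budget_2425 / (1 + price_level_rise)
real_increase = budget_2425_real - budget_2021

print(f"£{budget_2425_real:.1f} bn in 2021 money; "
      f"a real-terms increase of £{real_increase:.1f} bn, not £5 bn")
# -> £16.7 bn in 2021 money; a real-terms increase of £1.7 bn, not £5 bn
```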

Our other problem is persistent slow economic growth, as I discussed here. The underlying cause of this is the dramatic decrease in productivity growth since the financial crisis of 2008. The consequence is the prospect of two full decades without any real growth in wages, and, for the government, the need to simultaneously increase the tax burden and squeeze public services in an attempt to stabilise public debt.

The detailed causes of the productivity slowdown are much debated, but the root of it seems to be the UK’s persistent lack of investment, both public and private (see The Productivity Agenda for a broad discussion). Relatively low levels of R&D are part of this. The most significant policy change in the Autumn Statement does recognise this – the “full expensing” tax break, which allows companies to set the full cost of new plant and machinery against corporation tax, has been made permanent. On the government side, though, the plans are essentially for overall flat capital spending – i.e., taking into account inflation, a real-terms cut. Government R&D spending falls within this overall envelope, so is likely to be under pressure.

Instead, the government is putting its hopes on the private sector stepping up to fill the gap, with a continuing emphasis on measures such as R&D tax credits to incentivise private sector R&D, and reforms to the pension system – including the “Long-term Investment for Technology and Science (LIFTS)” initiative – to bring more private money into the research system. The ambition for the UK to be a “Science Superpower” remains, but the government would prefer not to have to pay for it.

One significant set of announcements – on the “Advanced Manufacturing Plan” – marks the next phase in the Conservatives’ off-again, on-again relationship with industrial strategy. Commitments to support advanced manufacturing sectors such as aerospace, automobiles and pharmaceuticals, as well as the “Made Smarter” programme for innovation diffusion, are very welcome. The sums themselves perhaps shouldn’t be taken too seriously; the current government can’t bind its successor, whatever its colour, and in any case this money will have to be found within the overall spending envelope produced by the next Comprehensive Spending Review. But it is very welcome that, after the break-up of the Department for Business, Energy and Industrial Strategy, the successor Department for Business and Trade still maintains an interest in research and innovation in support of mainstream business sectors, rather than assuming that is all now to be left to its sister Department for Science, Innovation and Technology.

For all the efforts to create a tax-cutting headline, the economic backdrop to this Autumn Statement is truly grim. There is no rosy scenario for the research community to benefit from; the question we face instead is how to fulfil the promises we have been making that R&D can indeed lead to productivity growth and economic benefit.

Productivity and artificial intelligence

To scientists, machine learning is a relatively old technology. The last decade has seen considerable progress, both as a result of new techniques – back-propagation and deep learning, and the transformer architecture – and massive investment of private sector resources, especially computing power. The result has been the striking and hugely publicised success of large language models.

But this rapid progress poses a paradox – for all the technical advances of the last decade, the impact on productivity growth has been undetectable. The productivity stagnation that has been such a feature of the last decade and a half continues, with all the deleterious effects that produces in flat-lining living standards and challenging public finances. The situation is reminiscent of a comment made in 1987 by the economist Robert Solow: “You can see the computer age everywhere but in the productivity statistics.”

There are two possible resolutions of this new Solow paradox – one optimistic, one pessimistic. The pessimist’s view is that, in terms of innovation, the low-hanging fruit has already been taken. In this perspective – most famously stated by Robert Gordon – today’s innovations are actually less economically significant than innovations of previous eras. Compared to electricity, Fordist manufacturing systems, mass personal mobility, antibiotics, and telecoms, to give just a few examples, even artificial intelligence is only of second order significance.

To add further to the pessimism, there is a growing sense that the process of innovation itself is suffering from diminishing returns – in the words of a famous recent paper: “Are ideas getting harder to find?”.

The optimistic view, by contrast, is that the productivity gains will come, but they will take time. History tells us that economies need time to adapt to new general purpose technologies – infrastructures and business models need to be adapted, and the skills to use them need to be spread through the working population. This was the experience with the introduction of electricity to industrial processes – factories had been configured around the need to transmit mechanical power from central steam engines through elaborate systems of belts and pulleys to the individual machines, so it took time to introduce systems where each machine had its own electric motor, and the period of adaptation might even involve a temporary reduction in productivity. Hence one might expect the productivity effect of a new general purpose technology to follow a J-shaped curve: an initial dip during the period of adaptation, followed by a delayed rise.

Whether one is an optimist or a pessimist, there are a number of common research questions that the rise of artificial intelligence raises:

  • Are we measuring productivity right? How do we measure value in a world of fast moving technologies?
  • How do firms of different sizes adapt to new technologies like AI?
  • How important – and how rate-limiting – is the development of new business models in reaping the benefits of AI?
  • How do we drive productivity improvements in the public sector?
  • What will be the role of AI in health and social care?
  • How do national economies make system-wide transitions? When economies need to make simultaneous transitions – for example net zero and digitalisation – how do they interact?
  • What institutions are needed to support the faster and wider diffusion of new technologies like AI, & the development of the skills needed to implement them?
  • Given the UK’s economic imbalances, how can regional innovation systems be developed to increase absorptive capacity for new technologies like AI?

A finer-grained analysis of the origins of our productivity slowdown actually deepens the new Solow paradox. It turns out that the productivity slowdown has been most marked in the most tech-intensive sectors. In the UK, the most careful decomposition similarly finds that it’s the sectors normally thought of as most tech-intensive that have contributed most to the slowdown – transport equipment (i.e., automobiles and aerospace), pharmaceuticals, computer software and telecoms.

It’s worth looking in more detail at the case of pharmaceuticals to see how the promise of AI might play out. The decline in productivity of the pharmaceutical industry follows several decades in which, globally, the productivity of R&D – expressed as the number of new drugs brought to market per $billion of R&D – has been falling exponentially, a trend that has been dubbed “Eroom’s law” [1].

There’s no clearer signal of the promise of AI in the life sciences than the effective solution of one of the most important fundamental problems in biology – the protein folding problem – by DeepMind’s AlphaFold program. Many proteins fold into a unique three-dimensional structure, whose precise details determine their function – for example in catalysing chemical reactions. This three-dimensional structure is determined by the (one-dimensional) sequence of different amino acids along the protein chain. Given the sequence, can one predict the structure? This problem had resisted theoretical solution for decades, but AlphaFold, using deep learning to establish the correlations between sequence and many experimentally determined structures, can now predict unknown structures from sequence data with great accuracy and reliability.

Given this success in an important problem from biology, it’s natural to ask whether AI can be used to speed up the process of developing new drugs – and not surprising that this has prompted a rush of money from venture capitalists. One of the highest profile start-ups in the UK pursuing this is BenevolentAI, floated on Euronext Amsterdam in 2022 at a €1.5 billion valuation.

Earlier this year, it was reported that BenevolentAI was laying off 180 staff after one of its drug candidates failed in phase 2 clinical trials. Its share price has plunged, and its market cap now stands at €90 million. I’ve no reason to think that BenevolentAI is anything but a well run company employing many excellent scientists, and I hope it recovers from these setbacks. But what lessons can be learnt from this disappointment? Given that AlphaFold was so successful, why has it been harder than expected to use AI to boost R&D productivity in the pharma industry?

Two factors made the success of AlphaFold possible. Firstly, the problem it was trying to solve was very well defined: given a certain linear sequence of amino acids, what is the three-dimensional structure of the folded protein? Secondly, it had a huge corpus of well-curated, public domain data to work on, in the form of experimentally determined protein structures, generated through decades of work in academia using x-ray diffraction and other techniques.

What’s been the problem in pharma? AI has been valuable in generating new drug candidates – for example, by identifying molecules that will fit into particular parts of a target protein molecule. But, according to pharma analyst Jack Scannell [1], it isn’t identifying candidate molecules that is the rate-limiting step in drug development. Instead, the problem is the lack of screening techniques and disease models with good predictive power.

The lesson here, then, is that AI is very good at solving the problems it is well adapted for – well-posed problems, where there exist big, well-curated datasets that span the problem space. Its contribution to overall productivity growth, though, will depend on whether those AI-susceptible parts of the overall problem are in fact the rate-limiting steps.

So how is the situation changed by the massive impact of large language models? This new technology – “generative pre-trained transformers” – consists of text prediction models based on statistical relationships between words, established by a massively multi-parameter regression over a very large corpus of text [3]. This has, in effect, automated the production of plausible, though derivative and not wholly reliable, prose.

Naturally, sectors for which this is the stock-in-trade feel threatened by this development. What’s absolutely clear is that this technology has essentially solved the problem of machine translation; it also raises some fascinating fundamental issues about the deep structure of language.

What areas of economic life will be most affected by large language models? It’s already clear that these tools can significantly speed up the writing of computer code. Any sector in which it is necessary to generate boilerplate prose – marketing, routine legal services, management consultancy – is likely to be affected. Similarly, the assimilation of large documents will be assisted by the capability of LLMs to provide synopses of complex texts.

What does the future hold? There is a very interesting discussion to be had, at the intersection of technology, biology and eschatology, about the prospects for “artificial general intelligence”, but I’m not going to take that on here, so I will focus on the near term.

We can expect further improvements in large language models. There will undoubtedly be improvements in efficiency as techniques are refined and the fundamental understanding of how these models work improves. We’ll see more specialised training sets, which might improve the (currently somewhat shaky) reliability of the outputs.

There is one issue that might prove limiting. The rapid improvement we’ve seen in the performance of large language models has been driven by exponential increases in the amount of computing resource used to train the models, with empirical scaling laws emerging to allow extrapolations. The cost of training these models is now measured in hundreds of millions of dollars, with the associated energy consumption starting to make a significant contribution to global carbon emissions. So it’s important to understand the extent to which the cost of computing resources will be a limiting factor on the further development of this technology.
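To make the idea of a scaling law concrete, here is a minimal sketch using the functional form and fitted constants reported by Hoffmann et al. (2022) (the “Chinchilla” paper); the constants and the compute approximation C ≈ 6ND are taken from that paper, and the numbers are purely illustrative:

```python
import numpy as np

# Empirical LLM scaling law (Hoffmann et al., 2022): predicted training loss
# as a function of model size N (parameters) and training data D (tokens).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    return E + A / N**alpha + B / D**beta

def compute_optimal(C, n_grid=2000):
    """For a compute budget C (FLOPs, approximated as C = 6*N*D), find the
    loss-minimising split between parameters and tokens by grid search."""
    N = np.logspace(8, 13, n_grid)   # candidate model sizes, 1e8..1e13 params
    D = C / (6 * N)                  # tokens implied by the budget
    L = loss(N, D)
    i = np.argmin(L)
    return N[i], D[i], L[i]

# Example: a 1e24 FLOP training run
N_opt, D_opt, L_opt = compute_optimal(1e24)
print(f"N ~ {N_opt:.1e} params, D ~ {D_opt:.1e} tokens, loss ~ {L_opt:.2f}")
```

Extrapolations of this kind are what lie behind projections of training costs into the hundreds of millions of dollars, though the fitted constants shift as techniques improve.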

As I’ve discussed before, the exponential increases in computer power given to us by Moore’s law, and the corresponding decreases in cost, began to slow in the mid-2000s. A recent comprehensive study of the cost of computing by Diane Coyle and Lucy Hampton puts this in context [2]. This is summarised in the figure below:

The cost of computing with time. The solid lines represent best fits to a very extensive data set collected by Diane Coyle and Lucy Hampton; the figure is taken from their paper [2]; the annotations are my own.

The highly specialised integrated circuits that are used in huge numbers to train LLMs – such as the H100 graphics processing units designed by Nvidia and manufactured by TSMC that are the mainstay of the AI industry – are in a regime where performance improvements come less from the increasing transistor densities that gave us the golden age of Moore’s law, and more from incremental improvements in task-specific architecture design, together with simply multiplying the number of units.

For more than two millennia, human cultures in both east and west have used capabilities in language as a signal for wider abilities. So it’s not surprising that large language models have seized the imagination. But it’s important not to mistake the map for the territory.

Language and text are hugely important for how we organise and collaborate to collectively achieve common goals, and for the way we preserve, transmit and build on the sum of human knowledge and culture. So we shouldn’t underestimate the power of tools which facilitate that. But equally, many of the constraints we face require direct engagement with the physical world – whether through the better understanding of biology that will allow us to develop new medicines more effectively, or through the ability to generate abundant zero carbon energy. This is where those other areas of machine learning – pattern recognition, finding relationships within large data sets – may make a bigger contribution.

Fluency with the written word is an important skill in itself, so the improvements in productivity that will come from the new technology of large language models will arise in places where speed in generating and assimilating prose is the rate-limiting step in the process of producing economic value. For machine learning and artificial intelligence more widely, the rate at which productivity growth will be boosted will depend not just on developments in the technology itself, but on the rate at which other technologies and other business processes are adapted to take advantage of AI.

I don’t think we can expect large language models, or AI in general, to be a magic bullet to instantly solve our productivity malaise. It’s a powerful new technology, but as with all new technologies, we have to find the places in our economic system where it can add the most value, and the system itself will take time to adapt, to take advantage of the possibilities the new technology offers.

These notes are based on an informal talk I gave on behalf of the Productivity Institute. It benefitted a lot from discussions with Bart van Ark. The opinions, though, are entirely my own and I wouldn’t necessarily expect him to agree with me.

[1] J.W. Scannell, Eroom’s Law and the decline in the productivity of biopharmaceutical R&D, in Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research.

[2] Diane Coyle & Lucy Hampton, Twenty-first century progress in computing.

[3] For a semi-technical account of how large language models work, I found this piece by Stephen Wolfram very helpful: What is ChatGPT doing … and why does it work?

Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try to make the small, successful cities bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research have resulted in an exemplary knowledge-based economy, where that investment in public R&D draws in private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus to the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high-rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy. In the UK, by contrast, big second-tier cities like Manchester, Birmingham, Leeds and Glasgow underperform economically and in effect drag the economy down.

Let’s do a thought experiment in which Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how successful economically they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by just over 4.2%, to a bit more than £26,000 – still below the UK average.
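For transparency, here is the back-of-the-envelope calculation in Python, using the rounded figures quoted above:

```python
# Back-of-the-envelope version of the thought experiment, using the
# approximate 2018 figures quoted in the text.
camb_pop, camb_gva = 126_000, 49_000   # Cambridge population, GVA/head (£)
gm_pop, gm_gva = 2_800_000, 25_000     # Greater Manchester
uk_total_gva = 1_900e9                 # ~£1,900 bn

# Option 1: double Cambridge with people from GM, each adding the
# Cambridge-GM difference in GVA per head.
uplift = camb_pop * (camb_gva - gm_gva)
print(f"Doubling Cambridge adds ~£{uplift/1e9:.1f} bn, "
      f"{100*uplift/uk_total_gva:.2f}% of UK GVA")

# Option 2: the rise in GM's GVA per head needed to match that uplift.
pct = 100 * uplift / (gm_pop * gm_gva)
print(f"Equivalent GM productivity rise: {pct:.1f}%, "
      f"to ~£{gm_gva * (1 + pct/100):,.0f} per head")
```

(Rounding explains why the percentage comes out at a little over 4%.)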

That’s the importance of trying to raise the productivity of big cities – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – for example, in better public transport, more R&D and support for innovative businesses, the skills that innovative businesses need, and action on poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people those businesses need somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here is taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The GVA data are for local authority areas on a workplace basis, but the populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At the other limit, one could ask what would happen if the population of the whole county of Cambridgeshire – 650,000 – were doubled. As GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result: it would increase GVA by £3.15 bn, roughly the same as the 4.2% increase in GM’s productivity discussed above.

Of course, this poses another question – why doesn’t the prosperity of Cambridge city spill over very far into the rest of the county? Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

The UK’s crisis of economic growth

Everyone now agrees that the UK has a serious problem of economic growth – or lack of it – even if opinions differ about its causes, and what we should do about it. Here I’d like to set out the scale of the problem with plots of the key data.

My first plot shows real GDP since 1955. The break in the curve at the global financial crisis around 2007 is obvious. Before 2007 there were booms and busts – but the whole curve is well fitted by a trend line representing 2.4% a year real growth. After the 2008 recession, though, there was no return to the trend line. Growth was further interrupted by the covid pandemic, and the recovery from the pandemic has been slow. The UK’s GDP is now about 18% lower than it would have been if the economy had returned to its pre-recession trend line.


UK real GDP. Chained volume measure, base year 2019. ONS: 30 June 2023 release.

Total GDP is of particular interest to HM Treasury, as it is the overall size of the economy that determines the sustainability of the national debt. But you can grow an economy by increasing the size of the population, and, from the point of view of the sustainability of public services and a wider sense of prosperity, GDP per capita is a better measure.

My second plot shows real GDP per capita. GDP per person has risen less fast than total GDP, both before and after the global financial crisis, reflecting the fact that the UK’s population has been growing. Trend growth before the break was 2.1% per annum; once again, contrary to all previous experience in the post-war period, per capita GDP growth has never recovered to the pre-crisis trend line. The gap with the previous trend – 25%, or £10,900 per person – is perhaps the best measure of the UK’s lost prosperity.


UK real GDP per capita. Chained volume measure, base year 2019. ONS: 12 May 2023 release.

The most fundamental measure of the productive capacity of the economy is, perhaps, labour productivity, defined as the GDP per hour worked. One can make GDP per capita grow by people working more hours, or by having more people enter the labour market. In the late 2010s, this was a significant factor in the growth of GDP per capita, but since the pandemic this effect has gone into reverse, with more people leaving the labour market, often due to long-term ill-health.

My third plot shows UK labour productivity. This shows the fundamental and obvious break in productivity performance that, in my view, underlies pretty much everything that’s wrong with the UK’s economy – and indeed its politics. As I discussed in more detail in my previous post, “When did the UK’s productivity slowdown begin?”, I increasingly suspect that this break predates the financial crisis – and indeed that the crisis is probably better thought of as an effect, rather than a cause, of a more fundamental downward shift in the UK’s capacity to generate economic growth.


UK labour productivity, whole economy. Chained volume measure, index (2019=100). ONS: 7 July 2023 release.

Talk of GDP growth and labour productivity may seem remote to many voters, but this economic stagnation has direct effects, not just on the affordability of public services, but on people’s wages. My final plot shows average weekly earnings, corrected for inflation. The picture is dismal – there has essentially been no rise in real wages for more than a decade. This, at root, is why the UK’s lack of economic growth is only going to grow in political salience.


UK Average weekly earnings, 2015 £s, corrected for inflation with CPI. ONS: 11 July 2023 release.

I’ve written a lot about the causes of the productivity slowdown and possible policy options to address it, reflecting my own perspectives on the importance of innovation and on redressing the UK’s regional economic imbalances. Here I just make two points.

On diagnosis, I think it’s really important to note the mid-2000s timing of the break in the productivity curve. Undoubtedly subsequent policy mistakes have made things worse, but I believe a fundamental analysis of the UK’s problems must recognise that the roots of the crisis go back a couple of decades.

On remedies, I think it should be obvious that if we carry on doing the same sorts of things in the same way, we can expect the same results. Token, sub-scale interventions will make no difference without a serious rethinking of the UK’s fundamental economic model.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
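For the technically curious, here is a minimal sketch of this kind of fit in Python; the starting values are illustrative, and loading the actual ONS series is left out:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, g1, g2, y_b, t_b):
    """Two exponential growth regimes, with rates g1 and g2 (per year),
    constrained to be continuous at the break time t_b, where the level
    is y_b."""
    return np.where(t < t_b,
                    y_b * np.exp(g1 * (t - t_b)),
                    y_b * np.exp(g2 * (t - t_b)))

# t: time in years (e.g. 1971.0, 1971.25, ...); y: ONS output per hour.
# t, y = load_ons_series()   # hypothetical loader for the quarterly data
# popt, pcov = curve_fit(two_exp, t, y, p0=[0.02, 0.005, 100.0, 2008.0])
# g1, g2, y_b, t_b = popt    # the best-fit break time comes out near 2004.9
```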


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals to the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits with the year of the break point constrained, and calculated at each point the normalised chi-squared (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.


Normalised chi-squared – i.e. sum of the squares of the differences between productivity data and the two exponential model, for fits where the time of break is constrained.
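The scan itself can be sketched as follows, re-using two_exp and the (t, y) arrays from the sketch above:

```python
import numpy as np
from scipy.optimize import curve_fit
# two_exp and the (t, y) arrays are as defined in the previous sketch.

def fixed_break_chi2(t, y, t_b):
    """Normalised chi-squared for a fit with the break time held at t_b."""
    f = lambda tt, g1, g2, y_b: two_exp(tt, g1, g2, y_b, t_b)
    popt, _ = curve_fit(f, t, y, p0=[0.02, 0.005, 100.0])
    resid = y - f(t, *popt)
    return np.sum(resid**2) / len(y)

# break_years = np.arange(1995.0, 2015.0, 0.25)
# chi2 = [fixed_break_chi2(t, y, tb) for tb in break_years]
# Plotting chi2 against break_years shows a smooth minimum near 2005.
```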

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.

Can we quantify this further and attach a probability distribution to the year of break? I don’t think so using this approach – we have no reason to suppose that the deviations between model and fit are drawn from a Gaussian, which would be the assumption underlying traditional approaches to ascribing confidence limits to the fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those for further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.

As times change, the UK’s R&D landscape needs to change too

I took part in a panel discussion last Thursday at the Royal Society, about the UK’s R&D landscape. The other panelists were Anna Dickinson from the think tank Onward, and Ben Johnson, Policy Advisor at the Department of Science, Innovation and Technology, and our chair was Athene Donald. This is a much expanded and tidied version of my opening remarks.

What is the optimum shape of the research and development landscape for the UK? The interesting and important questions here are:

  • What kind of R&D is being done?
  • In what kind of institution is R&D being done?
  • What kind of people do R&D?
  • Who sets the priorities?
  • Who pays for it?

I’m a physicist, but I want to start with lessons from history and geography.

If there’s one lesson we should learn from history, it’s that the way things are now isn’t the way they have always been. And we should be curious about how different countries arrange their R&D landscapes – not just the Anglophone countries and the European partners and neighbours we are so familiar with, but the East Asian countries that have been so economically successful recently.

The particular form that a nation’s R&D landscape takes arises from a set of political and economic circumstances, influenced by the outcome of ideological arguments that take place both within the science community and in wider society.

I’ve just read Iwan Rhys Morus’s fascinating and engaging book on 19th century science and technology, “How the Victorians took us to the Moon”. It’s fitting that the book begins with a discussion of just such an ideological debate – about the future orientation of the Royal Society after the death of Sir Joseph Banks in 1820, at the end of his autocratic – and aristocratic – 41-year rule over the Society. The R&D landscape that emerged from these struggles was the one appropriate for the United Kingdom in the Victorian era – a nation going through an industrial revolution, and acquiring a world empire. That landscape was dominated by men of science (and they were men), who believed, above all, in the idea of progress. They valued self-discipline, self-confidence, precision and systematic thinking, while sharing assumptions about gender, class and race that would no longer be acceptable in today’s world.

Morus argues that many of the attitudes, assumptions and institutions of the science community that led to the great technological advances of the 20th century were laid down in the Victorian period. As someone who received their training in one of those great Victorian institutions – Cambridge’s Cavendish Laboratory – that rings true to me. I vividly remember as a graduate student that the great physicist Sir Sam Edwards had a habit of dismissing some rival theorist with the words “it was all done by Lord Rayleigh”. Lots of it probably was.

But, learning how to do science in the mid-1980s, I was also at the tail end of another era – what David Edgerton calls the Warfare State. The UK was a nation in which science had been subservient to the defence needs of two world wars, and a Cold War in which technology was the front line. The state ran a huge defence research establishment, and an associated nuclear complex where the lines between civil nuclear power and the nuclear weapons programme were blurred. This was a corporatist world, in which the boundaries between big national companies like GEC and ICI and the state were themselves not clear. And there was a lot of R&D being done – in 1980, the UK was one of the most R&D intensive countries in the world.

We live in a very different world now. R&D in the private sector still dominates, but now pretty much half of it is done in the labs of overseas-owned multinationals. In a world in which R&D is truly globalised, it doesn’t make a lot of sense to talk about UK plc. There’s much more emphasis on the role of spin-outs and start-ups – venture-capital-supported companies based on protected intellectual property. This too is globalised – we agonise about how few of these companies, even when they are successful, stay to scale up in the UK rather than moving to Germany or the USA. The big corporate laboratories of the past, where use-inspired basic research co-existed with more applied work, are a shadow of their former selves or gone entirely, eroded by a new focus on shareholder value.

Meanwhile, we have seen UK governments systematically withdraw support from applied research, as Jon Agar’s work has documented. After a couple of decades in which university research had been squeezed, the 2000s saw a significant increase in support through the research councils, but this came at the cost of continual erosion of public sector research establishments. This has left the research councils in a much more dominant position in the government funding landscape – the fraction of government R&D funding allocated through research councils has increased from about 12% in the mid-1980s to around 30% now. But the biggest rise in government support for R&D has come through the non-specific subsidy for the private sector – the R&D tax credit – whose cost rose from just over £1 billion in 2010 to more than £7 billion in 2019.

These dramatic changes in the R&D landscape between the 1980s and now should be understood in the context of the wider changes in the UK’s political economy over that period, often characterised as the dominance of neoliberalism and globalisation. There has been an insistence on the primacy of market mechanisms, and the full integration of the UK into a global free-trading environment, together with a rejection of any idea of state planning or industrial strategy. The shape of the UK economy changed profoundly, with a dramatic shrinking of the manufacturing sector, the exacerbation of regional economic imbalances, and a persistent trade deficit with the rest of the world. The rise and fall of North Sea Oil and the development of a bubble in financial services have contributed to these trends.

The world looks very different now. The pandemic taught us that global supply chains can be very fragile in a crisis, while the Ukraine war reminded us that state security still, ultimately, depends on high technology and productive capacity. The slower crisis of climate change continues – we face a wrenching economic transition to move our energy economy to a zero carbon basis, while the already emerging effects of climate disruption will be challenging. In the UK, we have a failing economy, where productivity growth has flat-lined since 2008; the consequences are that wages have stagnated to a degree unprecedented in living memory and public services have deteriorated to politically unacceptable levels.

In place of globalisation, we see a retreat to trading blocs. Industrial strategy has returned to the USA at scale: $50 bn from the CHIPS Act to rebuild its semiconductor industry, and $370 bn from the Inflation Reduction Act for a green transition. The EU is responding. And in East Asia and China, of course, industrial strategy never went away.

So the question we face now is whether our R&D landscape is the right one for the times we live in. I don’t think so. Of course, our values are different from those of both the Victorians and the mid-20th century technocrats, and our circumstances are different too. Many of the assumptions of the post-1980s political settlement are now in question. So how must the landscape evolve?

The new R&D landscape needs to be more focused on the pressing problems we face: the net zero transition, the productivity slowdown, poor health outcomes, the security of the state. Here in the Royal Society, I don’t need to make the case for the importance of basic science, exploratory research, and the unfettered inquiries of our most creative scientists. But in addition, we need more applied R&D, and it needs to be more geographically dispersed and more inclusive. It has to build on the existing strengths of the country – but by itself that is not enough, and we will have to rebuild some of the innovation and manufacturing capacity that we have lost. And I think this manufacturing and innovation capacity is important for basic science too, because it’s this technological capacity that allows us to implement and benefit from the basic science. For example, one can be excited by the opportunities of quantum computing, but making it work will probably rely on manufacturing technologies already implemented for semiconductors.

The national R&D landscape we have now evolved as the material conditions and ideological assumptions of the nation changed; as those conditions and assumptions change again, so must the national R&D landscape change in response.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £900m on an exascale computer. It should specify that the processor design comes from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits supply of computing power from hardware improvements, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. It’s worth comparing this with the precedent of the post-GFC bank nationalisations, which were at a similar scale.

3. The UK should not… (& it’s almost certainly not possible in any case)

The UK should not attempt to create a UK-based manufacturing capability in leading-edge logic chips. This would need to be done by one of the three international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading-edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a subsidy race measured in billions of dollars.

Moreover, decades of neglect of semiconductor manufacturing probably mean the UK doesn’t, in any case, have the skills to operate a leading-edge fab.

4. The UK should not…

The UK should not attempt to create a UK-based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading-edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.
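As a sketch of how such a series can be constructed (the column names here are hypothetical stand-ins for the IMF World Economic Outlook fields):

```python
import pandas as pd

def real_gdp_ppp_per_capita(weo: pd.DataFrame, country: str) -> pd.Series:
    """Anchor GDP at PPP in the 2019 base year, extend it backwards and
    forwards with real GDP growth rates, then divide by population."""
    rows = weo[weo["country"] == country].set_index("year").sort_index()
    anchor = rows.loc[2019, "gdp_ppp_intl_dollars"]
    growth = rows["real_gdp_growth_pct"] / 100.0
    factor = (1 + growth).cumprod()
    factor = factor / factor.loc[2019]   # rebase so 2019 = 1
    return anchor * factor / rows["population"]
```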

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy, applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?

What should the UK do about semiconductors? Part 3: towards a UK Semiconductor Strategy

We are currently waiting for the UK government to publish its semiconductor strategy. As context for such a strategy, my previous two blogposts have summarised the global state of the industry:

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry

Here I consider what a realistic and useful UK semiconductor strategy might include.

To summarise the global context: the essential nations in advanced semiconductor manufacturing are Taiwan, Korea and the USA for making the chips themselves. In addition, Japan and the Netherlands are vital for crucial elements of the supply chain, particularly the equipment needed to make chips. China has been devoting significant resources to developing its own semiconductor industry – as a result, it is strong in all but the most advanced technologies for chip manufacture, but remains vulnerable to being cut off from crucial elements of the supply chain.

The technology of chip manufacture is approaching maturity; the very rapid rates of increase in computing power we saw in the 1980s and 1990s, associated with a combination of Moore’s law and Dennard scaling, have significantly slowed. At the technology frontier we are seeing diminishing returns from the ever larger investments in capital and R&D that are needed to maintain advances. Further improvements in computer performance are likely to put more premium on custom designs for chips optimised for specific applications.

The UK’s position in semiconductor manufacturing is marginal in a global perspective, and not a relative strength in the context of the overall UK economy. The UK is actually slightly stronger in the wider supply chain than in chip manufacture itself, but its most significant strength is not in manufacture but in design, with ARM having a globally significant position and newcomers like Graphcore showing promise.

The history of the global semiconductor industry is a history of major government interventions coupled with very large private sector R&D spending, the latter driven by dramatically increasing sales. The UK essentially opted out of the race in the 1980s, since when Korea and Taiwan have established globally leading positions, and China has become a fast expanding new entrant to the industry.

The more difficult geopolitical environment has led to a return of industrial strategy on a huge scale, led by the USA’s CHIPS Act, which appropriates more than $50 billion over five years to re-establish its global leadership, including $39 billion in direct subsidies for manufacturing.

How should the UK respond? What I’m talking about here is the core business of manufacturing semiconductor devices and the surrounding supply chain, rather than information and communication technology more widely. First, though, let’s be clear about what the goals of a UK semiconductor strategy could be.

What is a semiconductor strategy for?

A national strategy for semiconductors could have multiple goals. The UK Science and Technology Framework identifies semiconductors as one of five critical technologies, judged against criteria including their foundational character, market potential, as well as their importance for other national priorities, including national security.

It might be helpful to distinguish two slightly different goals for the semiconductor strategy. The first is the question of security, in the broadest sense, prompted by the supply problems that emerged in the pandemic, and heightened by the growing realisation of the importance and vulnerability of Taiwan in the global semiconductor industry. Here the questions to ask are: which industries are at risk from further disruptions, and what national security issues would arise from interruptions in supply?

The government’s latest refresh of its integrated foreign and defence strategy promises to “ensure the UK has a clear route to assured access for each [critical technology], a strong voice in influencing their development and use internationally, a managed approach to supply chain risks, and a plan to protect our advantage as we build it.” It reasserts the “own, collaborate, access” framework, a model introduced in the previous Integrated Review.

This framework is a welcome recognition of the fact that the UK is a medium-sized country which can’t do everything; in order to have access to the technology it needs, it must in some cases collaborate with friendly nations, and in others access technology through open global markets. But it’s worth asking what exactly is meant by “own”. This is defined in the Integrated Review thus: “Own: where the UK has leadership and ownership of new developments, from discovery to large-scale manufacture and commercialisation.”

In what sense does a nation ever own a technology? There are still a few cases where wholly state-owned organisations retain both a practical and a legal monopoly on a particular technology – nuclear weapons remain the most obvious example. But most technologies are controlled by private sector companies with complex, and often global, ownership structures. We might think that the technologies of semiconductor integrated circuit design that ARM developed are British, because the company is based in Cambridge. But it’s owned by a Japanese investment group, which has a great deal of latitude in what it does with it.

Perhaps it is more helpful to talk about control than ownership. The UK state retains a certain amount of control over technologies owned by companies with a substantial UK presence – it has been able, in effect, to block the purchase of the Newport Wafer Fab by the Chinese-owned company Nexperia. But this assertiveness is a very recent phenomenon; until very recently UK governments were entirely relaxed about the acquisition of technology companies by overseas companies. Indeed, in 2016 ARM’s acquisition by SoftBank was welcomed by the then Prime Minister, Theresa May, as being in the UK’s national interest, and a vote of confidence in post-Brexit Britain. The government has since taken new powers to block acquisitions through the National Security and Investment Act 2021, but these can only be used on grounds of national security.

The second goal of a semiconductor strategy is as part of an effort to overcome the UK’s persistent stagnation of economic productivity – to “generate innovation-led economic growth”, in the words of a recent Government response to a BEIS Select Committee report. As I have written about at length, the UK’s productivity problem is serious and persistent, so there’s certainly a need to identify and support high value sectors with the potential for growth. There is a regional dimension here, recognised in the government’s aspiration for the strategy to create “high paying jobs throughout the UK”. So it would be entirely appropriate for a strategy to support the existing cluster in the Southwest around Bristol and into South Wales, as well as to create new clusters where there are strengths in related industry sectors.

The economies of Taiwan and Korea have been transformed by their very effective deployment of an active industrial strategy to take advantage of an industry at a time of rapid technological progress and expanding markets. There are two questions for the UK now. Has the UK state (and the wider economic consensus in the country) overcome its ideological aversion to active industrial strategy on the East Asian model to intervene at the necessary scale? And, would such an intervention be timely, given where semiconductors are in the technology cycle? Or, to put it more provocatively, has the UK left it too late to capture a significant share of a technology that is approaching maturity?

What, realistically, can the UK do about semiconductors?

What interventions are possible for the UK government in devising a semiconductor strategy that addresses these two goals – of increasing the UK’s economic and military security by reducing its vulnerability to shocks in the global semiconductor supply chain, and of improving the UK’s economic performance by driving innovation-led economic growth? There is a menu of options, and what the government chooses will depend on its appetite for spending money, its willingness to take assets onto its balance sheet, and how much it is prepared to intervene in the market.

Could the UK establish the manufacturing of leading-edge silicon chips? This seems implausible. This is the most sophisticated manufacturing process in the world, enormously capital intensive and drawing on a huge amount of proprietary and tacit knowledge. The only way it could happen is if one of the three companies currently at or close to the technology frontier – Samsung, Intel or TSMC – could be enticed to establish a manufacturing plant in the UK. What would be in it for them? The UK doesn’t have a big market, and it has a labour market that is high cost yet lacking in the necessary skills, so its only chance would be to offer large direct subsidies.

In any case, the attention of these companies is elsewhere. TSMC is building a new plant in Arizona, at a cost of $40 billion, while Samsung’s new plant in Texas is costing $25 billion, with the US government using some of the CHIPS Act money to subsidise these investments. Despite Intel’s well-reported difficulties, it is planning significant investment in Europe, supported by inducements from the EU and its member states under the EU Chips Act. Intel has committed €12 billion to expanding its operations in Ireland, and €17 billion for a new fab at Magdeburg in Saxony-Anhalt, Germany.

From the point of view of security of supply, it’s not just chips from the leading edge that matter; for many applications, in automobiles, defence and industrial machinery, legacy chips produced by processes no longer at the leading edge are sufficient. In principle, establishing manufacturing facilities for such legacy chips would be less challenging than attempting to manufacture at the leading edge. However, the economics of establishing new facilities here are very difficult. The cost of producing chips is dominated by the need to amortise the very large capital cost of setting up a fab, and a new plant would be competing with long-established plants whose capital cost is already fully depreciated. Legacy chips are a commodity product, competing largely on price.
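To see why the depreciation point bites, consider a stylised per-wafer cost comparison; every figure in this sketch is invented for illustration, not taken from industry data.

    # Stylised per-wafer cost: a new legacy-node fab versus a long-established
    # one whose capital cost is fully written off. All numbers are invented.

    capex = 5e9                # hypothetical cost of building the fab
    depreciation_years = 10    # hypothetical write-down period
    wafers_per_year = 500_000  # hypothetical output
    variable_cost = 2_000      # hypothetical labour/materials/energy per wafer

    capital_charge = capex / (depreciation_years * wafers_per_year)
    new_fab_cost = variable_cost + capital_charge  # must still recover capital
    old_fab_cost = variable_cost                   # capital already depreciated

    print(f"new fab: ${new_fab_cost:,.0f}/wafer; "
          f"depreciated fab: ${old_fab_cost:,.0f}/wafer")
    # -> new fab: $3,000/wafer; depreciated fab: $2,000/wafer

In a commodity market, the depreciated plant can profitably undercut any price at which the new plant recovers its capital – which is why new entry into legacy chip manufacture is so unattractive without subsidy.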

So in practice, our security of supply can only be assured by reliance on friendly countries. It would have been helpful if the UK had been able to participate in the development of a European strategy to secure semiconductor supply chains, as Hermann Hauser has argued. But what does the UK have to contribute to the creation of more resilient supply chains, localised within networks of reliably friendly countries?

The UK’s key asset is its position in chip design, with ARM as the anchor firm. But, as a firm based on intellectual property rather than the big capital investments of fabs and factories, ARM is potentially footloose, and, as we’ve seen, it isn’t British by ownership. Rather, it is owned and controlled by a Japanese conglomerate, which needs to sell it to raise money and will seek the highest return from such a sale. After the proposed sale to Nvidia was blocked, the likely outcome now is a flotation on the US stock market, where the typical valuations of tech companies are higher than in the UK.

The UK state could seek to maintain control over ARM through the device of a “Golden Share”, as it currently does with Rolls-Royce and BAE Systems. I’m not sure what the mechanism for this would be – I would imagine the only surefire way would be for the UK government to buy ARM outright from SoftBank in an agreed sale, and then subsequently float it with the golden share in place. I don’t suppose this would be cheap – the agreed price for the thwarted Nvidia takeover was $66 billion. The UK government would then attempt to recoup as much of the purchase price as possible through a subsequent flotation, though the presence of the golden share would presumably reduce the market value of the remaining shares. Still, the UK government did spend £46 billion nationalising a bank.

What other levers does the UK have to consolidate its position in chip design? Intelligent use of government purchasing power is often cited as an ingredient of successful industrial policy, and here there is an opportunity. The government made the welcome announcement in the Spring Budget that it would commit £900 m to building an exascale computer, to create a sovereign capability in artificial intelligence. The procurement process for this facility should be designed to drive innovation, by UK companies, in the design of specialised processing units for AI with lower energy consumption.

A strong public R&D base is a necessary – but not sufficient – condition for an effective industrial strategy in any R&D-intensive industry. As a matter of policy, the UK ran down its public sector research effort in mainstream silicon microelectronics, in response to the UK’s overall weak position in the industry. The Engineering and Physical Sciences Research Council states on its website: “In 2011, EPSRC decided not to support research aimed at miniaturisation of CMOS devices through gate-length reduction, as large non-UK industrial investment in this field meant such research would have been unlikely to have had significant national impact.” I don’t think this was – or is – an unreasonable policy given the realities of the UK’s global position. The UK maintains academic research strength in areas such as III-V semiconductors for optoelectronics, 2D materials such as graphene, and organic semiconductors, to give a few examples.

Given the sophistication of state-of-the-art microelectronic manufacturing technology, for R&D to be relevant and translatable into commercial products it is important that open-access facilities are available for prototyping research devices, with pilot-scale equipment to demonstrate manufacturability and facilitate scale-up. The UK doesn’t have research centres on the scale of Belgium’s IMEC or Taiwan’s ITRI, and the issue is whether, given the shallowness of the UK’s industry base, there would be a customer base for such a facility. There are a number of university facilities focused on supporting academic researchers in various specialisms – at Glasgow, Manchester, Sheffield and Cambridge, to give some examples. Two centres are associated with the Catapult Network – the National Printable Electronics Centre in Sedgefield, and the Compound Semiconductor Catapult in South Wales.

This existing infrastructure is certainly insufficient to support an ambition to expand the UK’s semiconductor sector. But a decision to enhance it will need a careful and realistic evaluation of which niches the UK could hope to build a presence in, building on areas of existing UK strength and understanding the scale of investment elsewhere in the world.

To summarise, the UK must recognise that, in semiconductors, it is currently in a relatively weak position. For security of supply, the focus must be on staying close to like-minded countries, such as our European neighbours. For the UK to develop its own semiconductor industry further, the emphasis must be on finding and developing particular niches where the UK does have some existing strength to build on, and where there is the prospect of rapidly growing markets. And the UK should look after its one genuine area of strength, in chip design.

Four lessons for industrial strategy

What should the UK do about semiconductors? Another tempting, but unhelpful, answer is “I wouldn’t start from here”. The UK’s current position reflects past choices, so to conclude, perhaps it’s worth drawing some more general lessons about industrial strategy from the history of semiconductors in the UK, and globally.

1. Basic research is not enough

The historian David Edgerton has observed that it is a long-running habit of the UK state to use research policy as a substitute for industrial strategy. Basic research is relatively cheap, compared to the expensive and time-consuming process of developing and implementing new products and processes. In the 1980s, it became conventional wisdom that governments should not get involved in applied research and development, which should be left to private industry, and, as I recently discussed at length, this has profoundly shaped the UK’s research and development landscape. But excellence in basic research has not produced a competitive semiconductor industry.

The last significant act of government support for the semiconductor industry in the UK was the Alvey programme of the 1980s. The programme was not without some technical successes, but it clearly failed in its strategic goal of keeping the UK semiconductor industry globally competitive. As the official evaluation of the programme concluded in 1991 [1]: “Support for pre-competitive R&D is a necessary but insufficient means for enhancing the competitive performance of the IT industry. The programme was not funded or equipped to deal with the different phases of the innovation process capable of being addressed by government technology policies. If enhanced competitiveness is the goal, either the funding or scope of action should be commensurate, or expectations should be lowered accordingly”.

But the right R&D institutions can be useful; the experience of both Japan and the USA shows the value of industry consortia – though this only works if there is already a strong, R&D-intensive industry base. The creation of TSMC shows that it is possible to create a global giant from scratch, and its story emphasises the role of translational research centres, like Taiwan’s ITRI and Belgium’s IMEC. But to be effective in creating new businesses, such centres need a focus on process improvement and manufacturing, as well as discovery science.

2. Big is beautiful in deep tech

The modern semiconductor industry is the epitome of “Deep Tech”: hard innovation, usually in the material or biological domains, demanding long-term R&D efforts and large capital investments. For all the romance of garage-based start-ups, in a business that demands up-front capital investments in the tens of billions of dollars, and annual research budgets on the scale of medium-sized nation states, one needs serious, large-scale organisations to succeed.

The ownership and control of these organisations does matter. From a national point of view, it is important to have large firms anchored to the territory, whether by ownership or by significant capital investment that would be hard to undo, so ensuring the permanence of such firms is the legitimate business of government. Naturally, big firms often start as fast-growing small ones, and the UK should make more effort to hang on to companies as they scale up.

3. Getting the timing right in the technology cycle

Technological progress is uneven – at any given time, one industry may be undergoing very dramatic technological change, while other sectors are relatively stagnant. There may be a moment when the state of technology promises a period of rapid development, and there is a matching market with the potential for fast growth. Firms that have the capacity to invest and exploit such “windows of opportunity”, to use David Sainsbury’s phrase, will be able to generate and capture a high and rising level of added value.

The timing of interventions to support such firms is crucial, and undoubtedly not easy, but history shows that nations able to offer significant strategic support at the right stage can see a material impact on their economic performance. The recent rapid economic growth of Korea and Taiwan is a case in point. These countries have gone beyond catch-up economic growth to equal or surpass the UK, reflecting their arrival at the technological frontier in high-value sectors such as semiconductors. Of course, in these countries there has been a much closer entanglement between the state and firms than UK policymakers are comfortable with.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF: GDP at PPP in international dollars was taken for the base year of 2019, a time series was constructed using IMF real GDP growth data, and the result was expressed per capita.
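For anyone wanting to reproduce a chart like this, the construction described in the caption is straightforward to sketch; all the numbers below are placeholders rather than IMF data.

    # Construct the series as the caption describes: take GDP at PPP in a
    # base year, extend it with real GDP growth rates, then express it per
    # capita. Every number here is a placeholder, not IMF data.

    base_gdp_ppp = 2.2e12                 # placeholder PPP level, base year 2019
    population = 51.7e6                   # placeholder, held constant for simplicity
    real_growth = [0.022, -0.007, 0.041]  # placeholder annual real growth rates

    level = base_gdp_ppp
    per_capita = [level / population]
    for g in real_growth:
        level *= 1 + g                    # real growth keeps base-year PPP prices
        per_capita.append(level / population)

    print([round(x) for x in per_capita])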

4. If you don’t choose sectors, sectors will choose you

In the UK, so-called “vertical” industrial strategies, in which explicit choices are made to support specific sectors, have long been out of favour. Making choices between sectors is difficult, and being perceived to have made the wrong choices damages the reputations of individuals and institutions. But even in the absence of an explicitly articulated vertical industrial strategy, policy choices will have the effect of favouring one sector over another.

In the 1990s and 2000s, the UK chose oil and gas and financial services over semiconductors, or indeed advanced manufacturing more generally. Our current economic situation reflects, in part, that choice.

[1] Evaluation of the Alvey Programme for Advanced Information Technology. Ken Guy, Luke Georghiou, et al. HMSO for DTI and SERC (1991)

What should the UK do about semiconductors? Part 1: the UK’s place in the semiconductor world

The UK government is currently in the process of writing a new strategy for semiconductors. This is the first of a series of three blogposts setting out the context for this strategy.

In this first part, I discuss the new global environment, in which a tenser geopolitical situation has revived a policy climate around the world which is much more favourable to large scale government interventions in the industry. I’ll sketch the global state of the semiconductor industry and try to quantify the UK’s position in the semiconductor world.

In the second part, I’ll discuss the past and future of semiconductors, mentioning some of the important past interventions by governments around the world that have shaped the current situation, and I’ll speculate on where the industry might be going in the future.

Finally, in the third part, I’ll ask where this leaves the UK, and speculate on what its semiconductor strategy might seek to achieve.

As recent events have shown, the semiconductor industry is one of the most strategically important industries in the world, so it’s going to be very important for the UK government to get its strategy right. But there are more general principles at stake. We’re at a moment when a worldwide consensus behind the ideas of free trade and laissez-faire economics is being rapidly replaced in the major economies of the world by much more interventionist, and assertively nationalist, industrial policies. This isn’t comfortable territory for the British state, so how it responds to this test case will be very telling.

War, Semiconductors and the CHIPS act

It’s been reported that Russia has been dismantling washing machines to extract their integrated circuits, for use in missiles. True or not, this story illustrates two important features of the modern world. Integrated circuits – silicon chips – are now ubiquitous and indispensable for modern living – they’re not just to be found in computers and mobile phones; they’re in automobiles, consumer durables, even toys. And modern precision-guided weapon systems depend on them, so with a European war entering its second year, their strategic importance couldn’t be more obvious.

If demand for integrated circuits and other semiconductors is ubiquitous, we’ve also been reminded that their supply isn’t secure. The pandemic led to severe supply-chain disruptions, which in turn led to major losses of production in the global automobile industry. The manufacture of the most technically advanced integrated circuits is concentrated in a single company – TSMC – located in the contested territory of Taiwan. This dependence means that, if the People’s Republic of China were to invade Taiwan, the consequences for the world economy would be disastrous.

This is the context for the USA’s CHIPS and Science Act – a hugely significant, and expensive, government intervention to rebuild the USA’s manufacturing capacity in the most advanced semiconductors. Underlying this is a serious attempt to restore its own technological supremacy – and specifically, to maintain its technological superiority over China.

This is the return, at scale, of industrial strategy. The primary driving force, as it was in the 1950s and 60s, is geopolitics, but the economic and political dimensions are important too, with an emphasis on restoring manufacturing – and the good jobs it provides – to communities that have suffered from deindustrialisation. The Act provides for expenditures, over five years, of $39 billion on incentives to return more semiconductor manufacturing to the USA, $13.2 billion for additional research and development, and $10 billion to create regional innovation hubs in economically lagging parts of the country.

It’s worth stressing what an ideological about-turn this represents. An economic advisor to the first President Bush reputedly said, “Potato chips, computer chips, what’s the difference? A hundred dollars of one or a hundred dollars of the other is still a hundred dollars.” This is a marvellously succinct expression of the neoliberal argument against sector-based industrial strategy. It’s now clear how naive that view was. Crisps weren’t about to undergo the most rapid period of technological progress in history – progress that propelled countries like Taiwan and Korea, which seized the opportunity, from middle-income economies into the ranks of rich countries at the technological frontier. And Frito-Lay doesn’t make missiles.

The European Union has responded with its own European Chips Act. This includes an €11 billion “Chips for Europe Initiative”, together with further coordination of R&D and education and skills initiatives. Most significantly, it proposes a relaxation of state aid rules, allowing member states to directly subsidise new manufacturing facilities in Europe.

How should the UK respond to this new environment? The government is preparing a Semiconductor Strategy, but this has been repeatedly delayed.

The global semiconductor industry

What are the products of the global semiconductor industry? The most high-profile are the enormously complex integrated circuits that power our personal computers, gaming stations and mobile phones, as well as driving the giant server farms that underlie cloud computing. The most important component of modern electronics is the transistor, a solid-state switch. A few transistors can be combined to make a logic gate – the basic unit of a computer; the dominant way of doing this is called “complementary metal oxide semiconductor” – hence CMOS. An integrated circuit combines a number of transistors on a single piece of silicon – a chip. Different designs of integrated circuit produce central processing units (CPUs), graphics processing units (GPUs), and solid-state memory.
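As a toy illustration of the complementary principle, the following sketch models transistors as ideal voltage-controlled switches and wires four of them into a NAND gate; this is a logical abstraction, not a description of real device physics.

    # Toy CMOS logic: an NMOS transistor conducts when its gate is high,
    # a PMOS when its gate is low. A NAND gate uses two PMOS in parallel
    # (pull-up) and two NMOS in series (pull-down); exactly one network
    # conducts for any input, which is why CMOS draws almost no static power.

    def nmos(gate: bool) -> bool:
        return gate          # conducting when the gate is high

    def pmos(gate: bool) -> bool:
        return not gate      # conducting when the gate is low

    def nand(a: bool, b: bool) -> bool:
        pull_up = pmos(a) or pmos(b)     # parallel path to the supply
        pull_down = nmos(a) and nmos(b)  # series path to ground
        assert pull_up != pull_down      # complementary by construction
        return pull_up

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", int(nand(a, b)))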

The more transistors a chip has, the more computing power, or the bigger the memory; so the history of microelectronics is a story of miniaturisation, with each generation of chips packing more transistors onto a single integrated circuit, as expressed by Moore’s law. A modern CPU (such as Apple’s M1, made by TSMC) has 16 billion transistors, each with dimensions measured in nanometres. These are made by the most sophisticated and precise manufacturing processes in the world, through the successive deposition of layers of different materials, etching each layer with the patterns that define the components.
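Moore’s law is, at bottom, compound doubling. The sketch below takes the textbook two-year doubling period and the M1’s 16 billion transistors as its reference point; the industry’s actual cadence has varied over the decades, so the output is indicative only.

    # Moore's law as compound doubling:
    # count(year) = base_count * 2 ** ((year - base_year) / doubling_years)
    # The two-year doubling period is the classic rule of thumb, not an
    # exact historical fit.

    def transistor_count(year, base_year=2021, base_count=16e9,
                         doubling_years=2.0):
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1991, 2011, 2021, 2031):
        print(year, f"{transistor_count(year):.2g}")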

Only three companies in the world have the capability to operate at this technological frontier: the USA’s Intel, Korea’s Samsung, and Taiwan’s TSMC. In recent years, progress at Intel has stumbled, and TSMC has taken a commanding lead in manufacturing the highest-performance integrated circuits. TSMC focuses purely on manufacturing, making integrated circuits to the designs of so-called fabless companies, such as Nvidia. Intel, on the other hand, both designs its own chips and manufactures them.

The scale of capital investment required to make these advanced circuits is breathtaking. TSMC is reported to have invested $60 billion in its facilities to manufacture chips at the 3 nm and 5 nm nodes, and has been incentivised by the US government to establish production in Arizona, at a cost of $40 bn. These huge sums reflect the high cost of the ultra-sophisticated, high-precision equipment required to pattern circuits at the nanoscale. The frontier processes rely on the extreme ultraviolet (EUV) lithography systems made by the Dutch company ASML, a single unit of which may cost $150 million. Other important centres of equipment production include Japan and the USA.

There is still substantial demand for less advanced integrated circuits, for applications in cars, consumer durables, industrial machinery, weapons systems and much else. In addition to the three industry leaders, companies like GlobalFoundries, STMicro and NXP operate manufacturing plants in the USA, Europe and Singapore. China’s leading semiconductor company, Semiconductor Manufacturing International Corporation (SMIC), falls into this category, though it has aspirations to reach the technological frontier and is supported in that goal by China’s government.

Not all semiconductors are silicon. Other materials – compound semiconductors, such as gallium arsenide and gallium nitride – are particularly important for optoelectronics: the business of converting electricity to light and back again. These are the materials from which solid-state lasers and light-emitting diodes are made; familiar in everyday life in supermarket scanners and low-energy light bulbs, but no less important as the technologies that make the internet possible, converting electronic signals into the optical pulses that carry information at huge rates through optical fibres.

The primary driving force for innovation in semiconductors has been information and communication technology – the desire for more powerful computers and for the higher rates of data transmission that make today’s internet possible. But information processing isn’t the only important use of semiconductors. In power electronics, the focus is on switching, amplifying and transforming the much higher currents needed to drive electric motors. These technologies are rapidly growing in importance; the transition to a net-zero energy economy will be driven by the replacement of internal combustion engines with electric motors. The growth of electric vehicles, the growing importance of renewable energy, and the need for energy storage will all drive the need to handle and transform high-power electricity using lightweight, efficient solid-state devices.

The UK’s place in the semiconductor world

The UK is not a big player in the global semiconductor industry. Its exports of integrated circuits, worth $1.63 bn, represent 0.24% of world trade; insignificant compared to the leaders, Taiwan, China and Korea, whose exports are worth $138 bn, $120 bn and $89.1 bn respectively. Outside the Far East, the USA exports $44.2 bn; it is this weak position relative to the East Asian countries that has prompted the measures of the CHIPS Act. In Europe, the leading exporters are Germany and Ireland, at $12.8 bn and $11.2 bn respectively.

As mentioned above, the manufacture of integrated circuits is hugely capital intensive, so it’s important to look at the suppliers of the equipment used to make chips. The export trade here is dominated by Japan, the Netherlands and the USA, with exports worth $12 bn, $11.7 bn and $10.7 bn respectively. The UK has 1.06% of the world market, with exports worth $497 m.

One other important component of the supply chain for chip manufacture is the chemicals and materials needed. These include the silicon single crystals from which the wafers are cut, amongst the purest substances ever made, a wide range of industrial gases, solvents and reagents, all supplied at very high purity grades, and highly optimised speciality chemicals, such as the materials that make up photoresists. This sector is dominated by Japan, whose exports, worth $4.23 bn, represent 29.5% of world trade. Here the UK exports $212 m, a 1.48% share of the world market.

It’s worth reflecting on these figures in the context of the UK’s overall trade position. The total value of its exports in 2020 was $700 bn, made up of $371 bn in products and $329 bn in services, so these three semiconductor-related sectors – roughly $2.3 bn in total – amount to about 0.6% of its total product exports. But as these figures emphasise, service sector exports are particularly important for the UK, and this bigger story is mirrored in the semiconductor sector.
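That share follows directly from the figures already quoted; a quick check:

    # Sum the three UK semiconductor-related export figures quoted above
    # and express them as a share of total UK product exports.

    ics = 1.63e9        # integrated circuits
    equipment = 497e6   # chip-making equipment
    materials = 212e6   # chemicals and materials
    product_exports = 371e9

    semi_total = ics + equipment + materials
    print(f"${semi_total / 1e9:.2f} bn = "
          f"{semi_total / product_exports:.2%} of product exports")
    # -> $2.34 bn = 0.63% of product exports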

The most significant semiconductor company in the UK doesn’t make any semiconductors – ARM designs chips, deriving its income from royalties and licensing fees for its intellectual property. Its revenues of $2.7 bn in 2021 would have made a significant contribution to the UK’s service exports (2020 UK service exports included $21.3 bn in royalties and license fees). Smaller companies, such as Imagination and Graphcore, are similarly focused on design rather than manufacturing.

In recent years, the question of the ownership of ARM has achieved prominence. Originally a public company listed on the London Stock Exchange, ARM was acquired by the Japanese finance house SoftBank in 2016. A proposed sale to the US firm Nvidia collapsed last year, after concerns from regulators in the UK, the USA and the EU that the acquisition would seriously reduce competition. SoftBank remains keen to sell the company, so the future ownership and control of ARM remains in question.

Sources

All trade figures are 2020 numbers, from the Observatory of Economic Complexity.

Up next: What should the UK do about semiconductors? Part 2: the past and future of the global semiconductor industry