Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas-cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but it also needs an understanding of the historical context – and in particular, of the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Where are the White Elephants? Projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running, producing about 60 TWh a year of non-intermittent, low carbon power; in 2019 its output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison perhaps would be to express it as a fraction of GDP – in those terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike that the Ukraine invasion caused – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.
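The two scalings can be sanity-checked in a few lines. The price-level ratio and nominal GDP figures below are my own round approximations, used purely for illustration – they are not Henderson’s numbers:

```python
# Scaling Henderson's 1977 upper estimate (£2.1bn) to 2021 money in two ways.
# The deflator and GDP figures are rough approximations, for illustration only.
henderson_loss_1977 = 2.1e9    # Henderson's upper estimate, 1977 pounds

price_level_ratio = 6.6        # approx. growth in UK retail prices, 1977 -> 2021
gdp_1977 = 150e9               # approx. UK nominal GDP, 1977
gdp_2021 = 2270e9              # approx. UK nominal GDP, 2021

by_inflation = henderson_loss_1977 * price_level_ratio
by_gdp_share = henderson_loss_1977 / gdp_1977 * gdp_2021

print(f"Adjusted for inflation: £{by_inflation / 1e9:.1f}bn")   # a bit under £14bn
print(f"As a share of GDP:      £{by_gdp_share / 1e9:.1f}bn")   # roughly £30bn
```

The GDP-share figure comes out higher because the economy has grown in real terms since 1977, not just in prices – which is why it is arguably the fairer measure of the burden the error placed on the country at the time.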

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power dramatically changed. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. There was a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, leading to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to these kinds of accident. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, and at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, where instead of building AGRs the UK had built light water reactors, was based on an estimate for the cost of light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
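Henderson’s counterfactual figure for Sizewell B can be roughly reconstructed. The 1973-to-1987 price ratio below is my own round approximation, not a number from his paper:

```python
# Back-of-envelope reconstruction of the counterfactual cost for Sizewell B.
cost_per_kw_1973 = 132         # Henderson's light water reactor estimate, £/kW, 1973 prices
capacity_kw = 1.2e6            # Sizewell B: 1.2 GW
price_ratio_1973_1987 = 5.0    # approx. growth in UK prices, 1973 -> 1987 (my assumption)

expected_1987 = cost_per_kw_1973 * capacity_kw * price_ratio_1973_1987
actual_1987 = 2.0e9            # Sizewell B outturn cost, 1987 prices

print(f"Counterfactual estimate: £{expected_1987 / 1e6:.0f}m")   # around £800m
print(f"Actual outturn:          £{actual_1987 / 1e6:.0f}m, "
      f"an overrun of roughly {actual_1987 / expected_1987:.1f}x")
```

On these rough numbers, the light water reactor that was actually built cost around two and a half times what Henderson’s counterfactual assumed.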

Sizewell B was a first-of-a-kind reactor, so one would expect subsequent reactors built to the same design to fall in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last-of-a-kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would have been further expansion of fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity from fossil fuels and from nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
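A minimal levelised-cost sketch makes this trade-off concrete. All the plant figures here are illustrative round numbers, not real project data; the point is the structure of the calculation, not the specific values:

```python
def crf(rate: float, years: int) -> float:
    """Capital recovery factor: the annual payment per £1 of capital borrowed."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex_per_kw: float, fuel_per_mwh: float, rate: float,
         years: int = 40, load_factor: float = 0.85) -> float:
    """Levelised cost of electricity, £/MWh, ignoring O&M for simplicity."""
    mwh_per_kw_year = 8.76 * load_factor   # 8760 hours/year, converted to MWh per kW
    capital = capex_per_kw * crf(rate, years) / mwh_per_kw_year
    return capital + fuel_per_mwh

# A nuclear-like plant (high capex, cheap fuel) vs a fossil-like plant (the reverse).
for rate in (0.03, 0.10):
    nuclear = lcoe(capex_per_kw=4000, fuel_per_mwh=10, rate=rate)
    fossil = lcoe(capex_per_kw=1000, fuel_per_mwh=50, rate=rate)
    print(f"discount rate {rate:.0%}: nuclear £{nuclear:.0f}/MWh, fossil £{fossil:.0f}/MWh")
```

With cheap capital, the nuclear-like plant is the cheaper generator; at a high discount rate the positions reverse, while the fossil plant remains hostage to its fuel price term. Interest rates and fuel price expectations, as much as engineering, drive the nuclear/fossil choice.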

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over the course of that decade, natural gas’s share of electricity generation capacity rose from 1.3% to nearly 30% by 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquified natural gas carried by tanker from huge fields in, for example, the Middle East. So for a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike, which began in 2021 and was intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

The shifting sands of UK Government technology prioritisation

In the last decade, the UK has had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle. This 3500 word piece looks at this history of instability in UK innovation policy, and suggests some principles of consistency and clarity which might give us some more stability in the decade to come. A PDF version can be downloaded here.

Introduction

The problem of policy churn has been identified in a number of policy areas as a barrier to productivity growth in the UK, and science and innovation policy is no exception to this. The UK can’t do everything – it represents less than 3% of the world’s R&D resources, so it needs to specialise. But recent governments have not found it easy to decide where the UK should put its focus, and then stick to those decisions.

In 2012, the then Science Minister, David Willetts, launched an initiative which identified 8 priority technologies – the “Eight Great Technologies”. Willetts reflected on the fate of this initiative in a very interesting paper published last year. In short, while there has been consensus on the need for the UK to focus (with the exception of one short period), the areas of focus have been subject to frequent change.

Substantial changes in direction for technology policy have occurred despite the fact that we’ve had a single political party in power since 2010, with particular instability since 2015, in the period of Conservative majority government. Since 2012, the average life-span of an innovation policy has been about 2.5 years. Underneath the headline changes, it is true that there have been some continuities. But given the long time-scales needed to establish research programmes and to carry them through to their outcomes, this instability makes it difficult to implement any kind of coherent strategy.

Shifting Priorities: from “Eight Great Technologies”, through “Seven Technology Families”, to “Five Critical Technologies”

Table 1 summarises the various priority technologies identified in government policy since 2012, grouped in a way which best brings out the continuities.

The “Eight Great Technologies” were introduced in 2012 in a speech to the Royal Society by the then Chancellor of the Exchequer, George Osborne; a paper by David Willetts expanded on the rationale for the choices. The 2014 Science and Innovation Policy endorsed the “Eight Great Technologies”, with the addition of quantum technology, which, following an extensive lobbying exercise, had been added to the list in the 2013 Autumn Statement.

2015 brought a majority Conservative government, but continuity in the offices of Prime Minister and Chancellor of the Exchequer didn’t translate into continuity in innovation policy. The new Secretary of State in the Department of Business, Innovation and Skills was Sajid Javid, who brought to the post a Thatcherite distrust of anything that smacked of industrial strategy. The main victim of this world-view was the innovation agency Innovate UK, which was subjected to significant cut-backs, causing lasting damage.

This interlude didn’t last very long – after the Brexit referendum, David Cameron’s resignation and the premiership of Theresa May, there was an increased appetite for intervention in the economy, coupled with a growing consciousness and acknowledgement of the UK’s productivity problem. Greg Clark (a former Science Minister) took over at a renamed and expanded Department of Business, Energy and Industrial Strategy.

A White Paper outlining a “modern industrial strategy” was published in 2017. Although it nodded to the “Eight Great Technologies”, the focus shifted to four “missions”. Money had already been set aside in the 2016 Autumn Statement for an “Industrial Strategy Challenge Fund” which would support R&D in support of the priorities that emerged from the Industrial Strategy.

2019 saw another change of Prime Minister – and another election, which brought another Conservative government, with a much greater majority, and a rather interventionist manifesto that promised substantial increases in science funding, including a new agency modelled on the USA’s ARPA, and a promise to “focus our efforts on areas where the UK can generate a commanding lead in the industries of the future – life sciences, clean energy, space, design, computing, robotics and artificial intelligence.”

But the “modern industrial strategy” didn’t survive long into the new administration. The new Secretary of State was Kwasi Kwarteng, from the wing of the party with an ideological aversion to industrial strategy. In 2021, the industrial strategy was superseded by a Treasury document, the Plan for Growth, which, while placing strong emphasis on the importance of innovation, took a much more sector and technology agnostic approach to its support. The Plan for Growth was supported by a new Innovation Strategy, published later in 2021. This did identify a new set of priority technologies – “Seven Technology Families”.

2022 was the year of three Prime Ministers. Liz Truss’s hard-line free market position was certainly unfriendly to the concept of industrial strategy, but in her 44 day tenure as Prime Minister there was not enough time to make any significant changes in direction to innovation policy.

Rishi Sunak’s Premiership brought another significant development, in the form of a machinery of government change reflecting the new Prime Minister’s enthusiasm for technology. A new department – the Department for Science, Innovation and Technology – meant that there was now a cabinet level Secretary of State focused on science. Another significant evolution in the profile of science and technology in government was the increasing prominence of national security as a driver of science policy.

This had begun in the 2021 Integrated Review, which was an attempt to set a single vision for the UK’s place in the world, covering security, defence, development and foreign policy. This elevated “Sustaining strategic advantage through science and technology” to one of four overarching principles. The disruptions to international supply chains during the Covid pandemic, and the 2022 invasion of Ukraine by Russia and the subsequent large scale European land war, raised the issue of national security even higher up the political agenda.

A new department, and a modified set of priorities, produced a new 2023 strategy – the Science and Technology Framework – taking a systems approach to UK science and technology. This included a new set of technology priorities – the “Five Critical Technologies”.

Thus in a single decade, we’ve had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle.

Continuities and discontinuities

There are some continuities in substance in these technology priorities. Quantum technology appeared around 2013 as an addendum to the “Eight Great Technologies”, and survives into the current “Five Critical Technologies”. Issues of national security are a big driver here, as they are for much larger scale programmes in the USA and China.

In a couple of other areas, name changes conceal substantial continuity. What was called synthetic biology in 2012 is now encompassed in the field of engineering biology. Artificial Intelligence has come to high public prominence today, but it is a natural evolution of what used to be called big data, driven by technical advances in machine learning, more computer power, and bigger data sets.

Priorities in 2017 were defined as Grand Challenges, not Technologies. The language of challenges is taken up in the 2021 Innovation Strategy, which proposes a suite of Innovation Missions, distinct from the priority technology families, to address major societal challenges, in areas such as climate change, public health, and intractable diseases. The 2023 Science and Technology Framework, however, describes investments in three of the Five Critical Technologies, engineering biology, artificial intelligence, and quantum technologies, as “technology missions”, which seems to use the term in a somewhat different sense. There is room for more clarity about what is meant by a grand challenge, a mission, or a technology, which I will return to below.

Another distinction that is not always clear is between technologies and industry sectors. Both the Coalition and the May governments had industrial strategies that explicitly singled out particular sectors for support, including through support for innovation. These are listed in table 2. But it is arguable that at least two of the Eight Great Technologies – agritech, and space & satellites – would be better thought of as industry sectors rather than technologies.

Table 2 – industrial strategy sectors, as defined by the Coalition, and the May government.

The sector approach did underpin major applied public/private R&D programmes (such as the Aerospace Technology Institute, and the Advanced Propulsion Centre), and new R&D institutions, such as the Offshore Renewable Energy Catapult, designed to support specific industry sectors. Meanwhile, under the banner of Life Sciences, there is continued explicit support for the pharmaceutical and biotech industries, though here there is a lack of clarity about whether the primary goal is to promote the health of citizens through innovation support to the health and social care system, or to support pharma and biotech as high value, exporting, industrial sectors.

But two of the 2023 “five critical technologies” – semiconductors and future telecoms – are substantially new. Again, these look more like industrial sectors than technologies, and while no one can doubt their strategic importance in the global economy it isn’t obvious that the UK has a particularly strong comparative advantage in them, either in the size of the existing business base or the scale of the UK market (see my earlier discussion of the background to a UK Semiconductor Strategy).

The story of the last ten years, then, is a lack of consistency, not just in the priorities themselves, but in the conceptual basis for making the prioritisation – whether challenges or missions, industry sectors, or technologies.

From strategy to implementation

How does one turn from strategy to implementation: given a set of priority sectors, what needs to happen to turn these into research programmes, and then translate that research into commercial outcomes? An obvious point that nonetheless needs stressing is that this process has long lead times, which isn’t compatible with innovation strategies that have an average lifetime of 2.5 years.

To quote the recent Willetts review of the business case process for scientific programmes: “One senior official estimated the time from an original idea, arising in Research Councils, to execution of a programme at over two and a half years with 13 specific approvals required.” It would obviously be desirable to cut some of the bureaucracy that causes such delays, but it is striking that the time taken to design and initiate a research programme is of the same order as the average lifetime of an innovation strategy.

One data point here is the fate of the Industrial Strategy Challenge Fund. This was announced in the 2016 Autumn Statement, anticipating the 2017 Industrial Strategy White Paper, and was set up to support translational research programmes in support of that Industrial Strategy. As we have seen, the strategy was de-emphasised in 2019, and formally scrapped in 2021. Yet the research programmes set up to support it are still going, with money still in the budget to be spent in FY 24/25.

Of course, much worthwhile research will be being done in these programmes, so the money isn’t wasted; the problem is that such orphan programmes may not have any follow-up, as new programmes on different topics are designed to support the latest strategy to emerge from central government.

Sometimes the timescales are such that there isn’t even a chance to operationalise one strategy before another one arrives. The major public funder of R&D, UKRI, produced a five year strategy in March 2022, which was underpinned by the seven technology families. To operationalise this strategy, UKRI’s constituent research councils produced a set of delivery plans. These were published in September 2022, giving them a run of six months before the arrival of the 2023 Science and Technology Framework, with its new set of critical technologies.

A natural response of funding agencies to this instability would be to decide themselves what best to do, and then do their best to retro-fit their ongoing programmes to new government strategies as they emerge. But this would defeat the point of making a strategy in the first place.

The next ten years

How can we do better over the next decade? We need to focus on consistency and clarity.

Consistency means having one strategy that we stick to. If we have this, investors can have confidence in the UK, research institutions can make informed decisions about their own investments, and individual researchers can plan their careers with more confidence.

Of course, the strategy should evolve, as unexpected developments in science and technology appear, and as the external environment changes. And it should build on what has gone before – for example, there is much of value in the systems approach of the 2023 Science and Technology Framework.

There should be clarity on the basis for prioritisation. I think it is important to be much clearer about what we mean by Grand Challenges, Missions, Industry Sectors, and Technologies, and how they differ from each other. With sharper definitions, we might find it easier to establish clear criteria for prioritisation.

For me, Grand Challenges establish the conditions we are operating under. Some grand challenges might include:

  • How to move our energy economy to a zero-carbon basis by 2050;
  • How to create an affordable and humane health and social care system for an ageing population;
  • How to restore productivity growth to the UK economy and reduce the UK’s regional disparities in economic performance;
  • How to keep the UK safe and secure in an increasingly unstable and hostile world.

One would hope for a wide consensus about the scale of these problems, though not everyone will agree on the best way of tackling them, nor will it always be obvious.

Some might refer to these overarching issues as missions, using the term popularised by Mariana Mazzucato, but I would prefer to reserve “mission” for something more specific, with a sense of timescale and a definite target. The 1960s Moonshot programme is often taken as an exemplar, though I think the more significant mission from that period was to create the ability for the USA to land a half tonne payload anywhere on the earth’s surface, with an accuracy of a few hundred metres or better.

The crucial feature of a mission, then, is that it is a targeted programme to achieve a strategic goal of the state, one that requires both the integration and refinement of existing technologies and the development of new ones. Defining and prioritising missions requires working across the whole of government, to identify the problems that the state needs to have solved, and that are tractable enough, given reasonable technology foresight, to be worth attempting.

The key questions for judging missions, then, are: how much does the government want this to happen, how feasible is it given foreseeable technology, how well equipped is the UK to deliver it given its industrial and research capabilities, and how affordable is it?

For supporting an industry sector, though, the questions are different. Sector support is part of an active industrial strategy, and given the tendency of industry sectors to cluster in space, this has a strong regional dimension. The goals of industrial strategy are largely economic – to raise the economic productivity of a region or the nation – so the criteria for selecting sectors should be based on their importance to the economy in terms of the fraction of GVA that they supply, and their potential to improve productivity.

In the past industrial strategy has often been driven by the need to create jobs, but our current problem is productivity, rather than unemployment, so I think the key criteria for selecting sectors should be their potential to create more value through the application of innovation and the development of skills in their workforces.

In addition to the economic dimension, there may also be a security aspect to the choice, if there is a reason to suppose that maintaining capability in a particular sector is vital to national security. The 2021 nationalisation of the steel forging company, Sheffield Forgemasters, to secure the capability to manufacture critical components for the Royal Navy’s submarine fleet, would have been unthinkable a decade ago.

Industrial strategy may involve support for innovation, for example through collaborative programmes of pre-competitive research. But it needs to be broader than just research and development; it may involve developing institutions and programmes for innovation diffusion, the harnessing of public procurement, the development of specialist skills provision, and at a regional level, the provision of infrastructure.

Finally, on what basis should we choose a technology to focus on? By a technology priority, I mean an emerging capability arising from new science, which could be adopted by existing industry sectors, or could create new, disruptive sectors. Here an understanding of the international research landscape, and the UK’s part in it, is a crucial starting point. Even the newest technology, to be implemented, depends on existing industrial capability, so the shape of the existing UK industrial base does need to be taken into account. And one shouldn’t underplay the importance of the vision of talented and driven individuals.

This isn’t to say that priorities for the whole of the science and innovation landscape need to be defined in terms of challenges, missions, and industry sectors.
A general framework for skills, finance, regulation, international collaboration, and infrastructure – as set out by the recent Science & Innovation Framework – needs to underlie more specific prioritisation. Maintaining the health of the basic disciplines is important to provide resilience in the face of the unanticipated, and it is important to be open to new developments and maintain agility in responding to them.

The starting point for a science and innovation strategy should be to realise that, very often, science and innovation shouldn’t be the starting point. Science policy is not the same as industrial strategy, even though it’s often used as a (much cheaper) substitute for it. For challenges and missions, defining the goals must come first; only then can one decide what advances in science and technology are needed to bring those in reach. Likewise, in a successful industrial strategy, close engagement with the existing capabilities of industry and the demands of the market are needed to define the areas of science and innovation that will support the development of a particular industry sector.

As I stressed in my earlier, comprehensive, survey of the UK Research and Development landscape, underlying any lasting strategy needs to be a settled, long-term view of what kind of country the UK aspires to be, what kind of economy it should have, and how it sees its place in the world.

Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try to make the small, successful cities bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research has resulted in an exemplary knowledge-based economy, where that investment in public R&D attracts in private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus around the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy. In the UK, however, big second tier cities, like Manchester, Birmingham, Leeds and Glasgow, underperform economically and in effect drag the economy down.

Let’s do the thought experiment where we imagine Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since the GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how successful economically they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by just over 4.2%, to a bit more than £26,000 – still below the UK average.
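The arithmetic behind this thought experiment can be checked in a few lines. This is a rough sketch using the approximate 2018 figures quoted above; small differences in rounding explain the slight gap between the ≈4.3% it produces and the “just over 4.2%” in the text.

```python
# Rough check of the thought experiment, using the approximate
# 2018 figures quoted above (GVA per head, resident populations).
cambridge_pop = 126_000
gm_pop = 2_800_000
gva_cambridge = 49_000   # £ per person
gva_gm = 25_000          # £ per person

# Doubling Cambridge by moving people from GM: each mover's GVA
# rises from the GM average to the Cambridge average.
uplift = cambridge_pop * (gva_cambridge - gva_gm)
print(f"national GVA gain: £{uplift / 1e9:.1f} bn")   # ≈ £3 bn

# The equivalent productivity rise spread across all of Greater Manchester
pct_rise = 100 * uplift / (gm_pop * gva_gm)
print(f"equivalent GM productivity rise: {pct_rise:.1f}%")
```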

That’s the importance of trying to raise the productivity of big cities – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – for example in better public transport, more R&D and support for innovative businesses, providing the skills that innovative businesses need, by addressing poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people they need to work for them somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here is taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The data is for local authority areas on a workplace basis, but populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At the other limit, one could ask what would happen if you doubled the population of the whole county of Cambridgeshire (650,000). As the GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result – it would increase GVA by £3.15 bn, the same as a 4.2% increase in GM’s productivity.

Of course, this poses another question – why the prosperity of Cambridge city doesn’t spill over very far into the rest of the county. Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
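This kind of fit can be sketched with scipy. The sketch below runs on synthetic data with a known break (the ONS series itself isn’t reproduced here), and the function and parameter names are illustrative, not the ones used for the actual figure.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_regime_growth(t, level, g1, g2, t_break):
    """Two periods of exponential growth, continuous at the break.
    level: value at t_break; g1, g2: growth rates before/after."""
    return np.where(t < t_break,
                    level * np.exp(g1 * (t - t_break)),
                    level * np.exp(g2 * (t - t_break)))

# Synthetic quarterly series with a known break at 2005.0:
# ~2% annual growth before, ~0.5% after, plus mild noise.
rng = np.random.default_rng(1)
t = np.arange(1997.0, 2023.0, 0.25)
y = two_regime_growth(t, 100.0, 0.02, 0.005, 2005.0)
y = y * (1 + 0.004 * rng.standard_normal(t.size))

# Four free parameters; the initial guess for the break is
# deliberately placed at 2008, the "conventional" date.
popt, _ = curve_fit(two_regime_growth, t, y,
                    p0=[90.0, 0.015, 0.01, 2008.0])
level, g1, g2, t_break = popt
print(f"fitted break year: {t_break:.1f}")
```

On this synthetic series the fit recovers the true break near 2005 even when started from 2008, which is the point of the exercise: the break location is identified by the data, not by the initial guess.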


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals to the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the Global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits constraining the year of the break point, and calculated at each point the normalised chi-squared (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.


Normalised chi-squared – i.e. the sum of the squares of the differences between the productivity data and the two-exponential model, for fits where the time of break is constrained.

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.
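This constrained-fit profile can be sketched in the same way: hold the break year fixed, refit the remaining three parameters, and record the normalised chi-squared. Again this runs on illustrative synthetic data with a true break at 2005, and the names are my own, not those of the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(t, level, g1, g2, tb):
    """Piecewise exponential growth, continuous at the break year tb."""
    return np.where(t < tb,
                    level * np.exp(g1 * (t - tb)),
                    level * np.exp(g2 * (t - tb)))

# Synthetic quarterly series with a known break at 2005.0
rng = np.random.default_rng(1)
t = np.arange(1997.0, 2023.0, 0.25)
y = growth(t, 100.0, 0.02, 0.005, 2005.0)
y = y * (1 + 0.004 * rng.standard_normal(t.size))

def norm_chi2(tb):
    """Fix the break year, refit (level, g1, g2), and return the
    normalised chi-squared: sum of squared residuals / number of points."""
    model = lambda t_, level, g1, g2: growth(t_, level, g1, g2, tb)
    popt, _ = curve_fit(model, t, y, p0=[100.0, 0.01, 0.01])
    resid = y - model(t, *popt)
    return (resid ** 2).sum() / t.size

years = np.arange(2000, 2013)
chi2 = np.array([norm_chi2(tb) for tb in years])
print("best constrained break year:", years[np.argmin(chi2)])
```

The profile bottoms out at the true break and rises smoothly either side of it, mirroring the behaviour described above for the real data.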

Can we quantify this further and attach a probability distribution to the year of break? I don’t think so using this approach – we have no reason to suppose that the deviations between model and fit are drawn from a Gaussian, which would be the assumption underlying traditional approaches to ascribing confidence limits to the fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those for further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000’s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £700m on an exascale computer. It should specify that the processor design comes from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits supply of computing power from hardware improvements, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. But it’s worth comparing this with the precedent of the post GFC bank nationalisations, at a similar scale.

3. The UK should not… (& almost certainly not possible in any case)

The UK should not attempt to create a UK based manufacturing capability in leading edge logic chips. This would need to be done by one of the three international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a multi-billion dollar subsidy race.

Moreover, decades of neglect of semiconductor manufacturing probably means the UK doesn’t, in any case, have the skills to operate a leading edge fab.

4. The UK should not…

The UK should not attempt to create UK based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy, applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?

What should the UK do about semiconductors? Part 3: towards a UK Semiconductor Strategy

We are currently waiting for the UK government to publish its semiconductor strategy. As context for such a strategy, my previous two blogposts have summarised the global state of the industry:

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry

Here I consider what a realistic and useful UK semiconductor strategy might include.

To summarise the global context, the essential nations in advanced semiconductor manufacturing are Taiwan, Korea and the USA for making the chips themselves. In addition, Japan and the Netherlands are vital for crucial elements of the supply chain, particularly the equipment needed to make chips. China has been devoting significant resource to develop its own semiconductor industry – as a result, it is strong in all but the most advanced technologies for chip manufacture, but is vulnerable to being cut off from crucial elements of the supply chain.

The technology of chip manufacture is approaching maturity; the very rapid rates of increase in computing power we saw in the 1980s and 1990s, associated with a combination of Moore’s law and Dennard scaling, have significantly slowed. At the technology frontier we are seeing diminishing returns from the ever larger investments in capital and R&D that are needed to maintain advances. Further improvements in computer performance are likely to put more premium on custom designs for chips optimised for specific applications.

The UK’s position in semiconductor manufacturing is marginal in a global perspective, and not a relative strength in the context of the overall UK economy. There is actually a slightly stronger position in the wider supply chain than in chip manufacture itself, but the most significant strength is not in manufacture, but design, with ARM having a globally significant position and newcomers like Graphcore showing promise.

The history of the global semiconductor industry is a history of major government interventions coupled with very large private sector R&D spending, the latter driven by dramatically increasing sales. The UK essentially opted out of the race in the 1980’s, since when Korea and Taiwan have established globally leading positions, and China has become a fast expanding new entrant to the industry.

The more difficult geopolitical environment has led to a return of industrial strategy on a huge scale, led by the USA’s CHIPS Act, which appropriates more than $50 billion over 5 years to reestablish its global leadership, including $39 billion on direct subsidies for manufacturing.

How should the UK respond? What I’m talking about here is the core business of manufacturing semiconductor devices and the surrounding supply chain, rather than information and communication technology more widely. First, though, let’s be clear about what the goals of a UK semiconductor strategy could be.

What is a semiconductor strategy for?

A national strategy for semiconductors could have multiple goals. The UK Science and Technology Framework identifies semiconductors as one of five critical technologies, judged against criteria including their foundational character, market potential, as well as their importance for other national priorities, including national security.

It might be helpful to distinguish two slightly different goals for the semiconductor strategy. The first is the question of security, in the broadest sense, prompted by the supply problems that emerged in the pandemic, and heightened by the growing realisation of the importance and vulnerability of Taiwan in the global semiconductor industry. Here the questions to ask are, what industries are at risk from further disruptions? What are the national security issues that would arise from interruptions in supply?

The government’s latest refresh of its integrated foreign and defence strategy promises to “ensure the UK has a clear route to assured access for each [critical technology], a strong voice in influencing their development and use internationally, a managed approach to supply chain risks, and a plan to protect our advantage as we build it.” It reasserts the “own, collaborate, access” framework, a model introduced in the previous Integrated Review.

This framework is a welcome recognition of the fact that the UK is a medium-sized country which can’t do everything; in order to have access to the technology it needs, it must in some cases collaborate with friendly nations, and in others access technology through open global markets. But it’s worth asking what exactly is meant by “own”. This is defined in the Integrated Review thus: “Own: where the UK has leadership and ownership of new developments, from discovery to large-scale manufacture and commercialisation.”

In what sense does the nation ever own a technology? There are still a few cases where wholly state owned organisations retain both a practical and legal monopoly on a particular technology – nuclear weapons remain the most obvious example. But technologies are largely controlled by private sector companies with complex, and often global, ownership structures. We might think that the technologies of semiconductor integrated circuit design that ARM developed are British, because the company is based in Cambridge. But it’s owned by a Japanese conglomerate, which has a great deal of latitude in what it does with it.

Perhaps it is more helpful to talk about control than ownership. The UK state retains a certain amount of control of technologies owned by companies with a substantial UK presence – it has been able in effect to block the purchase of the Newport Wafer Fab by the Chinese owned company Nexperia. But this new assertiveness is a very recent phenomenon; until very recently UK governments have been entirely relaxed about the acquisition of technology companies by overseas companies. Indeed, in 2016 ARM’s acquisition by Softbank was welcomed by the then PM, Theresa May, as being in the UK’s national interest, and a vote of confidence in post-Brexit Britain. The government has taken new powers to block acquisitions of companies through the National Security and Investment Act 2021, but this can only be done on grounds of national security.

The second goal of a semiconductor strategy is as part of an effort to overcome the UK’s persistent stagnation of economic productivity, to “generate innovation-led economic growth”, in the words of a recent Government response to a BEIS Select Committee report. As I have written about at length, the UK’s productivity problem is serious and persistent, so there’s certainly a need to identify and support high value sectors with the potential for growth. There is a regional dimension here, recognised in the government’s aspiration for the strategy to create “high paying jobs throughout the UK”. So it would be entirely appropriate for a strategy to support the existing cluster in the Southwest around Bristol and into South Wales, as well as to create new clusters where there are strengths in related industry sectors.

The economies of Taiwan and Korea have been transformed by their very effective deployment of an active industrial strategy to take advantage of an industry at a time of rapid technological progress and expanding markets. There are two questions for the UK now. Has the UK state (and the wider economic consensus in the country) overcome its ideological aversion to active industrial strategy on the East Asian model to intervene at the necessary scale? And, would such an intervention be timely, given where semiconductors are in the technology cycle? Or, to put it more provocatively, has the UK left it too late to capture a significant share of a technology that is approaching maturity?

What, realistically, can the UK do about semiconductors?

What interventions are possible for the UK government in devising a semiconductor strategy that addresses these two goals – of increasing the UK’s economic and military security by reducing its vulnerability to shocks in the global semiconductor supply chain, and of improving the UK’s economic performance by driving innovation-led economic growth? There is a menu of options, and what the government chooses will depend on its appetite for spending money, its willingness to take assets onto its balance sheet, and how much it is prepared to intervene in the market.

Could the UK establish the manufacturing of leading edge silicon chips? This seems implausible. This is the most sophisticated manufacturing process in the world, enormously capital intensive and drawing on a huge amount of proprietary and tacit knowledge. The only way it could happen is if one of the three companies currently at or close to the technology frontier – Samsung, Intel or TSMC – could be enticed to establish a manufacturing plant in the UK. What would be in it for them? The UK doesn’t have a big market, and its labour market is high cost yet lacking in the necessary skills, so its only lever would be large direct subsidies.

In any case, the attention of these companies is elsewhere. TSMC is building a new plant in Arizona, at a cost of $40 billion, while Samsung’s new plant in Texas is costing $25 billion, with the US government using some of the CHIPS act money to subsidise these investments. Despite Intel’s well-reported difficulties, it is planning significant investment in Europe, supported by inducements from EU and its member states under the EU Chips act. Intel has committed €12 billion to expanding its operations in Ireland and €17 billion for a new fab in the existing semiconductor cluster in Saxony, Germany.

From the point of view of security of supply, it’s not just chips from the leading edge that are important; for many applications, in automobiles, defence and industrial machinery, legacy chips produced by processes that are no longer at the leading edge are sufficient. In principle establishing manufacturing facilities for such legacy chips would be less challenging than attempting to establish manufacturing at the leading edge. However, here, the economics of establishing new manufacturing facilities is very difficult. The cost of producing chips is dominated by the need to amortise the very large capital cost of setting up a fab, but a new plant would be in competition with long-established plants whose capital cost is already fully depreciated. These legacy chips are a commodity product.

So in practice, our security of supply can only be assured by reliance on friendly countries. It would have been helpful if the UK had been able to participate in the development of a European strategy to secure semiconductor supply chains, as Hermann Hauser has argued for. But what does the UK have to contribute to the creation of more resilient supply chains, localised in networks of reliably friendly countries?

The UK’s key asset is its position in chip design, with ARM as the anchor firm. But, as a firm based on intellectual property rather than the big capital investments of fabs and factories, ARM is potentially footloose, and as we’ve seen, it isn’t British by ownership. Rather it is owned and controlled by a Japanese conglomerate, which needs to sell it to raise money, and will seek to achieve the highest return from such a sale. After the proposed sale to Nvidia was blocked, the likely outcome now is a flotation on the US stock market, where the typical valuations of tech companies are higher than they are in the UK.

The UK state could seek to maintain control over ARM by the device of a “Golden Share”, as it currently does with Rolls-Royce and BAE Systems. I’m not sure what the mechanism for this would be – I would imagine that the only surefire way of doing this would be for the UK government to buy ARM outright from Softbank in an agreed sale, and then subsequently float it itself with the golden share in place. I don’t suppose this would be cheap – the agreed price for the thwarted Nvidia takeover was $66 billion. The UK government would then attempt to recoup as much of the purchase price as possible through a subsequent flotation, but the presence of the golden share would presumably reduce the market value of the remaining shares. Still, the UK government did spend £46 billion nationalising a bank.

What other levers does the UK have to consolidate its position in chip design? Intelligent use of government purchasing power is often cited as an ingredient of a successful industrial policy, and here there is an opportunity. The government made the welcome announcement in the Spring Budget that it would commit £900m to build an exascale computer to create a sovereign capability in artificial intelligence. The procurement process for this facility should be designed to drive innovation in the design, by UK companies, of specialised processing units for AI with lower energy consumption.

A strong public R&D base is a necessary – but not sufficient – condition for an effective industrial strategy in any R&D intensive industry. As a matter of policy, the UK ran down its public sector research effort in mainstream silicon microelectronics, in response to the UK’s overall weak position in the industry. The Engineering and Physical Sciences Research Council announces on its website that: “In 2011, EPSRC decided not to support research aimed at miniaturisation of CMOS devices through gate-length reduction, as large non-UK industrial investment in this field meant such research would have been unlikely to have had significant national impact.” I don’t think this was – or is – an unreasonable policy given the realities of the UK’s global position. The UK maintains academic research strength in areas such as III-V semiconductors for optoelectronics, 2D materials such as graphene, and organic semiconductors, to give a few examples.

Given the sophistication of state of the art microelectronic manufacturing technology, for R&D to be relevant and translatable into commercial products it is important that open access facilities are available to allow the prototyping of research devices, and with pilot scale equipment to demonstrate manufacturability and facilitate scale-up. The UK doesn’t have research centres on the scale of Belgium’s IMEC, or Taiwan’s ITRI, and the issue is whether, given the shallowness of the UK’s industry base, there would be a customer base for such a facility. There are a number of university facilities focused on supporting academic researchers in various specialisms – at Glasgow, Manchester, Sheffield and Cambridge, to give some examples. Two centres are associated with the Catapult Network – The National Printable Electronics Centre in Sedgefield, and the Compound Semiconductor Catapult in South Wales.

This existing infrastructure is certainly insufficient to support an ambition to expand the UK’s semiconductor sector. But a decision to enhance this research infrastructure will need a careful and realistic evaluation of what niches the UK could realistically hope to build some presence in, building on areas of existing UK strength, and understanding the scale of investment elsewhere in the world.

To summarise, the UK must recognise that, in semiconductors, it is currently in a relatively weak position. For security of supply, the focus must be on staying close to like-minded countries like our European neighbours. For the UK to develop its own semiconductor industry further, the emphasis must be on finding and developing particular niches where the UK does have some existing strength to build on, and there is the prospect of rapidly growing markets. And the UK should look after its one genuine area of strength, in chip design.

Four lessons for industrial strategy

What should the UK do about semiconductors? Another tempting, but unhelpful, answer is “I wouldn’t start from here”. The UK’s current position reflects past choices, so, to conclude, it’s worth drawing some more general lessons about industrial strategy from the history of semiconductors in the UK, and globally.

1. Basic research is not enough

The historian David Edgerton has observed that it is a long-running habit of the UK state to use research policy as a substitute for industrial strategy. Basic research is relatively cheap, compared to the expensive and time-consuming process of developing and implementing new products and processes. In the 1980’s, it became conventional wisdom that governments should not get involved in applied research and development, which should be left to private industry, and, as I recently discussed at length, this has profoundly shaped the UK’s research and development landscape. But excellence in basic research has not produced a competitive semiconductor industry.

The last significant act of government support for the semiconductor industry in the UK was the Alvey programme of the 1980s. The programme was not without some technical successes, but it clearly failed in its strategic goal of keeping the UK semiconductor industry globally competitive. As the official evaluation of the programme concluded in 1991 [1]: “Support for pre-competitive R&D is a necessary but insufficient means for enhancing the competitive performance of the IT industry. The programme was not funded or equipped to deal with the different phases of the innovation process capable of being addressed by government technology policies. If enhanced competitiveness is the goal, either the funding or scope of action should be commensurate, or expectations should be lowered accordingly”.

But the right R&D institutions can be useful; the experience of both Japan and the USA shows the value of industry consortia – but this only works if there is already a strong, R&D intensive industry base. The creation of TSMC shows that it is possible to create a global giant from scratch, and this emphasises the role of translational research centres, like Taiwan’s ITRI and Belgium’s IMEC. But to be effective in creating new businesses, such centres need to have a focus on process improvement and manufacturing, as well as discovery science.

2. Big is beautiful in deep tech

The modern semiconductor industry is the epitome of “Deep Tech”: hard innovation, usually in the material or biological domains, demanding long term R&D efforts and large capital investments. For all the romance of garage-based start-ups, in a business that demands up-front capital investments in the tens of billions of dollars and annual research budgets on the scale of medium-sized nation states, one needs serious, large scale organisations to succeed.

The ownership and control of these organisations does matter. From a national point of view, it is important to have large firms anchored to the territory, whether by ownership or by significant capital investment that would be hard to undo, so ensuring the permanence of such firms is the legitimate business of government. Naturally, big firms often start as fast growing small ones, and the UK should make more effort to hang on to companies as they scale up.

3. Getting the timing right in the technology cycle

Technological progress is uneven – at any given time, one industry may be undergoing very dramatic technological change, while other sectors are relatively stagnant. There may be a moment when the state of technology promises a period of rapid development, and there is a matching market with the potential for fast growth. Firms that have the capacity to invest and exploit such “windows of opportunity”, to use David Sainsbury’s phrase, will be able to generate and capture a high and rising level of added value.

The timing of interventions to support such firms is crucial, and undoubtedly not easy, but history shows us that nations that are able to offer significant levels of strategic support at the right stage can see a material impact on their economic performance. The recent rapid economic growth of Korea and Taiwan is a case in point. These countries have gone beyond catch-up economic growth, to equal or surpass the UK, reflecting their reaching the technological frontier in high value sectors such as semiconductors. Of course, in these countries, there has been a much closer entanglement between the state and firms than UK policy makers are comfortable with.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, and then expressed per capita.
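The construction described in the caption – anchoring a PPP GDP level in a base year, chaining real growth rates, and dividing by population – can be sketched in a few lines. The figures below are hypothetical placeholders, not the actual IMF data:

```python
# Sketch of the series construction: anchor a GDP-at-PPP level in a base
# year, then chain annual real growth rates forwards, and express per capita.
# All numbers here are illustrative, not actual IMF figures.

base_year = 2019
gdp_ppp_2019 = 1_000.0  # GDP at PPP, billions of international dollars (hypothetical)
real_growth = {2020: -0.02, 2021: 0.04, 2022: 0.03}  # annual real growth rates (hypothetical)
population = {2019: 50.0, 2020: 50.2, 2021: 50.4, 2022: 50.6}  # millions (hypothetical)

series = {base_year: gdp_ppp_2019}
for year in sorted(real_growth):
    series[year] = series[year - 1] * (1 + real_growth[year])

# Per-capita values, in international dollars per person
per_capita = {y: series[y] * 1e9 / (population[y] * 1e6) for y in series}
```

The same chaining logic works backwards from the base year (dividing by growth factors) for years before 2019.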

4. If you don’t choose sectors, sectors will choose you

In the UK, so-called “vertical” industrial strategy, where explicit choices are made to support specific sectors, has long been out of favour. Making choices between sectors is difficult, and being perceived to have made the wrong choices damages the reputation of individuals and institutions. But even in the absence of an explicitly articulated vertical industrial strategy, policy choices will have the effect of favouring one sector over another.

In the 1990s and 2000s, the UK chose oil and gas and financial services over semiconductors, or indeed advanced manufacturing more generally. Our current economic situation reflects, in part, that choice.

[1] Evaluation of the Alvey Programme for Advanced Information Technology. Ken Guy, Luke Georghiou, et al. HMSO for DTI and SERC (1991)

What should the UK do about semiconductors? Part 2: the past and future of the global semiconductor industry

This is the second post in a series of three, considering the background to the forthcoming UK Government Semiconductor Strategy.

In the first part, The UK’s place in the semiconductor world, I discussed the new global environment, in which a tenser geopolitical situation has revived a policy climate around the world that is much more favourable to large scale government interventions in the industry. I also sketched the global state of the semiconductor industry and tried to quantify the UK’s position in the semiconductor world.

Here, I discuss the past and future of semiconductors, mentioning some of the important past interventions by governments around the world that have shaped the current situation, and I speculate on where the industry might be going in the future.

Finally, in the third part, I’ll ask where this leaves the UK, and speculate on what its semiconductor strategy might seek to achieve.

Active industrial policy in the history of semiconductors

The history of the global semiconductor industry involves a dance between governments around the world and private companies. In contrast to the conviction of the predominantly libertarian ideology of Silicon Valley, the industry wouldn’t have come into existence and developed in the form we now know without a series of major, and expensive, interventions by governments across the world.

But there is a tendency, to caricature the claims of some on the left, to suggest that it was governments that created the consumer electronic products we all rely on, while private industry has simply collected the profits. This view doesn’t recognise the massive efforts private industry has made, spending huge sums on the research and development needed to perfect manufacturing processes and bring them to market. Taking the USA alone, in 2022 the US government spent $6 billion on semiconductor R&D, compared to private industry’s $50.2 billion.

The semiconductor industry emerged in the 1960s in the USA, and in its early days more than half of its sales were to the US government. This was an early example of what we would now call “mission driven” innovation, motivated by a “moonshot project”. The “moonshot project” of the 1960s was driven by a very concrete goal – to be able to drop a half-tonne payload anywhere on the earth’s surface, with a precision measured in hundreds of meters.

Semiconductors were vital to achieve this goal – the first mass-produced computers based on integrated circuits were developed as the guidance systems of Minuteman intercontinental ballistic missiles. Of course, despite its military driving force, this “moonshot” produced important spin-offs – the development of space travel to the point at which a series of manned missions to the moon were possible, and increasing civilian applications of the much cheaper, more powerful and more reliable computers that solid-state electronics made possible.

The USA is where the semiconductor industry started, but it played a central role in three East Asian development miracles. The first to exploit this new technology was Japan. While the USA was exploiting the military possibilities of semiconductors, Japan focused on their application in consumer goods.

By the early 1980’s, though, Japanese companies were producing memory chips more efficiently than the USA, while Nikon took a leading position in the photolithography equipment used to make integrated circuits. In part the Japanese competitive advantage was driven by their companies’ manufacturing prowess and their attentiveness to customer needs, but the US industry complained, not entirely without justification, that their success was built on the theft of intellectual property, access to unfairly cheap capital, the protection of home markets by trade barriers, and government funded research consortia bringing together leading companies. These are recurring ingredients of industrial policy as executed by East Asian developmental states, deployed successfully first in Taiwan and Korea, and now being applied on a continental scale by China.

An increasingly paranoid USA’s response to this threat from Japan to its technological supremacy in semiconductors was to adopt some industrial strategy measures itself. The USA relaxed its stringent anti-trust laws to allow US companies to collaborate in R&D through a consortium called SEMATECH, half funded by the federal government. Sematech was founded in 1987, and in the first 5 years of its operation was supported by $500 m of Federal funding, leading to some new self-confidence for the US semiconductor industry.

Meanwhile both Korea and Taiwan had identified electronics as a key sector through which to pursue their export-focused development strategies. For Taiwan, a crucial institution was the Industrial Technology Research Institute, in Hsinchu. Since its foundation in 1973, ITRI had been instrumental in supporting Taiwan’s industrial base in moving closer to the technology frontier.

In 1985 the US-based semiconductor executive Morris Chang was persuaded to lead ITRI, using this position to create a national semiconductor industry, in the process spinning out the Taiwan Semiconductor Manufacturing Company. TSMC was founded as a pure-play foundry, contract manufacturing integrated circuits designed by others and focusing on optimising manufacturing processes. This approach has been enormously successful, and has led TSMC to its globally leading position.

Over the last decade, China has been aggressively promoting its own semiconductor industry. The 2015 “Made in China 2025” plan identified semiconductors as a key sector for the development of a high tech manufacturing sector, setting the target of 70% self-sufficiency by 2025, and a dominant position in global markets by 2045.

Cheap capital for developing semiconductor manufacturing was provided through the state-backed National Integrated Circuit Industry Investment Fund, amounting to some $47 bn (though it seems the record of this fund has been marred by corruption allegations). The 2020 directive “Several Policies for Promoting the High-quality Development of the Integrated Circuit Industry and Software Industry in the New Era” reinforced these goals with a package of measures including tax breaks, soft loans, R&D and skills policies.

While the development of the semiconductor industry in Taiwan and Korea was generally welcomed by policy-makers in the West, a changing geopolitical climate has led to much more anxiety about China’s aspirations. The USA has responded by an aggressive programme of bans on the exports of semiconductor manufacturing tools, such as high end lithography equipment, to China, and has persuaded its allies in Japan and the Netherlands to follow suit.

Industrial policy in support of the semiconductor industry hasn’t been restricted to East Asia. In Europe a key element of support has been the development of research institutes bringing together consortia of industries and academia; perhaps the most notable of these is IMEC in Belgium, while the cluster of companies that formed around the electronics company Philips in Eindhoven now includes the dominant player in equipment for extreme UV lithography, ASML.

In Ireland, policies in support of inward investment, including both direct and indirect financial inducements, and the development of institutions to support skills innovation, persuaded Intel to base their European operations in Ireland. This has resulted in this small, formerly rural, nation becoming the second largest exporter of integrated circuits in Europe.

In the UK, government support for the semiconductor industry has gone through three stages. In the postwar period, the electronics industry was a central part of the UK’s Cold War “Warfare State”, with government institutions like the Royal Signals and Radar Establishment at Malvern carrying out significant early research in compound semiconductors and optoelectronics.

The second stage saw a more conscious effort to support the industry. In the mid-to-late 1970’s, a realisation of the potential importance of integrated circuits coincided with a more interventionist Labour government. The government, through the National Enterprise Board, took a stake in a start-up making integrated circuits in South Wales, Inmos. The 1979 Conservative government was much less interventionist than its predecessor, but two important interventions were made in the early 1980’s.

The first was the Alvey Programme, a joint government/private sector research programme launched in 1983. This was an ambitious programme of joint industry/government research, worth £350m, covering a number of areas in information and communication technology. The results of this programme were mixed; it played a significant role in the development of mobile telephony, and laid some important foundations for the development of AI and machine learning. In semiconductors, however, the companies it supported, such as GEC and Plessey, were unable to develop a lasting competitive position in semiconductor manufacturing and no longer survive.

The second intervention arose from a public education campaign run by the BBC; a small Cambridge based microcomputer company, Acorn, won the contract to supply BBC-branded personal computers in support of this programme. The large market created in this way later gave Acorn the headroom to move into the workstation market with reduced instruction set computing architectures, from which was spun out the microprocessor design house ARM.

In the third stage, the UK government adopted a market fundamentalist position. This involved a withdrawal from government support for applied research and the run-down of government laboratories like RSRE, and a position of studied indifference about the acquisition of UK technology firms by overseas rivals. Major UK electronics companies, such as GEC and Plessey, collapsed following some ill-judged corporate misadventures. Inmos was sold, first to Thorn, then to the Franco-Italian group, SGS-Thomson. Inmos left a positive legacy, with many who had worked there going on to participate in a Bristol based cluster of semiconductor design houses. The Inmos manufacturing site survives as Newport Wafer Fab, currently owned by the Dutch-based, Chinese-owned company Nexperia, though its future is uncertain following a UK government ruling that Nexperia should divest its shareholding on national security grounds.

This focus on the role of interventions by governments across the world at crucial moments in the development of the industry shouldn’t overshadow the huge investments in R&D made by private companies around the world. A sense of the scale of these investments is given by the figure below.

R&D expenditure in the microelectronics industry, showing Intel’s R&D expenditure, and a broader estimate of world microelectronics R&D including semiconductor companies and equipment manufacturers. Data from the “Are Ideas Getting Harder to Find?” dataset on Chad Jones’s website. Inflation corrected using the US GDP deflator.

The exponential increase in R&D spending up to 2000 was driven by a similarly exponential increase in worldwide semiconductor sales. In this period, there was a remarkable virtuous circle of increasing sales, leading to increasing R&D, leading in turn to very rapid technological developments, driving further sales growth. In the last two decades, however, growth in both sales and in R&D spending has slowed down.


Global semiconductor sales in billions of dollars. Plot from “Quantum Computing: Progress and Prospects” (2019), National Academies Press, which uses data from the Semiconductor Industry Association.

Possible futures for the semiconductor industry

The rate of technological progress in integrated circuits between 1984 and 2003 was remarkable and unprecedented in the history of technology. This drove an exponential increase in microprocessor computing power, which grew by more than 50% a year. This growth arose from two factors. As is well known, the number of transistors on a silicon chip grew exponentially, as predicted by Moore’s Law. This was driven by many unsung, but individually remarkable, technological innovations in lithography (to name just a couple of examples, phase shift lithography and chemically amplified resists), allowing smaller and smaller features to be manufactured.

The second factor is less well known: through a phenomenon known as Dennard scaling, transistors operate faster as they get smaller. Dennard scaling reached its limit around 2004, as the heat generated by microprocessors became a limiting factor. After 2004, microprocessor computing power increased at a slower rate, driven by increasing the number of cores and parallelising operations, resulting in rates of increase of around 23% a year. This approach itself ran into diminishing returns after 2011.
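One way to make these growth rates concrete is to convert them into doubling times, solving (1+g)^t = 2 for t. A minimal sketch, using the approximate rates quoted above:

```python
import math

# Doubling time implied by a compound annual growth rate g: solve (1+g)^t = 2,
# giving t = ln(2) / ln(1+g).
def doubling_time(annual_growth):
    return math.log(2) / math.log(1 + annual_growth)

# The two growth regimes described in the text (approximate figures):
pre_2004 = doubling_time(0.50)   # Dennard-scaling era, ~50% per year
post_2004 = doubling_time(0.23)  # multicore era, ~23% per year
# pre_2004 is roughly 1.7 years; post_2004 roughly 3.3 years
```

So the end of Dennard scaling roughly doubled the time taken for processor performance to double – a large difference once compounded over a decade.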

Currently we are seeing continued reductions in feature sizes, together with new transistor designs, such as finFETs, which in effect allow more transistors to be fitted into a given area by building them side-on. But further increases in computer power are increasingly being driven by optimising processor architectures for specific tasks, for example graphical processing units and specialised chips for AI, and by simply multiplying the number of microprocessors in the server farms that underlie cloud computing.

Slowing growth in computer power. The growth in processor performance since 1988. Data from figure 1.1 in Computer Architecture: A Quantitative Approach (6th edn) by Hennessy & Patterson.

It’s remarkable that, despite the massive increase in microprocessor performance since the 1970’s, and major innovations in manufacturing technology, the underlying mode of operation of microprocessors remains the same. This is known by the shorthand of CMOS, for Complementary Metal Oxide Semiconductor. Logic gates are constructed from complementary pairs of field effect transistors consisting of a channel in heavily doped silicon, whose conductance is modulated by the application of an electric field across an insulating oxide layer from a metal gate electrode.

CMOS isn’t the only way of making a logic gate, and it’s not obvious that it is the best one. One severe limitation on our computing is its energy consumption. This matters at a micro level; the heat generated by a laptop or mobile phone is very obvious, and it was problems of heat dissipation that underlay the slowdown in the growth in microprocessor power around 2004. It’s also significant at a global level, where the energy used by cloud computing is becoming a significant share of total electricity consumption.

There is a physical lower limit to the energy that computing uses – this is the Landauer limit on the energy cost of a single logical operation, a consequence of the second law of thermodynamics. Our current technology consumes more than three orders of magnitude more energy than is theoretically possible, so there is room for improvement. Somewhere in the universe of technologies that don’t exist, but are physically possible, lies a superior computing technology to CMOS.
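The Landauer limit is easy to put a number on: the minimum energy dissipated in erasing one bit at temperature T is kT ln 2. A quick back-of-envelope comparison, using an assumed, order-of-magnitude figure of ~10 attojoules for a switching operation in current devices (a hypothetical illustration, not a measured value):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to erase one bit, kT ln 2
landauer_J = k_B * T * math.log(2)   # roughly 2.9e-21 J at room temperature

# Assumed, illustrative energy for a single switching operation in
# current CMOS (hypothetical order-of-magnitude figure, not measured data)
assumed_device_J = 1e-17

# Ratio of actual to theoretical minimum: over a thousand, i.e. more than
# three orders of magnitude of headroom, as the text notes
ratio = assumed_device_J / landauer_J
```

The exact ratio depends on the device and workload, but on any reasonable estimate the gap between practice and the thermodynamic floor is several orders of magnitude.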

Many alternative forms of computing have been tried out in the laboratory. Some involve materials other than silicon: compound semiconductors or new forms of carbon like nanotubes and graphene. In some, the physical embodiment of information is not electric charge, but spin. The idea of using individual molecules as circuit elements – molecular electronics – has a long and somewhat chequered history. None of these approaches has yet made a significant commercial impact; incumbent technologies are always hard to displace. CMOS and its related technologies amount to a deep nanotechnology implemented at a massive scale; the huge investment in this technology has in effect locked us into a particular technology path.

There are alternative, non-semiconductor based, computing paths that are worth mentioning, because they may become important in the future. One is to copy biology; our own brains deliver enormous computing power at remarkably low energy cost, with an architecture that is very different from the von Neumann architecture that human-built computers follow, and a basic unit that is molecular. Various radical approaches to computing take some inspiration from biology, whether that is the new architectures for CMOS that underlie neuromorphic computing, or entirely molecular approaches based on DNA.

Quantum computing, on the other hand, offers the potential for another exponential leap forward in computing power – in principle. Many practical barriers remain before this potential can be turned into practice, however, and this is a topic for another discussion. Suffice it to say that, on a timescale of a decade or so, quantum computers will not replace conventional computers for anything more than some niche applications, and in any case they are likely to be deployed in tandem with conventional high performance computers, as accelerators for specific tasks, rather than as general purpose computers.

Finally, I should return to the point that semiconductors aren’t just valuable for computing; the field of power electronics is likely to become more and more important as we move to a net zero energy system. We will need a much more distributed and flexible energy grid to accommodate decentralised renewable sources of electricity, and this needs solid-state power electronics capable of handling very high voltages and currents – think of replacing house-size substations by suitcase-size solid-state transformers. Widespread uptake of electric vehicles and the need for widely available rapid charging infrastructures will place further demands on power electronics. Silicon is not suitable for these applications, which require wide-bandgap semiconductors such as diamond, silicon carbide and other compound semiconductors.

Sources

Chip War: The Fight for the World’s Most Critical Technology, by Chris Miller, is a great overview of the history of this technology.

Semiconductors in the UK: Searching for a strategy. Geoffrey Owen, Policy Exchange, 2022. Very good on the history of the UK industry.

To Every Thing There is a Season – lessons from the Alvey Programme for Creating an Innovation Ecosystem for Artificial Intelligence, by Luke Georghiou. Reflections on the Alvey Programme by one of the researchers who carried out its official evaluation.

Are Ideas Getting Harder to Find?, Bloom, Jones, Van Reenen and Webb. American Economic Review (2020). An influential paper on diminishing rates of return on R&D, taking the semiconductor industry as a case study.

Quantum Computing: Progress and Prospects (2019), National Academies Press.

Up next: What should the UK do about semiconductors? Part 3: towards a UK semiconductor strategy

Science and innovation policy for hard times

This is the concluding section of my 8-part survey of the issues facing the UK’s science and innovation system, An Index of Issues in UK Science and Innovation Policy.

The earlier sections were:
1. The Strategic Context
2. Some Overarching Questions
3. The Institutional Landscape
4. Science priorities: who decides?
5. UK Research and Innovation
6. UK Government Departmental Research
7. Horizon Europe (and what might replace it) and ARIA

8.1. A “science superpower”? Understanding the UK’s place in the world.

The idea that the UK is a “science superpower” has been a feature of government rhetoric for some time, most recently repeated in the Autumn Statement speech. What might this mean?

If we measure superpower status by the share of world resources devoted to R&D (both public and private) by single countries, there are only two science superpowers today – the USA and China, with a 30% and 24% share of science spending (OECD MSTI figures for 2019 adjusted for purchasing power parity, including all OECD countries plus China, Taiwan, Russia, Singapore, Argentina and Romania). If we take the EU as a single entity, that might add a third, with a 16% share (2019 figure, but excluding UK). The UK’s share is 2.5% – a respectable medium-sized science power, smaller than Japan (8.2%) and Korea (4.8%), and between France (3.1%) and Canada (1.4%).

It’s often argued, though, that the UK achieves better results from a given amount of science investment than other countries. The primary outputs of academic science are scientific papers, and we can make an estimate of a paper’s significance by asking how often it is cited by other papers. So another measure of the UK’s scientific impact – the most flattering to the UK, it turns out – is to ask what fraction of the world’s most highly cited papers originate from the UK.

By this measure, the two leading scientific superpowers are, once again, the USA and China, with 32% and 24% shares respectively; on this measure the EU collectively, at 29%, does better than China. The UK scores well by this measure, at 13.4%, doing substantially better than higher spending countries like Japan (3.1%) and Korea (2.7%).

A strong science enterprise – however measured – doesn’t necessarily by itself translate into wider kinds of national and state power. Before taking the “science superpower” rhetoric seriously, we need to ask how these measures of scientific activity and scientific impact translate into other measures of power, hard or soft.

Even though measuring the success of our academic enterprise by its impact on other academics may seem somewhat self-referential, it does have some consequences in supporting the global reputation of the UK’s universities. This attracts overseas students, in turn bringing three benefits: a direct and material economic contribution to the balance of payments, worth £17.6 bn in 2019; a substantial subsidy to the research enterprise itself; and, for those students who stay, a source of talented immigrants who subsequently contribute positively to the economy.

The transnational nature of science is also significant here; having a strong national scientific enterprise provides a connection to this wider international network and strengthens the nation’s ability to benefit from insight and discoveries made elsewhere.

But how effective is the UK at converting its science prowess into hard economic power? One measure of this is the share of world economic value added in knowledge and technology intensive businesses. According to the USA’s NSF, the UK’s share of value added in this set of high productivity manufacturing and services industries that rely on science and technology is 2.6%. We can compare this with the USA (25%), China (25%), and the EU (18%). Other comparator countries include Japan (7.9%), Korea (3.7%) and Canada (1.2%).

Does it make sense to call the UK a science superpower? Both on the input measure of the fraction of the world’s science resources devoted to science, and on the size of the industry base this science underpins, the UK is an order of magnitude smaller than the world leaders. In the historian David Edgerton’s very apt formulation, the UK is a large Canada, not a small USA.

Where the UK does outperform is in the academic impact of its scientific output. This does confer some non-negligible soft power benefits in itself. The question to ask now is whether more can be done to deploy this advantage to address the big challenges the nation now faces.

8.2. The UK can’t do everything

The UK’s current problems are multidimensional and its resources are constrained. With less than 3% of the world’s research and development resources, no matter how effectively these resources are deployed, the UK will have to be selective in the strategic choices it makes about research priorities.

In some areas, the UK may have some special advantages, either because the problems/opportunities are specific to the UK, or because history has given the UK a comparative advantage in a particular area. One example of the former might be the development of technologies for exploiting deep-water floating offshore wind power. In the latter category, I believe the UK does retain an absolute advantage in researching nuclear fusion power.

In other areas, the UK will do best by being part of larger transnational research efforts. At the applied end, these can be in effect led by multinational companies with a significant presence in the UK. Formal inter-governmental collaborations are effective in areas of “big science” – which combine fundamental science goals with large scale technology development. For example, in high energy physics the UK has an important presence in CERN, and in radio astronomy the Square Kilometre Array is based in the UK. Horizon Europe offered the opportunity to take part in trans-European public/private collaborations on a number of different scales, and if the UK isn’t able to associate with Horizon Europe other ways of developing international collaborations will have to be built.

But there will remain areas of technology where the UK has lost so much capability that the prospect of catching up with the world frontier is probably unrealistic. Perhaps the hardware side of CMOS silicon technology is in this category (though significant capability in design remains).

8.3. Some pitfalls of strategic and “mission driven” R&D in the UK

One recently influential approach to defining research priorities links them to large-scale “missions”, connected to significant areas of societal need – for example, adapting to climate change, or ensuring food security. This has been a significant new element in the design of the current EU Horizon Programme (see EU Missions in Horizon Europe).

For this approach to succeed, there needs to be a match between the science policy “missions” and a wider, long term, national strategy. In my view, there also needs to be a connection to the specific and concrete engineering outcomes that are needed to make an impact on wider society.

In the UK, there have been some moves in this direction. The research councils in 2011 collectively defined six major cross-council themes (Digital Economy; Energy; Global Food Security; Global Uncertainties; Lifelong Health and Wellbeing; Living with Environmental Change), and steered research resources into (mostly interdisciplinary) projects in these areas. More recently, UKRI’s Industrial Strategy Challenge Fund was funded from a “National Productivity Investment Fund” introduced in the 2016 Autumn Statement and explicitly linked to the Industrial Strategy.

These previous initiatives illustrate three pitfalls of strategic or “mission driven” R&D policy.

  • The areas of focus may be explicitly attached to a national strategy, but that strategy proves too short-lived, and the research programmes it inspires outlive the strategy itself. The Industrial Strategy Challenge Fund was linked to the 2017 Industrial Strategy, but this strategy was scrapped in 2021, even though the same political party remained in government.
  • Research priorities may be connected to a lasting national priority, but the areas of focus within that priority are not sufficiently specified. This leads to a research effort that risks being too diffuse, lacking a commitment to a few specific technologies and not sufficiently connected to implementation at scale. In my view, this has probably been the case in too much research in support of low-carbon energy.
  • In the absence of a well-articulated strategy from central government, agencies such as Research Councils and Innovate UK guess what they think the national strategy ought to be, and create programmes in support of that guess. This then risks lacking legitimacy, longevity, and wider join-up across government.

In summary, mission driven science and innovation policy needs to be informed by a carefully thought-through national strategy that commands wide support, is applied across government, and is sustained over the long term.

8.4. Getting serious about national strategy

The UK won’t be able to use the strengths of its R&D system to solve its problems unless there is a settled, long-term view about what it wants to achieve. What kind of country does the UK want to be in 2050? How does it see its place in the world? In short, it needs a strategy.

A national strategy needs to cut across a number of areas. There needs to be an industrial strategy, about how the country makes a living in the world, how it ensures the prosperity of its citizens and generates the funds needed to pay for its public services. An energy strategy is needed to navigate the wrenching economic transition that the 2050 Net Zero target implies. As our health and social care system buckles under the short-term aftermath of the pandemic, and faces the long-term challenge of an ageing population, a health and well-being strategy will be needed to define the technological and organisational innovation needed to yield an affordable and humane health and social care system. And, after the lull that followed the end of the cold war, a strategy to ensure national security in an increasingly threatening world must return to prominence.

These strategies need to reflect the real challenges that the UK faces, as outlined in the first part of this series. The goals of industrial strategy must be to restore productivity growth and to address the UK’s regional economic imbalances. Innovation and skills must be a central part of this, and given the condition large parts of the UK find themselves in, there need to be conscious efforts to rebuild innovation and manufacturing capacity in economically lagging regions. There needs to be a focus on increasing the volume of high value exports (both goods and services) that are competitive on world markets. The goal here should be to start to close the balance of payments gap, but in addition international competitive pressure will also bring productivity improvements.

An energy strategy needs to address both the supply and demand side to achieve a net zero system by 2050, and to guarantee security of supply. It needs to take a whole systems view at the outset, and to be discriminating in deciding which aspects of the necessary technologies can be developed in the UK, and which will be sourced externally. Again, the key will be specificity. For example, it is not enough to simply promote hydrogen as a solution to the net zero problem – it’s a question of specifying how it is made, what it is used for, and identifying which technological problems are the ones that the UK is in a good position to focus on and benefit from, whether that might be electrolysis, manufacture of synthetic aviation fuel, or whatever.

A health and well-being strategy needs to clarify the existing conceptual confusion about whether the purpose of a “Life Sciences Strategy” is to create high value products for export, or to improve the delivery of health and social care services to the citizens of the UK. Both are important, and in a well-thought through strategy each can support the other. But they are distinct purposes, and success in one does not necessarily translate to success in the other.

Finally, a security strategy should build on the welcome recognition of the 2021 Integrated Review that UK national security needs to be underpinned by science and technology. The traditional focus of security strategy is on hard power, and this year’s international events remind us that this remains important. But we have also learnt that the resilience of the material base of the economy can’t be taken for granted. We need a better understanding of the vulnerabilities of the supply chains for critical goods (including food and essential commodities).

The structure of government leads to a tendency for strategies in each of these areas to be developed independently of each other. But it’s important to understand the way these strategies interact with each other. We won’t have any industry if we don’t have reliable and affordable low carbon energy sources. Places can’t improve their economic performance if large fractions of their citizens can’t take part in the labour market due to long-term ill-health. Strategic investments in the defence industry can have much wider economic spillover benefits.

For this reason it is not enough for individual strategies to be left to individual government departments. Nor is our highly centralised, London-based government in a position to understand the specific needs and opportunities to be found in different parts of the country – there needs to be more involvement of devolved nation and city-region governments. The strategy needs to be truly national.

8.5. Being prepared for the unexpected

Not all science should be driven by a mission-driven strategy. It is important to maintain the health of the basic disciplines, because this provides resilience in the face of unwelcome surprises. In 2019, we didn’t realise how important it would be to have some epidemiologists to turn to. Continuing support for the core disciplines of physical, biological and medical science, engineering, social science and the humanities should remain a core mission of the research councils. The strength of our universities is something we should preserve and be proud of, and their role in training the researchers of the future will remain central.

Science and innovation policy also needs to be able to create the conditions that produce welcome surprises, and then exploit them. We do need to be able to experiment in funding mechanisms and in institutional forms. We need to support creative and driven individuals, and to recognise the new opportunities that new discoveries anywhere in the world might offer. We do need to be flexible in finding ways to translate new discoveries into implemented engineering solutions, into systems that work in the world. This spirit of experimentation could be at the heart of the new agency ARIA, while the rest of the system should be flexible enough to adapt and scale up any new ways of working that emerge from these experiments.

8.7. Building a national strategy that endures

A national strategy of the kind I called for above isn’t something that can be designed by the research community; it needs a much wider range of perspectives if, as is necessary, it’s going to be supported by a wide consensus across the political system and wider society. But innovation will play a key role in overcoming our difficulties, so there needs to be some structure to make sure insights from the R&D system are central to the formulation and execution of this strategy.

The new National Science and Technology Council, supported by the Office for Science and Technology Strategy, could play an important role here. Its position at the heart of government could give it the necessary weight to coordinate activities across all government departments. It would be a positive step if there was a cross-party commitment to keep this body at the heart of government; it was unfortunate that, with the Prime Ministerial changes over the summer and autumn, the body was downgraded and subsequently restored. To work effectively, its relationships with the Government Office for Science and the Council for Science and Technology need to be clarified.

UKRI should be able to act as an important two-way conduit between the research and development community and the National Science and Technology Council. It should be a powerful mechanism for conveying the latest insights and results from science and technology to inform the development of national strategy. In turn, its own priorities for the research it supports should be driven by that national strategy. To fulfil this function, UKRI will have to develop the strategic coherence that the Grant Review has found to be currently lacking.

The 2017 Industrial Strategy introduced the Industrial Strategy Council as an advisory body; this was abruptly wound up in 2021. There is a proposal to reconstitute the Industrial Strategy Council as a statutory body, with a similar status – official but independent of government – to the Office for Budget Responsibility or the Climate Change Committee. This would be a positive way of subjecting policy to a degree of independent scrutiny, holding the government of the day to account, and ensuring some of the continuity that has been lacking in recent years.

8.8. A science and innovation system for hard times

Internationally, the last few years have seen a jolting series of shocks to the optimism that had set in after the end of the cold war. We’ve had a worldwide pandemic, there’s an ongoing war in Europe involving a nuclear armed state, we’ve seen demonstrations of the fragility of global supply chains, while the effects of climate change are becoming ever more obvious.

The economic statistics show decreasing rates of productivity growth in all developed countries; there’s a sense of the worldwide innovation system beginning to stall. And yet one can’t fail to be excited by rapid progress in many areas of technology; in artificial intelligence, in the rapid development and deployment of mRNA vaccines, in the promise of new quantum technologies, to give just a few examples. The promise of new technology remains, yet the connection to the economic growth and rising living standards that we came to take for granted in the post-war period seems to be broken.

The UK demonstrates this contrast acutely. Despite some real strengths in its R&D system, its economic performance has fallen well behind key comparator nations. Shortcomings in its infrastructure and its healthcare system are all too obvious, while its energy security looks more precarious than for many years. There are profound disparities in regional economic performance, which hold back the whole country.

If there was ever a time when we could think of science as being an ornament to a prosperous society, those times have passed. Instead, we need to think of science and technology as the means by which our society becomes more prosperous and secure – and adapt our science and technology system so it is best able to achieve that goal.

From self-stratifying films to levelling up: A random walk through polymer physics and science policy

After more than two and a half years at the University of Manchester, last week I finally got round to giving an in-person inaugural lecture, which is now available to watch on YouTube. The abstract follows:

How could you make a paint-on solar cell? How could you propel a nanobot? Should the public worry about the world being consumed by “grey goo”, as portrayed by the most futuristic visions of nanotechnology? Is the highly unbalanced regional economy of the UK connected to the very uneven distribution of government R&D funding?

In this lecture I will attempt to draw together some themes both from my career as an experimental polymer physicist, and from my attempts to influence national science and innovation policy. From polymer physics, I’ll discuss the way phase separation in thin polymer films is affected by the presence of surfaces and interfaces, and how in some circumstances this can result in films that “self-stratify” – spontaneously separating into two layers, a favourable morphology for an organic solar cell. I’ll recall the public controversies around nanotechnology in the 2000s. There were some interesting scientific misconceptions underlying these debates, and addressing these suggested some new scientific directions, such as the discovery of new mechanisms for self-propelling nano- and micro-scale particles in fluids. Finally, I will cover some issues around the economics of innovation and the UK’s current problems of stagnant productivity and regional inequality, reflecting on my experience as a scientist attempting to influence national political debates.