Taking Anglofuturism seriously

Regular readers of this blog won’t need reminding that the UK is in a stagnant bind, with economic measures like productivity and GDP per person flatlining since the global financial crisis (or earlier). The consequences are felt well beyond these arid economic aggregates: wage growth has slowed, successive governments have struggled to fund acceptable public services, and there’s a palpable sense of sourness and malaise in our politics.

One interesting response to this has been the emergence of a loose constellation of commentators, activists and pressure groups, a techno-optimist movement calling for more houses to be built, for the barriers apparently stopping the country building infrastructure to be swept away, for cheaper and more abundant energy.

Britain Remade wants to “reform the planning process to deliver more clean energy projects, transport infrastructure, and new good quality housing at speed”, while the YIMBY Alliance, as keen subscribers to the “Housing theory of everything”, focuses on the need to build more houses. A very widely discussed paper, Foundations: Why Britain has stagnated, focuses on housing, infrastructure, and the cost of energy. Rian Chad Whitton likewise focuses on high energy prices, connecting them with the decline of the UK’s manufacturing base. UKDayOne focuses on science, innovation and technology as the motor for UK growth and prosperity, particularly emphasising AI and nuclear power.

I’m going to follow Tom Ough and Calum Drysdale in gathering these strands together under the banner “Anglofuturism”. Their eponymous, and interesting, podcast embraces a cheerful and optimistic version of this vision, with its whimsical AI generated illustrations of flying pubs and thatched space stations.

But I believe the term (in its current manifestation, at least) was coined by the journalist Aris Roussinos, in rather darker hues. His was a call for rebuilt state capacity in a definitively post-liberal world, a vision that owed less to Adam Smith and more to Thomas Hobbes, which some readers might think more appropriate to the deteriorating geopolitical situation we face.

I don’t think there is an entirely consistent underlying political ideology here, but I think it’s fair to say that there’s a common centre of gravity on the centre right. This isn’t the place to analyse political antecedents or implications, and I’m not the right person to do that, but I do want to make some remarks about this emerging movement.

There is much in this agenda that I applaud and agree with. The UK needs to get back to productivity growth, and there is no fundamental reason why that shouldn’t happen. We haven’t reached some final technological barrier – far from it. And I think there’s a profoundly humanistic perspective at work here – people should be able to enjoy the fruits of prosperity.

Of course, there is an opposing argument that believes that continued economic growth is inconsistent with planetary limits. It’s clear that we need to move to a new model of economic growth that doesn’t impose externalities on the global environment, and in particular we need to shift our energy economy to one that doesn’t depend on fossil fuels. But to embrace “degrowth” is in my view both politically infeasible and, if sufficient will and resources are applied, technologically unnecessary. To put it another way, the last 15 years in the UK have been an experiment in degrowth, and the results have been ugly.

There’s an undercurrent of generational justice here too. The perception that young people in the UK can’t look forward to the same lifestyle as their parents is profoundly depressing. Nowhere is this more obvious than in the unaffordability of housing.

Where I think these analyses are less convincing is in identifying the origins of our current problems. In particular, I think an explanation of our current productivity stagnation needs to account for its timing. It’s certainly convincing to argue, as these authors do, that we would be better off if the UK had built more infrastructure over the last few decades, but I don’t think they really convince in talking about what conditions would have produced that outcome. Anglofuturism, in all its varieties, could be accused of willing worthy ends, without really specifying the means.


Labour productivity in the UK since the Industrial Revolution. Data from the Bank of England A millennium of macroeconomic data dataset, plot & fits by the author.

The Foundations paper puts a lot of blame on the 1947 Town and Country Planning Act – and the wider Attlee settlement. But I don’t think this makes sense in terms of the timing. As my figure shows, the period of fastest productivity growth in the entire history of the UK took place between 1948 and 1972. In fact, Roussinos harks back to this period, referring to “the optimism and high modernism of the post-war era, a vanished world of frenetic housebuilding and technological innovation where British scientific research could lead the world, and produce higher living standards through its fusion with well-paid, high-skilled labour.”


Labour productivity in the UK since 1970. ONS data, fit by the author. For the rationale for putting the break around 2005, see When did the UK’s productivity slowdown begin?

What needs to be explained is that the current slowdown began in the mid-2000s. There is some overlap with a developing consensus view from mainstream economics that the immediate problem has been a lack of investment in the UK economy (see e.g. The Productivity Agenda). This includes public investment in hard infrastructure, private investment in capital goods, and investment in intangibles like R&D. In my own work I’ve emphasised the significant reduction in the R&D intensity of the UK economy between 1980 and 2005, and given the generally technocentric flavour of the Anglofuturists, I’m surprised that this aspect isn’t more prominent in their arguments.


From Research, innovation and the R&D landscape, by R.A.L. Jones, in The Productivity Agenda.

Even if one agrees that investment levels have been too low, there isn’t really a consensus about the ultimate cause of the lack of investment. One common thread is a sense that building infrastructure in the UK has become too expensive because of excessive regulation. In one sense, this is a reflection of the fact that the comparative advantage of the UK is to be found in professional services. One can celebrate the fact that the UK has become a “services superpower”, but the downside was caustically expressed in this comment from Dan Davies:

Giles Wilkes has discussed what he terms the “crud economy” at a bit more length. Economic actors respond to incentives, and this doesn’t always direct activity towards where we need it. As Giles puts it: “We need vastly more clean energy, actual hard defence equipment for handling nasty rogue nations, the soldiers to use it, and much more numerous and productive care and health workers for the ageing population. Mitigating the dangerous effects of climate change is going to take real physical capital and effort. These are actual hard problems – and being able to produce more streaming videos, intelligent AI-related chat, or brilliant legal ‘solutions’ to financial market problems is not exchangeable for the assets we need for the real problems. Just because the lawyer’s fee is expressed in dollars, and so is the cost of transforming the US electricity system, doesn’t mean the two can get traded together.”

One thing all branches of Anglofuturism agree on is the need for abundant, cheap energy, and on the bad economic effects that current high industrial energy prices are causing. This clearly arouses strong feelings, to judge by the vehement online reaction to Tom Forth’s comments on the subject, entirely reasonable from a classical market liberal perspective, arguing that, while this situation was not good, it was “a smaller problem and of a lower priority than many other restrictions on growth in Britain.”

I agree that it would be better if energy prices in the UK were lower, but I think it is important to understand how this situation has arisen. High industrial energy prices are now causing serious problems for what industry remains in the UK, but I don’t think they can be blamed for the UK’s greater degree of deindustrialisation than its neighbours’, which took place at a time when energy prices were low and falling.

The decision the UK government made in the 1980s was to treat energy as just another commodity whose supply could be left to the market. As it happened, this coincided with the moment when the UK switched from being a net importer of energy to being a net exporter, having found abundant supplies of natural gas and oil in the North Sea. North Sea oil and gas production peaked around 2000, and the country became an energy importer again in 2004. The UK’s relative success in decarbonising its electricity supply initially relied on an early switch from coal to gas; even after the more recent expansion of offshore wind, the price of electricity is set by the internationally traded price of gas. This was fine until it wasn’t – in the 2022 gas price spike.

If our problem is that we rely on imported gas, whose fluctuating price is beyond our control, together with offshore wind, which is necessarily intermittent (as well as being generated a long way from where it is needed, connected by an inadequate grid), would it not be better if a much higher proportion of our energy was generated by nuclear fission?

An enthusiasm for nuclear power is a common thread running through all strands of Anglofuturism, and it’s one with which I have much sympathy. For all the progress there’s been in renewable energy, in 2022, 77.8% of our energy still came from oil, gas and coal, and I think it’s going to be difficult to have a fossil-fuel-free energy economy which doesn’t depend on some nuclear power to provide firm energy. I deeply regret the failure of the nuclear new build programme of recent governments – of the 18 GW of new generating capacity planned in 2014, only 3.2 GW is even under construction.

But I think it is important, and salutary, to understand why this failure has occurred. My recent blog posts go into the story of the UK’s civil nuclear power programme in some detail. There are ways in which the regulatory and planning framework for civil nuclear could be streamlined, but the fundamental problem with Hinkley C wasn’t the fish disco. It was the fact that the UK government wanted the Chinese state to pay for it, and the French state to build it, as the UK state no longer had the will or capacity to do either.

The UK’s own civil nuclear industry was killed in the 1990s; in an environment of high interest rates and low natural gas prices, and an ideological commitment to leave energy supply to the market, there was no place for it. I do think the UK should recreate its capacity to build nuclear power stations, including the small modular reactors that are currently attracting much attention, but I don’t think this will happen without substantial state intervention.

I agree with the Anglofuturists that we shouldn’t resign ourselves to our current economic failures. I think we need to ask ourselves what has gone wrong with the variety of capitalism that we have, that has led us to this stagnation. It’s a problem that’s not unique to the UK, but it seems to have affected the UK more seriously than most other developed countries. The slowdown seems to have begun in the 2000s, crystallising fully with the Global Financial Crisis.

This timing points to changes in the nature of capitalism and political economy that took hold in the decades after 1980, with the ascendancy of market liberalism, the doctrine of shareholder value in corporate management, and an enthusiasm for outsourcing government functions to private contractors, no matter how central to the core purposes of the state they might appear to be. In the UK, even the Atomic Weapons Establishment has been run by private contractors since 1989, with the government only taking ownership and control back from Serco in 2020.

We have a new form of globalisation that followed from abolishing capital controls, together with a conviction that one doesn’t need to worry about the balance of payments, even though the persistent trade deficits the UK has run since then have meant that ownership and control of national assets have moved overseas. We have a financial system that seems unable to direct resources to those activities that lead to long-term growth. We have a hollowed-out state that now lacks the capacity even to be an informed and effective contractor for services.

I agree with the Anglofuturists that our current stagnation isn’t inevitable, and I applaud their lack of defeatism. It doesn’t have to be this way – but to get beyond our current malaise, I think we need to ask some deeper questions about how our economy is run.

Revisiting the UK’s nuclear AGR programme: 3. Where next with the UK’s nuclear new build programme? On rebuilding lost capabilities, and learning wider lessons

This is the third and concluding part of a series of blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects.

In the second post, “What led to the AGR decision? On nuclear physics – and nuclear weapons” I turned to consider the technical and political issues that led to this decision.

In this post, I bring the story up to date, discussing why post-2010 plans for new nuclear build have largely failed, and look to the future, with new ambitions for small modular reactors – and, ironically, a potential return to high temperature, gas cooled reactors that represent an evolution of the AGR.

Into the 2010’s and beyond – the UK’s failed Nuclear New Build programme

In the early 2010’s, the Coalition Government developed an ambitious plan to replace the UK’s ageing nuclear fleet, with new light water reactors to be built on the existing nuclear sites, involving four different designs from four different vendors. The French state nuclear company was to build two of its next generation pressurised water reactors – the European Pressurised Water Reactor (EPR) – at Hinkley, and another two at Sizewell. The Chinese state nuclear corporation, CGN, would install two (or possibly three) of its own PWR designs at Bradwell. At Moorside, in Cumbria, Toshiba/Westinghouse would build three of its AP1000 PWRs. At Wylfa, in North Wales, Hitachi would build two Advanced Boiling Water Reactors, with another two ABWRs to be built at Oldbury. In total this would give 18 GW of new nuclear capacity, producing roughly double the output of the AGR fleet. In 2013, this programme formally got underway, with the announcement of a deal with EDF to deliver the first of these new plants, at Hinkley Point.

This programme has largely failed. A decade on, only one project is under construction – Hinkley Point C, where the best estimate for when the two EPRs will come into service is 2030. The cost for this 3.2 GW capacity is now estimated at between £31 bn and £34 bn, in 2015 prices, compared to an original estimate of £20 bn. To put this into context, the last nuclear power station built in the UK, the PWR at Sizewell B, cost about £2 bn, in 1987 prices, for a 1.2 GW unit. Scaling this to the 3.2 GW capacity of the Hinkley Point project, and accounting for inflation, would correspond to about £12 bn in 2015 prices. Where has this near-threefold real increase in nuclear construction cost since Sizewell B come from? There are essentially two broad classes of reasons.
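The scaling arithmetic above can be sketched in a few lines. To be clear about the assumptions: the 1987-to-2015 price deflator of about 2.25 is my own back-calculation from the "about £12 bn" figure, and cost is assumed to scale linearly with capacity – both illustrative simplifications, not official figures.

```python
# A sketch (not official figures) of the cost comparison in the text.
# Assumptions: cost scales linearly with capacity; the 1987->2015 price
# deflator of ~2.25 is back-calculated from the "about £12 bn" figure.

sizewell_b_cost_1987 = 2.0    # £bn, 1987 prices, for a 1.2 GW unit
sizewell_b_capacity = 1.2     # GW
hinkley_c_capacity = 3.2      # GW
deflator_1987_to_2015 = 2.25  # assumed, illustrative

# Sizewell B's cost, scaled to Hinkley C's capacity, restated in 2015 prices
scaled_cost_2015 = (sizewell_b_cost_1987
                    * hinkley_c_capacity / sizewell_b_capacity
                    * deflator_1987_to_2015)

hinkley_c_estimate = (31 + 34) / 2  # £bn, 2015 prices, midpoint of estimate
cost_ratio = hinkley_c_estimate / scaled_cost_2015

print(f"Scaled Sizewell B benchmark: £{scaled_cost_2015:.0f} bn (2015 prices)")
print(f"Real cost multiple at Hinkley C: {cost_ratio:.1f}x")
```

On these assumptions the benchmark comes out at £12 bn and Hinkley C at roughly 2.7 times it in real terms.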

Firstly, more recent designs of pressurised water reactor, such as the EPR, or the Westinghouse AP1000, have a number of new safety features, to mitigate some of the fundamental weaknesses of the pressurised water reactor design, particularly its vulnerability to loss of coolant accidents. These new features include methods for passive cooling in the case of loss of power to the main cooling system, a “core catcher” system which contains molten core material in the event of a meltdown, and more robust containment systems, designed to resist, for example, an aircraft crashing into the reactor building. These new features all add unavoidable extra cost.

In addition to these unavoidable cost increases, some of the increase in construction cost must reflect a substantial real reduction in the UK’s ability to deliver a big complex project like a nuclear power station. One would hope that, if subsequent power stations are built to the same design with the construction teams kept in place, these costs could be reduced in the light of experience, the development of functional supply chains, and the creation of a skilled workforce.

A sister plant to Hinkley Point C, at Sizewell, has received a nuclear site licence but awaits a final investment decision. The capital for Hinkley Point C was provided entirely by its investors, which included the French state-owned energy company EDF and the Chinese state nuclear company CGN, in return for a guarantee of a fixed price for the electricity the plant generates over its first 35 years of operation. Thus the cost of the budget overrun is borne by the investors, not the UK government or UK consumers. The deal was constructed in a way that was very favourable to the investors, so there was some cushion there, but the experience of Hinkley Point C means that it’s now impossible to attract investors to build further power stations on these terms. The financing for Sizewell C, if it goes ahead, will involve more direct UK state investment, as well as payments to the company building it while the reactor is under construction. These up-front payments will be added to electricity consumers’ bills through the so-called “Regulated Asset Base” mechanism, reducing the cost to the company of borrowing money during the long construction period.
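Why paying during construction matters can be illustrated with a minimal compounding sketch. Every number below (annual spend, build time, the two costs of capital) is hypothetical, chosen only to show the effect of cheaper borrowing over a long construction period; none are Sizewell C figures.

```python
# Illustrative sketch only: each year's construction spend accrues compound
# interest until completion, so a lower cost of capital (as under the RAB
# mechanism) shrinks the finished cost. All numbers are hypothetical.

def financed_cost(capex_per_year, years, rate):
    """Total cost at completion if each year's spend compounds until the end."""
    return sum(capex_per_year * (1 + rate) ** (years - y) for y in range(years))

capex_per_year = 2.0  # £bn spent in each construction year (hypothetical)
years = 10            # construction period (hypothetical)

merchant = financed_cost(capex_per_year, years, rate=0.09)  # risk-bearing investors
rab = financed_cost(capex_per_year, years, rate=0.03)       # consumer-backed borrowing

print(f"Completion cost at 9% cost of capital: £{merchant:.1f} bn")
print(f"Completion cost at 3% cost of capital: £{rab:.1f} bn")
```

With these toy numbers, £20 bn of raw spend becomes about £33 bn at a 9% cost of capital but under £24 bn at 3% – the gap is entirely accumulated financing cost.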

So, sixteen years on from the in-principle commitment to return to nuclear power, no plant has yet been completed, and the best that can be hoped for from the plan to build 18 GW of new capacity is that we will have 6.4 GW of capacity from Hinkley C, and Sizewell C, if the latter goes ahead.

Why has the UK’s nuclear new build programme failed so badly? The original plans were misconceived on many levels. The plan to involve the Chinese state so closely seemed naive at the time, and given the changed geopolitical environment since then, it now seems almost unbelievable that a UK government could countenance it. The idea of having multiple competing vendors and designs makes it much more difficult to drive costs down through “learning by doing”; the most successful build-outs of nuclear power – in France and Korea – have relied on “fleet build” – sequential installations of standardised designs. And the reliance on overseas investors and overseas designs meant that the UK had no control over the supply chain, meaning that little of the high value work involved in the programme would benefit the UK economy.

At the root of this failure were the UK government’s unwise ideological commitments to privatised energy markets, making it resist any subsidies for nuclear power, and refuse to issue new government debt to pay for infrastructure. The legacy of the run-down of the UK’s civil nuclear programme in the 1990’s was a lack of significant UK government expertise in the area, making it an uninformed and naive customer, and a lack of an industry in the UK in a position to benefit from the expenditure.

Could there be another way? Since 2014, the UK government has expressed interest in the idea of small modular reactors (SMRs), and has given some support for design studies, with the UK company Rolls-Royce setting up a unit to commercialise them.

Back to the future – hopes for light water small modular reactors

There’s been a seemingly inexorable trend towards larger and larger pressurised water reactors – and, as we have seen at Hinkley C, that trend of increasing size has been accompanied by a dismal record of cost overruns and construction delays. There are, in principle, economies of scale in operating costs to be gained with very large units. But, as I’ve stressed above, the economics of nuclear power is dominated by the upfront capital cost of building reactors in the first place. If one, instead, built multiple smaller reactors, small enough for much of the construction to take place in factories, where manufacturing processes could be optimised over multiple units, one might hope to drive the costs down through “learning by doing”. This is the logic behind the enthusiasm for small modular reactors.
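The "learning by doing" hope can be made concrete with Wright's law, under which unit cost falls by a fixed fraction with each doubling of cumulative production. The 10% learning rate and the first-unit cost below are purely illustrative assumptions on my part, not figures from any SMR programme.

```python
import math

# Wright's law sketch: unit cost falls by a fixed fraction (the learning rate)
# every time cumulative production doubles. Both the learning rate and the
# first-unit cost are hypothetical, illustrative numbers.

def unit_cost(first_unit_cost, n, learning_rate=0.10):
    """Cost of the n-th unit under Wright's law."""
    b = math.log2(1 - learning_rate)  # progress exponent (negative)
    return first_unit_cost * n ** b

first = 2.2  # £bn for the first factory-built unit (hypothetical)
for n in (1, 2, 4, 8, 16):
    print(f"unit {n:2d}: £{unit_cost(first, n):.2f} bn")
```

At a 10% learning rate, the sixteenth unit costs about two thirds of the first – which is the whole economic case for building many small standardised units rather than a few bespoke giants.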

There’s nothing new about small pressurised water reactors – by the standards of today’s power reactors, Admiral Rickover’s submarine reactors were tiny. Significantly, as I discussed above, the only remaining UK capability in nuclear reactors is to be found in Rolls-Royce, the company that makes reactors for the Royal Navy’s submarines. But the design criteria for a submarine reactor and for a power reactor are very different – while the experience of designing and manufacturing submarine reactors will have some general value in the civil sector, the design of a civil small modular reactor will need to be very different from that of a submarine reactor.

Rolls-Royce is one of five companies currently bidding for a role in a UK civil SMR programme. Its design has passed the second of three stages in the process of gaining regulatory approval for the UK market. The Rolls-Royce proposal is for a 470 MWe pressurised water reactor, using conventional PWR fuel of low enrichment (in contrast to the very highly enriched fuel used in submarine reactors). The design is entirely new, though technically rather conservative.

A power output of 470 MWe is not, in fact, that small – it is very much in the range of the civil PWRs being built in the early 1970’s – compare, for example, the VVER-440 reactors built by the USSR and still widely installed and operating in the former USSR and Eastern Europe. The Rolls-Royce design, in contrast to the VVER-440s, does include the safety features found in the larger, recent PWR designs, including much more robust containment, a “core catcher”, and passive cooling to cope with a loss of coolant accident, and it will incorporate much more modern materials, control systems, and manufacturing technologies.

There have been suggestions that SMRs could be sited more widely across the country, in towns and cities away from established nuclear sites. This isn’t the plan for any UK SMRs – they are in any case too large for this to make sense. Instead, the idea is to have multiple installations on existing licensed nuclear sites, such as Wylfa and Oldbury.

The other entrants to the SMR competition are two well-established vendors of large light water reactors, Westinghouse and GE-Hitachi, and two more recent arrivals in the market from the USA, Holtec and NuScale. Since none of these companies has actually delivered an SMR, the decision will have to be made on judgements about capability: experience shows us that there can be no certainty about cost until one has been built. But, in making the decision, the UK government will need to decide how strongly to weight the need to rebuild UK industrial capacity and nuclear expertise against pure “value for money” criteria.

The Next Generation? Advanced Modular Reactors

The light water SMR represents an incremental update of a technology developed in the 1950’s, at a scale that was being widely deployed in the 1970’s. Is it possible to break out from the technological lock-in of the light water reactor, to explore more of the very wide design space of possible power reactors? That is the thinking behind the idea of developing an Advanced Modular Reactor – keeping the principle of relatively small scale and factory based modular construction, but using fundamentally different reactor designs, with different combinations of moderator and coolant to achieve technical advantage over the light water reactor. In particular, it would be very attractive to have a reactor that ran at a significantly higher temperature than a light water reactor. A high temperature reactor would have higher conversion efficiency to electrical power, and in addition it might be possible to use the heat directly to drive industrial processes – for example making hydrogen as an energy vector and as a non-oil based feedstock for the petrochemical industry, including to make synthetic hydrocarbons for zero carbon aviation fuel.
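The efficiency point can be made with a back-of-envelope Carnot calculation – my illustration, not from any assessment. The outlet temperatures are representative assumed values, and real plants achieve well below these ideal thermodynamic limits.

```python
# Ideal Carnot efficiency between a reactor's coolant outlet temperature and a
# ~30 °C heat sink. The outlet temperatures are representative assumptions;
# real plant efficiencies are substantially below these thermodynamic limits.

def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Ideal Carnot efficiency for hot/cold reservoir temperatures in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1 - t_cold_k / t_hot_k

pwr = carnot_efficiency(320.0)   # typical PWR coolant outlet, ~320 °C
htgr = carnot_efficiency(750.0)  # high temperature gas reactor, ~750 °C

print(f"Ideal efficiency, PWR (~320 °C):  {pwr:.0%}")
print(f"Ideal efficiency, HTGR (~750 °C): {htgr:.0%}")
```

The ideal limit rises from roughly 49% to roughly 70% as the hot-side temperature goes from ~320 °C to ~750 °C, which is why higher operating temperature translates into better conversion efficiency.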

We are also seeing a resurgence of interest in reactors using unmoderated (fast) neutrons. This is partly motivated by the possibility of breeding fissile material, thus increasing the efficiency of fuel use, and partly by the fact that fast neutrons can induce fission in the higher actinides that are particularly problematic as contaminants of used nuclear fuel. There’s an attractive symmetry in the idea of using the UK’s very large stock of civil plutonium to “burn up” nuclear waste.

The UK government commissioned a technical assessment of potential candidates for an advanced modular reactor. This considered fast reactors cooled by liquid metals, both sodium and lead, as well as a gas-cooled fast reactor. Another intriguing possibility that has generated recent interest is the molten salt reactor, in which the fissile material is dissolved in fluoride salts; here the molten salt acts as both fuel and coolant. Reactor designs using a thermal neutron spectrum include an evolution of the boiling water reactor which uses water in the supercritical state. All of these designs have potential advantages, but the judgement of the study was that, of these, only the sodium fast reactor was close enough to deployment to be worth considering.

However, the study made a clear recommendation in favour of a high temperature, gas cooled thermal neutron reactor. Here, the moderator is graphite and the coolant is helium, as in the Advanced Gas Cooled Reactors. The main difference from the AGRs is that, in order to operate at higher temperatures, the fuel is presented in spherical particles around a millimetre in diameter, in which uranium oxide is coated with graphite and encapsulated in a high temperature resistant refractory ceramic such as silicon carbide. There is considerable worldwide experience in making this so-called tristructural isotropic (TRISO) fuel, which is able to withstand operating temperatures in the 700 – 850 °C range. Modifications of these fuel particles – for example, using zirconium carbide as the outer layer – could permit operation at even higher temperatures, high enough to split water into hydrogen and oxygen through purely thermochemical processes. But this would need further research.

A Chronicle of Wasted Time

What’s striking about many of the proposals for an advanced modular reactor is that the concepts are not new. For example, work on sodium cooled fast reactors began in the UK in the 1950s, with a full scale prototype being commissioned in 1974. Lead cooled reactors were built in both the USA and the USSR. Molten salt reactors perhaps represent the most radical design departure, but even here, a working prototype was developed in Oak Ridge National Laboratory, USA, in the 1960s.

One of the reasons for the UK AMR Technical Assessment favouring the High Temperature Gas Reactor is that it builds on the experience of the UK in running a fleet of gas cooled, graphite moderator reactors – the AGRs. In fact, the UK, as part of an international collaboration, operated a prototype high temperature gas reactor between 1964 and 1976 – DRAGON. It was in this project that the TRISO fuel concept was developed, which has since been used in operational high temperature gas reactors in the USA, Germany, Japan and China.

At the peak of the 1970’s energy crisis, from 1974 to 1976, construction began on more than a hundred nuclear reactors across the world. Enthusiasm for nuclear power dwindled throughout the 1980’s, suppressed on the one hand by the experience of the nuclear accidents at Three Mile Island and Chernobyl, and on the other by an era of cheap and abundant fossil fuels. In the three years from 1994 to 1996, just three new reactors were begun worldwide. In this climate, there was no appetite for new approaches to nuclear power generation, technology development stagnated, and much tacit knowledge was lost.

Some concluding thoughts

In 1989, the UK’s Prime Minister Margaret Thatcher made an important speech to the United Nations highlighting the importance of climate change. It was her proposal that the work of the Intergovernmental Panel on Climate Change be extended beyond 1992, and that there should be binding protocols on the reduction of greenhouse gases; naturally, given her political perspective, she stressed the importance of continued economic growth, and of private sector industry in driving innovation. She reasserted her support for nuclear power, which she described as “the most environmentally safe form of energy”. As far as the UK was concerned, “we shall be looking more closely at the role of non-fossil fuel sources, including nuclear, in generating energy.”

Since Thatcher’s speech, another thousand billion tonnes of carbon dioxide have been released into the atmosphere from industry and the burning of fossil fuels, raising the atmospheric concentration of CO2 from 350 parts per million in 1989 to 427 ppm now. To be fair, one should also recognise that the worldwide nuclear power industry has produced 390,000 tonnes of spent nuclear fuel, yielding 29,000 cubic metres of high level waste. This needs to be permanently disposed of in deep geological repositories, the first of which is nearing completion in Finland.

But even as Thatcher was speaking, the expansion of nuclear power was stalling. In the UK it was Thatcher’s own Chancellor of the Exchequer who had in effect killed nuclear power, through the lasting impact of his ideological commitment to privatised energy markets in an environment of cheap fossil fuels.

To be clear, what killed the UK’s nuclear energy programme was not a wrong choice of reactor design; it was a combination of high interest rates and low fossil fuel prices, in the context of a worldwide retreat from nuclear new build, with a strong anti-nuclear movement driven by the accidents at Three Mile Island and Chernobyl, by the (correctly) perceived connection between civil nuclear power and nuclear weapons programmes, and by the problem of nuclear waste. The circumstances of the UK were particularly conducive to a continued dependence on fossil fuels: the discovery of North Sea oil and gas gave the UK, now a net energy exporter, a 15 year holiday from having to worry about the geopolitics of energy dependence.

But, for industrial nations, security of access to adequate energy supplies has always been an issue of existential importance, too often driving conflict and war. The Ukrainian war has given us a salutary reminder of the importance of energy supplies to geopolitics. Energy is never just another commodity.

The effective termination of the UK’s civil nuclear programme in the 1990’s undoubtedly saved money in the short term. That money could have been used for investment – future-proofing the UK’s infrastructure, supporting R&D to create new technologies. Political choices meant that it wasn’t: this was a period of falling public and private investment, and the savings instead supported consumption. But there were costs, in terms of lost capacity in both industry and the state. Technological regression is possible, and one could argue that this is what has happened in civil nuclear power. In the UK, we have felt the loss of that capacity very directly now that policy has changed, in the failure of the last decade’s new nuclear build. Energy decisions should never just be about money.

Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular, the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in “High Technology” from post-war Britain, the decision to choose the Advanced Gas Cooled reactor design for the UK’s second generation reactor programme was forced through by “state technocrats, hugely influential scientists and engineers from the technical branches of the civil service”; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US-designed Pressurised Water Reactor – a choice which Henderson argued, in his highly influential article “Two British Errors: Their Probable Size and Some Possible Lessons”, was one of the UK government’s biggest policy errors? To go some way towards answering that, it’s necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy and emitting a handful of extra neutrons, which go on to cause more fissions. The dominant fissile material in today’s civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to “enrich” the uranium, increasing the concentration of U-235 – typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons grade uranium differs in degree, not kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.
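The point that weapons enrichment differs in degree, not kind, can be made quantitative with the standard separative work unit (SWU) accounting used by the enrichment industry. The sketch below is my illustration, not from the original post; the feed and tails assays are typical assumed values.

```python
import math

def V(x):
    # Separative potential: the standard value function of enrichment accounting
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(xp, xf=0.00711, xw=0.0025):
    """Separative work (kg SWU) per kg of product at assay xp, starting from
    feed assay xf (natural uranium) with tails assay xw (assumed values)."""
    F = (xp - xw) / (xf - xw)   # kg of feed needed per kg of product (mass balance)
    W = F - 1.0                  # kg of tails per kg of product
    return V(xp) + W * V(xw) - F * V(xf)

civil = swu_per_kg_product(0.035)   # ~3.5% enrichment, light water reactor fuel
weapons = swu_per_kg_product(0.90)  # ~90% enrichment, weapons grade
print(f"civil fuel: ~{civil:.1f} SWU/kg; weapons grade: ~{weapons:.1f} SWU/kg")
```

Running this gives a few SWU per kilogram for civil fuel and roughly two hundred for weapons-grade material – a difference of degree, achievable with the same cascade technology run for longer.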

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because of the lower absorption cross-section for fast neutrons, this needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium).

Moderators need to be made of a light element which doesn’t absorb too many neutrons. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) and deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more than deuterium does, so it’s a less ideal moderator, but it is obviously much cheaper.
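The relative merit of these moderators can be sketched with a standard back-of-envelope calculation (my illustration, using the common textbook approximation for the mean logarithmic energy loss per elastic collision) of how many collisions each needs to slow a fission neutron to thermal energies:

```python
import math

def xi(A):
    # Mean logarithmic energy loss per elastic collision with a nucleus of
    # mass number A: exactly 1.0 for hydrogen, approximately 2/(A + 2/3) otherwise
    return 1.0 if A == 1 else 2.0 / (A + 2.0 / 3.0)

E_FISSION = 2.0e6   # eV, a typical fission neutron energy
E_THERMAL = 0.025   # eV, room-temperature thermal energy

def collisions_to_thermalise(A):
    # Number of elastic collisions to slow a neutron from fission to thermal energy
    return math.log(E_FISSION / E_THERMAL) / xi(A)

for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    print(f"{name}: ~{collisions_to_thermalise(A):.0f} collisions")
```

The familiar textbook figures emerge – roughly 18 collisions for hydrogen, about 25 for deuterium, and well over 100 for carbon – which is one reason a graphite-moderated core has to be physically much larger than a water-moderated one.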

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictates, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies is in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors (“light water” in contrast to “heavy water”, in which the normal hydrogen of ordinary water is replaced by its heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient for driving a steam turbine to generate electricity. But it’s not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.
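The handful of combinations actually deployed at scale can be laid out as a small sketch (the pairings and comments below are all drawn from the discussion in this post):

```python
# (moderator, coolant) pairs and the reactor families built around them
designs = {
    ("light water", "light water"):        "PWR / BWR (enriched fuel, compact core)",
    ("heavy water", "heavy water"):        "CANDU (natural uranium fuel)",
    ("graphite", "carbon dioxide"):        "Magnox / AGR (the UK gas-cooled line)",
    ("graphite", "light water"):           "RBMK (Soviet design, lightly enriched fuel)",
    ("none (fast neutrons)", "liquid sodium"): "fast breeder (highly enriched U or Pu fuel)",
}

for (moderator, coolant), family in designs.items():
    print(f"moderator: {moderator:22s} coolant: {coolant:15s} -> {family}")
```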

These weren’t problems for the original application of pressurised water reactors (PWRs, the most common type of light water reactor; the other variety, the Boiling Water Reactor, similarly uses light water as both coolant and moderator, the difference being that steam is generated directly in the reactor core rather than in a secondary circuit). These were designed to power submarines, in a military context where enriched uranium was readily available and where compact size is a great advantage. But this heritage underlies the great weakness of light water reactors – their susceptibility to what’s known as a “loss of coolant accident”. The problem is that, if for some reason the flow of cooling water stops, then even if the chain reaction is quickly shut down (and this isn’t difficult to do), the fuel produces so much heat through radioactive decay that it can melt the fuel rods, as happened at Three Mile Island. What’s worse, the alloy that the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.
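The scale of the decay heat problem can be illustrated with the Way-Wigner rule of thumb for fission-product decay heat – a rough textbook approximation (here assuming about a year of prior operation), emphatically not a safety calculation:

```python
def decay_heat_fraction(t, T_op=3.15e7):
    """Way-Wigner approximation: fission-product decay heat, as a fraction of
    the prior operating power, t seconds after shutdown, following T_op
    seconds of operation (default roughly one year)."""
    return 0.0622 * (t ** -0.2 - (t + T_op) ** -0.2)

for label, t in [("1 second", 1.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 month", 2.6e6)]:
    pct = 100 * decay_heat_fraction(t)
    print(f"{label:8s} after shutdown: ~{pct:.2f}% of full power")
```

On these numbers, a reactor producing 3 GW of thermal power is still generating well over 100 MW of heat a second after shutdown – which is why continued cooling is non-negotiable even after the chain reaction has stopped.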

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard-to-make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it’s of sufficiently high purity). But being a solid, it needs a separate coolant. The UK’s Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that formed the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the fact that the moderator and coolant absorb fewer neutrons than light water means that the core need not be so compact.

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of the other nuclear nations: the USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history imposes a path-dependence on the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since only the minority uranium-235 isotope is fissile, it’s necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.

As the history of nuclear weapons is usually told, it is the physicists who are given the most prominent role. But there’s an argument that the crucial problems to be overcome were as much ones of chemical engineering as of physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any separation process must rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then to use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed “Tube Alloys”. In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature of it is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, arguably it represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI in Runcorn in the 1930’s, and came to fruition with the establishment of a full-scale gaseous diffusion plant in Capenhurst, Cheshire, in the late 40s and early 50s.

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before the formal decision to make a nuclear weapon was taken, in 1947, the infrastructure for the UK’s own nuclear weapons programme had been put in place, reflecting the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK, to make plutonium. This reflected the experience of the Manhattan Project, which had highlighted the potential of the plutonium route to a nuclear weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239, but also small amounts of another isotope, plutonium-240. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of a nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and a low explosive yield. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the levels of plutonium-240 that can be tolerated in weapons grade plutonium, and these impose constraints on the design of the reactors used to produce it.
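To get a feel for the numbers, here is a rough sketch of the spontaneous-fission neutron background in weapons-grade versus reactor-grade plutonium. The per-gram emission rate and the isotopic fractions are indicative, order-of-magnitude values I have assumed for illustration, not precise nuclear data:

```python
# Assumed, order-of-magnitude spontaneous-fission neutron yield of Pu-240
SF_NEUTRONS_PER_G_PU240 = 1.0e3  # roughly 10^3 neutrons per second per gram

def neutron_background(mass_kg, pu240_fraction):
    # Spontaneous-fission neutrons per second from the Pu-240 in a plutonium mass
    return mass_kg * 1000 * pu240_fraction * SF_NEUTRONS_PER_G_PU240

weapons_grade = neutron_background(6.0, 0.06)  # ~6 kg at ~6% Pu-240 (assumed)
reactor_grade = neutron_background(6.0, 0.25)  # same mass at ~25% Pu-240 (assumed)
print(f"weapons grade: ~{weapons_grade:.1e} n/s; reactor grade: ~{reactor_grade:.1e} n/s")
```

Even weapons-grade material produces hundreds of thousands of background neutrons per second – far too many for the slow assembly of a gun-type device – and reactor-grade material is several times worse again, which is why implosion is mandatory and Pu-240 levels are so tightly limited.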

The two initial UK plutonium production reactors were built in Sellafield – the Windscale Piles. The fuel was natural, unenriched, uranium (necessarily, because the uranium enrichment plant in Capenhurst had not yet been built), so this dictated the use of a graphite moderator. The reactors were air-cooled. The first reactor started operations in 1951, with the first plutonium produced in early 1952, enabling the UK’s first, successful, nuclear weapon test in October 1952.

But even as the UK’s first atom bomb test succeeded, it was clear that the number of weapons the UK’s defence establishment was calling for would demand more plutonium than the Windscale piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second generation plutonium-producing reactors also producing power. The design would use graphite moderation, as in the Windscale piles, and natural uranium as a fuel, but rather than being air-cooled, the coolant was high pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It’s important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature that this requirement dictated was the need to remove the fuel rods while the reactor was running; for weapons grade plutonium, the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production, this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under the control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons grade plutonium, using longer burn-up times that produced plutonium with higher concentrations of Pu-240. This so-called “civil plutonium” was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of it. Did the civil Magnox reactors produce any weapons grade plutonium? I don’t know, but I believe that there is no technical reason that would have prevented it.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron colliding with a fissile nucleus is smaller than for a slow neutron, so this means that a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both need to be in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is good enough as a coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.
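The neutron arithmetic behind breeding can be sketched as follows. The values of eta (neutrons produced per neutron absorbed in the fuel) and the loss term are indicative assumptions for illustration, not precise nuclear data:

```python
def breeding_ratio(eta, losses=0.3):
    # Crude estimate of new fissile atoms created per fissile atom consumed:
    # one neutron sustains the chain reaction, 'losses' covers leakage and
    # parasitic absorption, and whatever remains can convert U-238 to Pu-239
    return eta - 1.0 - losses

eta_values = {
    ("Pu-239", "fast spectrum"): 2.45,     # indicative value
    ("Pu-239", "thermal spectrum"): 2.10,  # indicative value
    ("U-235", "thermal spectrum"): 2.07,   # indicative value
}

for (fuel, spectrum), eta in eta_values.items():
    br = breeding_ratio(eta)
    verdict = "can breed" if br > 1.0 else "cannot breed"
    print(f"{fuel} in a {spectrum}: BR ~ {br:.2f} -> {verdict}")
```

On these rough numbers, only plutonium in a fast spectrum comfortably clears the threshold of producing more fuel than it consumes – which is the basic reason breeder programmes were built around fast-neutron reactors.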

In the 1940s and 50s, the availability of uranium relative to the demand of weapons programmes was severely limited, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK’s nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, which was completed by 1959. Using this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a 250 MW design electrical output.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for a breeder reactor even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn’t need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950’s, and the UK’s first nuclear powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn’t use UK nuclear technology; instead it was powered by a reactor of US design, a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy, Admiral Hyman Rickover, was an abrasive and driven figure. Rickover ran the US Navy’s project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940’s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid sodium cooled, beryllium moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the better (and particularly, the more reliable) design, and it has subsequently been used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK’s effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK’s Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company to company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government to government agreement. The second was that it was a one-off – Rolls-Royce would have a license to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK’s naval reactor programme, as Rolls-Royce has developed further iterations of the naval PWR design, and the rest of its national nuclear enterprise.

Rickover’s rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower’s 1953 “Atoms for Peace” speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station based on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it launched Westinghouse’s position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. For a submarine, reactors need to be highly compact, self-contained, and able to go for long periods without being refuelled, all of which dictates the use of highly enriched – essentially weapons grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960’s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it’s a profound mistake to suppose that choosing between different technological approaches to nuclear power is simply a question of selecting from a menu of options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which have been neglected. It’s this that establishes the base of technological capability and underpinning knowledge – both codified and tacit – that can be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we’ve seen, the scope of that system depends not just on a nation’s ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just four years were remarkable – but they came at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive, clean-up of the nuclear waste left over at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect of this was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three day week; the sense of national chaos contributed to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be inexorably rising. For energy importers – and the UK was still importing around half its energy in the early 1970’s – security of energy supplies suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that has already been built – uranium enrichment plants, reprocessing facilities, and so on. The stock of knowledge acquired in past R&D programmes is shaped by the problems that emerged during those programmes, so starting work on a different class of reactors would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise developed in past programmes – whether in the understanding of reactor physics needed to run reactors efficiently, or in the construction and manufacturing techniques needed to build them cheaply and effectively – will be specific to the particular technologies that have been implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the first class of power reactor ever developed – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a different application – powering submarines – to the one it ended up being widely implemented for – generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, in the UK programmes there were worries about the safety of PWRs. These were particularly forcefully expressed by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was government Chief Scientific Advisor between 1971 and 1974. Perhaps, after Three Mile Island and Fukushima, one might wonder whether these worries were not entirely misplaced.

Later in the programme, while there may have been some agreement from its proponents that the early AGR building programme hadn’t gone well, there was a view that the teething problems had been more or less ironed out. I haven’t managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in parliament that Torness was on track to be delivered at a budget of £1.1 bn (1980 prices), which is not a great deal different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK’s nuclear establishment’s preference for the AGR over the PWR was from a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK’s nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear powered future. But that future was foreclosed by the final run-down of the UK’s nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK’s faltering post 2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but we also need to understand the historical context – and in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? On the contrary, projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running; they produce about 60 TWh of non-intermittent, low carbon power; in 2019 their output was equal in scale to the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme as it looks forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, this would correspond to a bit less than £14bn. A fairer comparison perhaps would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator to this is the net cost to the UK of energy price support following the gas price spike that the Ukraine invasion caused – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.
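As a back-of-envelope check, the two ways of scaling Henderson’s figure can be sketched as follows. The price ratio and GDP values here are rough illustrative assumptions of mine, not official statistics; only the logic of the two adjustments is the point:

```python
# Illustrative scaling of Henderson's 1977 upper estimate of the AGR loss.
# PRICE_RATIO and the GDP figures are assumed round numbers for illustration.

LOSS_1977 = 2.1        # £bn, Henderson's upper estimate (1977 prices)
PRICE_RATIO = 6.6      # assumed rise in the general price level, 1977 -> 2021
GDP_1977 = 150.0       # £bn, assumed UK nominal GDP, 1977
GDP_2021 = 2270.0      # £bn, assumed UK nominal GDP, 2021

# Adjustment 1: simple inflation uplift
inflation_adjusted = LOSS_1977 * PRICE_RATIO

# Adjustment 2: hold the loss constant as a share of national output
gdp_share = LOSS_1977 / GDP_1977
gdp_share_adjusted = gdp_share * GDP_2021

print(f"Inflation-adjusted: about £{inflation_adjusted:.0f}bn")
print(f"Share of 1977 GDP: {gdp_share:.1%}")
print(f"Same share of 2021 GDP: about £{gdp_share_adjusted:.0f}bn")
```

The GDP-share adjustment roughly doubles the inflation-adjusted figure, because the economy has grown in real terms since 1977 – which is why the fairer comparison lands near £30bn rather than £14bn.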

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power dramatically changed. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, Ukraine, then in the Soviet Union. There was a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, which led to an explosion and fire, which dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to these kinds of accident. These accidents led both to a significant loss of public trust in nuclear power, and a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, and at a cost of £2 billion (1987 prices). Henderson’s calculation of the cost of his counterfactual, where instead of building AGRs the UK had built light water reactors, was based on an estimate for the cost of light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
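Henderson’s counterfactual arithmetic can be roughly reconstructed. The 1973-to-1987 price ratio below is an assumed round figure of mine, used only to show how the roughly £800m expectation compares with the roughly £2bn outturn:

```python
# Rough reconstruction of the counterfactual cost for a 1.2 GW light water
# reactor. PRICE_RATIO_1973_1987 is an assumed round figure for illustration.

CAPACITY_KW = 1.2e6            # Sizewell B: 1.2 GW = 1.2 million kW
COST_PER_KW_1973 = 132.0       # £/kW, Henderson's light water reactor estimate
PRICE_RATIO_1973_1987 = 5.0    # assumed general price inflation over the period

cost_1973_m = CAPACITY_KW * COST_PER_KW_1973 / 1e6   # £m at 1973 prices
cost_1987_m = cost_1973_m * PRICE_RATIO_1973_1987    # £m at 1987 prices

ACTUAL_COST_1987_M = 2000.0    # £m, reported outturn for Sizewell B
overrun = ACTUAL_COST_1987_M / cost_1987_m

print(f"Counterfactual estimate: £{cost_1987_m:.0f}m (1987 prices)")
print(f"Actual outturn: £{ACTUAL_COST_1987_M:.0f}m, i.e. {overrun:.1f}x the estimate")
```

On these assumptions the expected cost comes out around £790m, so the real Sizewell B cost roughly two and a half times what Henderson’s counterfactual implied.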

Sizewell B was a first of a kind reactor, so one would expect subsequent reactors built to the same design to reduce in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
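A toy levelised-cost calculation makes the difference in exposure concrete. All the per-MWh figures here are invented for illustration; only the capital-heavy versus fuel-heavy split matters:

```python
# Illustrative cost structures: nuclear is capital-heavy, fossil is fuel-heavy.
# All £/MWh figures are invented for the sake of the comparison.

def cost_per_mwh(capital_charge: float, fuel_cost: float, other_opex: float = 5.0) -> float:
    """Total generating cost per MWh: annualised capital recovery + fuel + other opex."""
    return capital_charge + fuel_cost + other_opex

nuclear = dict(capital_charge=60.0, fuel_cost=6.0)   # assumed: mostly capital
gas     = dict(capital_charge=12.0, fuel_cost=45.0)  # assumed: mostly fuel

for name, plant in [("nuclear", nuclear), ("gas", gas)]:
    base = cost_per_mwh(**plant)
    # What happens to each plant's costs if the fuel price doubles?
    doubled = cost_per_mwh(plant["capital_charge"], plant["fuel_cost"] * 2)
    print(f"{name}: £{base:.0f}/MWh at base fuel price, "
          f"£{doubled:.0f}/MWh if fuel doubles (+{doubled / base - 1:.0%})")
```

On these made-up numbers, a doubling of fuel prices raises nuclear generating costs by under 10%, but fossil generating costs by over 70% – which is why the inexorable fossil fuel price rises of the mid-seventies made nuclear look so attractive.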

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade natural gas’s share of electricity generation capacity rose from 1.3% to nearly 30% by 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquified natural gas carried by tanker from huge fields in, for example, the Middle East. For a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike, which began in 2021 and was intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

The shifting sands of UK Government technology prioritisation

In the last decade, the UK has had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle. This 3500 word piece looks at this history of instability in UK innovation policy, and suggests some principles of consistency and clarity which might give us some more stability in the decade to come. A PDF version can be downloaded here.

Introduction

The problem of policy churn has been identified in a number of policy areas as a barrier to productivity growth in the UK, and science and innovation policy is no exception to this. The UK can’t do everything – it represents less than 3% of the world’s R&D resources, so it needs to specialise. But recent governments have not found it easy to decide where the UK should put its focus, and then stick to those decisions.

In 2012 the then Science Minister, David Willetts, launched an initiative which identified 8 priority technologies – the “Eight Great Technologies”. Willetts reflected on the fate of this initiative in a very interesting paper published last year. In short, while there has been consensus on the need for the UK to focus (with the exception of one short period), the areas of focus have been subject to frequent change.

Substantial changes in direction for technology policy have occurred despite the fact that we’ve had a single political party in power since 2010, with particular instability since 2015, in the period of Conservative majority government. Since 2012, the average life-span of an innovation policy has been about 2.5 years. Underneath the headline changes, it is true that there have been some continuities. But given the long time-scales needed to establish research programmes and to carry them through to their outcomes, this instability makes it difficult to implement any kind of coherent strategy.

Shifting Priorities: from “Eight Great Technologies”, through “Seven Technology Families”, to “Five Critical Technologies”

Table 1 summarises the various priority technologies identified in government policy since 2012, grouped in a way which best brings out the continuities.

The “Eight Great Technologies” were introduced in 2012 in a speech to the Royal Society by the then Chancellor of the Exchequer, George Osborne; a paper by David Willetts expanded on the rationale for the choices. The 2014 Science and Innovation Policy endorsed the “Eight Great Technologies”, with the addition of quantum technology, which, following an extensive lobbying exercise, had been added to the list in the 2013 Autumn Statement.

2015 brought a majority Conservative government, but continuity in the offices of Prime Minister and Chancellor of the Exchequer didn’t translate into continuity in innovation policy. The new Secretary of State in the Department of Business, Innovation and Skills was Sajid Javid, who brought to the post a Thatcherite distrust of anything that smacked of industrial strategy. The main victim of this world-view was the innovation agency Innovate UK, which was subjected to significant cut-backs, causing lasting damage.

This interlude didn’t last very long – after the Brexit referendum, David Cameron’s resignation and the premiership of Theresa May, there was an increased appetite for intervention in the economy, coupled with a growing consciousness and acknowledgement of the UK’s productivity problem. Greg Clark (a former Science Minister) took over at a renamed and expanded Department of Business, Energy and Industrial Strategy.

A White Paper outlining a “modern industrial strategy” was published in 2017. Although it nodded to the “Eight Great Technologies”, the focus shifted to four “missions”. Money had already been set aside in the 2016 Autumn Statement for an “Industrial Strategy Challenge Fund” which would support R&D in support of the priorities that emerged from the Industrial Strategy.

2019 saw another change of Prime Minister – and another election, which brought another Conservative government, with a much greater majority, and a rather interventionist manifesto that promised substantial increases in science funding, including a new agency modelled on the USA’s ARPA, and a promise to “focus our efforts on areas where the UK can generate a commanding lead in the industries of the future – life sciences, clean energy, space, design, computing, robotics and artificial intelligence.”

But the “modern industrial strategy” didn’t survive long into the new administration. The new Secretary of State was Kwasi Kwarteng, from the wing of the party with an ideological aversion to industrial strategy. In 2021, the industrial strategy was superseded by a Treasury document, the Plan for Growth, which, while placing strong emphasis on the importance of innovation, took a much more sector and technology agnostic approach to its support. The Plan for Growth was supported by a new Innovation Strategy, published later in 2021. This did identify a new set of priority technologies – “Seven Technology Families”.

2022 was the year of three Prime Ministers. Liz Truss’s hard-line free market position was certainly unfriendly to the concept of industrial strategy, but in her 44-day tenure as Prime Minister there was not enough time to make any significant change of direction in innovation policy.

Rishi Sunak’s Premiership brought another significant development, in the form of a machinery of government change reflecting the new Prime Minister’s enthusiasm for technology. A new department – the Department for Science, Innovation and Technology – meant that there was now a cabinet level Secretary of State focused on science. Another significant evolution in the profile of science and technology in government was the increasing prominence of national security as a driver of science policy.

This had begun in the 2021 Integrated Review, which was an attempt to set a single vision for the UK’s place in the world, covering security, defence, development and foreign policy. This elevated “Sustaining strategic advantage through science and technology” as one of four overarching principles. The disruptions to international supply chains during the covid pandemic, and the 2022 invasion of Ukraine by Russia and the subsequent large scale European land war, raised the issue of national security even higher up the political agenda.

A new department, and a modified set of priorities, produced a new 2023 strategy – the Science & Technology Framework – taking a systems approach to UK science & technology. This included a new set of technology priorities – the “Five critical technologies”.

Thus in a single decade, we’ve had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle.

Continuities and discontinuities

There are some continuities in substance in these technology priorities. Quantum technology appeared around 2013 as an addendum to the “Eight Great Technologies”, and survives into the current “Five Critical Technologies”. Issues of national security are a big driver here, as they are for much larger scale programmes in the USA and China.

In a couple of other areas, name changes conceal substantial continuity. What was called synthetic biology in 2012 is now encompassed in the field of engineering biology. Artificial Intelligence has come to high public prominence today, but it is a natural evolution of what used to be called big data, driven by technical advances in machine learning, more computer power, and bigger data sets.

Priorities in 2017 were defined as Grand Challenges, not Technologies. The language of challenges is taken up in the 2021 Innovation Strategy, which proposes a suite of Innovation Missions, distinct from the priority technology families, to address major societal challenges, in areas such as climate change, public health, and intractable diseases. The 2023 Science and Technology Framework, however, describes investments in three of the Five Critical Technologies, engineering biology, artificial intelligence, and quantum technologies, as “technology missions”, which seems to use the term in a somewhat different sense. There is room for more clarity about what is meant by a grand challenge, a mission, or a technology, which I will return to below.

Another distinction that is not always clear is between technologies and industry sectors. Both the Coalition and the May governments had industrial strategies that explicitly singled out particular sectors for support, including through support for innovation. These are listed in table 2. But it is arguable that at least two of the Eight Great Technologies – agritech, and space & satellites – would be better thought of as industry sectors rather than technologies.

Table 2 – industrial strategy sectors, as defined by the Coalition, and the May government.

The sector approach did underpin major applied public/private R&D programmes (such as the Aerospace Technology Institute, and the Advanced Propulsion Centre), and new R&D institutions, such as the Offshore Renewable Energy Catapult, designed to support specific industry sectors. Meanwhile, under the banner of Life Sciences, there is continued explicit support for the pharmaceutical and biotech industries, though here there is a lack of clarity about whether the primary goal is to promote the health of citizens through innovation support to the health and social care system, or to support pharma and biotech as high value, exporting, industrial sectors.

But two of the 2023 “five critical technologies” – semiconductors and future telecoms – are substantially new. Again, these look more like industrial sectors than technologies, and while no one can doubt their strategic importance in the global economy it isn’t obvious that the UK has a particularly strong comparative advantage in them, either in the size of the existing business base or the scale of the UK market (see my earlier discussion of the background to a UK Semiconductor Strategy).

The story of the last ten years, then, is a lack of consistency, not just in the priorities themselves, but in the conceptual basis for making the prioritisation – whether challenges or missions, industry sectors, or technologies.

From strategy to implementation

How does one turn from strategy to implementation: given a set of priority sectors, what needs to happen to turn these into research programmes, and then translate that research into commercial outcomes? An obvious point that nonetheless needs stressing is that this process has long lead times, which are not compatible with innovation strategies that have an average lifetime of 2.5 years.

To quote the recent Willetts review of the business case process for scientific programmes: “One senior official estimated the time from an original idea, arising in Research Councils, to execution of a programme at over two and a half years with 13 specific approvals required.” It would obviously be desirable to cut some of the bureaucracy that causes such delays, but it is striking that the time taken to design and initiate a research programme is of the same order as the average lifetime of an innovation strategy.

One data point here is the fate of the Industrial Strategy Challenge Fund. This was announced in the 2016 Autumn Statement, anticipating the 2017 Industrial Strategy White Paper, and was set up to support translational research programmes in support of that Industrial Strategy. As we have seen, this strategy was de-emphasised in 2019, and formally scrapped in 2021. Yet the research programmes set up to support it are still going, with money still in the budget to be spent in FY 24/25.

Of course, much worthwhile research will be being done in these programmes, so the money isn’t wasted; the problem is that such orphan programmes may not have any follow-up, as new programmes on different topics are designed to support the latest strategy to emerge from central government.

Sometimes the timescales are such that there isn’t even a chance to operationalise one strategy before another one arrives. The major public funder of R&D, UKRI, produced a five year strategy in March 2022, which was underpinned by the seven technology families. To operationalise this strategy, UKRI’s constituent research councils produced a set of delivery plans. These were published in September 2022, giving them a run of six months before the arrival of the 2023 Science and Innovation Framework, with its new set of critical technologies.

A natural response of funding agencies to this instability would be to decide themselves what best to do, and then do their best to retro-fit their ongoing programmes to new government strategies as they emerge. But this would defeat the point of making a strategy in the first place.

The next ten years

How can we do better over the next decade? We need to focus on consistency and clarity.

Consistency means having one strategy that we stick to. If we have this, investors can have confidence in the UK, research institutions can make informed decisions about their own investments, and individual researchers can plan their careers with more confidence.

Of course, the strategy should evolve, as unexpected developments in science and technology appear, and as the external environment changes. And it should build on what has gone before – for example, there is much of value in the systems approach of the 2023 Science and Innovation Framework.

There should be clarity on the basis for prioritisation. I think it is important to be much clearer about what we mean by Grand Challenges, Missions, Industry Sectors, and Technologies, and how they differ from each other. With sharper definitions, we might find it easier to establish clear criteria for prioritisation.

For me, Grand Challenges establish the conditions we are operating under. Some grand challenges might include:

  • How to move our energy economy to a zero-carbon basis by 2050;
  • How to create an affordable and humane health and social care system for an ageing population;
  • How to restore productivity growth to the UK economy and reduce the UK’s regional disparities in economic performance;
  • How to keep the UK safe and secure in an increasingly unstable and hostile world.

One would hope that there was a wide consensus about the scale of these problems, though not everyone will agree, nor will it always be obvious, what the best way of tackling them is.

Some might refer to these overarching issues as missions, using the term popularised by Mariana Mazzucato, but I would prefer to refer to a mission as something more specific, with a sense of timescale and a definite target. The 1960s Moonshot programme is often taken as an exemplar, though I think the more significant mission from that period was to create the ability for the USA to land a half tonne payload anywhere on the earth’s surface, with an accuracy of a few hundred meters or better.

The crucial feature of a mission, then, is that it is a targeted programme to achieve a strategic goal of the state, that requires both the integration and refinement of existing technologies and the development of new ones. Defining and prioritising missions requires working across the whole of government, to identify the problems that the state needs solved, to judge which of them are tractable enough, given reasonable technology foresight, to be worth attempting, and to prioritise among them.

The key questions for judging missions, then, are: how much does the government want this to happen, how feasible is it given foreseeable technology, how well equipped is the UK to deliver it given its industrial and research capabilities, and how affordable is it?

For supporting an industry sector, though, the questions are different. Sector support is part of an active industrial strategy, and given the tendency of industry sectors to cluster in space, this has a strong regional dimension. The goals of industrial strategy are largely economic – to raise the economic productivity of a region or the nation – so the criteria for selecting sectors should be based on their importance to the economy in terms of the fraction of GVA that they supply, and their potential to improve productivity.

In the past industrial strategy has often been driven by the need to create jobs, but our current problem is productivity, rather than unemployment, so I think the key criteria for selecting sectors should be their potential to create more value through the application of innovation and the development of skills in their workforces.

In addition to the economic dimension, there may also be a security aspect to the choice, if there is a reason to suppose that maintaining capability in a particular sector is vital to national security. The 2021 nationalisation of the steel forging company, Sheffield Forgemasters, to secure the capability to manufacture critical components for the Royal Navy’s submarine fleet, would have been unthinkable a decade ago.

Industrial strategy may involve support for innovation, for example through collaborative programmes of pre-competitive research. But it needs to be broader than just research and development; it may involve developing institutions and programmes for innovation diffusion, the harnessing of public procurement, the development of specialist skills provision, and at a regional level, the provision of infrastructure.

Finally, on what basis should we choose a technology to focus on? By a technology priority, we refer to an emerging capability arising from new science, that could be adopted by existing industry sectors, or could create new, disruptive sectors. Here an understanding of the international research landscape, and the UK’s part of that, is a crucial starting point. Even the newest technology, to be implemented, depends on existing industrial capability, so the shape of the existing UK industrial base does need to be taken into account. Finally, one shouldn’t underplay the importance of the vision of talented and driven individuals.

This isn’t to say that priorities for the whole of the science and innovation landscape need to be defined in terms of challenges, missions, and industry sectors.
A general framework for skills, finance, regulation, international collaboration, and infrastructure – as set out by the recent Science & Innovation Framework – needs to underlie more specific prioritisation. Maintaining the health of the basic disciplines is important to provide resilience in the face of the unanticipated, and it is important to be open to new developments and maintain agility in responding to them.

The starting point for a science and innovation strategy should be to realise that, very often, science and innovation shouldn’t be the starting point. Science policy is not the same as industrial strategy, even though it’s often used as a (much cheaper) substitute for it. For challenges and missions, defining the goals must come first; only then can one decide what advances in science and technology are needed to bring those in reach. Likewise, in a successful industrial strategy, close engagement with the existing capabilities of industry and the demands of the market is needed to define the areas of science and innovation that will support the development of a particular industry sector.

As I stressed in my earlier, comprehensive, survey of the UK Research and Development landscape, underlying any lasting strategy needs to be a settled, long-term view of what kind of country the UK aspires to be, what kind of economy it should have, and how it sees its place in the world.

Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try and make the small, successful, cities, bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research has resulted in an exemplary knowledge-based economy, where that investment in public R&D attracts private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus around the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy, but in the UK, big second-tier cities like Manchester, Birmingham, Leeds and Glasgow underperform economically and in effect drag the economy down.

Let’s do the thought experiment where we imagine Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since the GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how successful economically they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by just over 4.2%, to a bit more than £26,000 – still below the UK average.
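The back-of-envelope arithmetic in this thought experiment can be checked in a few lines, using the figures quoted above:

```python
# Rough check of the thought experiment (figures from the text, 2018 ONS data)
cam_pop, cam_gva = 126_000, 49_000      # Cambridge population; GVA per head (£)
gm_pop, gm_gva = 2_800_000, 25_000      # Greater Manchester population; GVA per head (£)

# Scenario 1: Cambridge doubles, with the extra 126,000 people moving from GM
# and producing Cambridge-level GVA instead of GM-level GVA
gain = cam_pop * (cam_gva - gm_gva)
print(f"national GVA gain: £{gain / 1e9:.1f} bn")        # ≈ £3.0 bn

# Scenario 2: leave the populations alone and raise GM's productivity
# by enough to produce the same £3 bn of extra output
uplift = gain / (gm_pop * gm_gva)
print(f"required GM productivity uplift: {uplift:.1%}")  # ≈ 4.3%
```

The asymmetry is just a matter of scale: 126,000 people at a £24,000 GVA premium is the same money as a four-and-a-bit per cent uplift spread across 2.8 million people.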

That’s why it matters to try to raise the productivity of big cities – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – for example, better public transport, more R&D and support for innovative businesses, provision of the skills that innovative businesses need, and action on poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people those businesses need to employ somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here are taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The data are for local authority areas on a workplace basis, but populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At another limit, one could ask what would happen if you doubled the population of the whole county of Cambridgeshire, population 650,000. As the GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result – this would increase GVA by £3.15 bn, the same as a 4.2% increase in GM’s productivity.

Of course, this poses another question – why doesn’t the prosperity of Cambridge city spill over very far into the rest of the county? Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
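A fit of this kind can be sketched with scipy. The series below is synthetic – made-up growth rates, level and noise, not the ONS data – and is there only to show the four-parameter model (two growth rates, the level at the break, and the break time, constrained to be continuous) and to confirm the fit can recover a known break point:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, g1, g2, level, t_break):
    """Two exponential growth regimes, continuous at the break point."""
    return np.where(t < t_break,
                    level * np.exp(g1 * (t - t_break)),
                    level * np.exp(g2 * (t - t_break)))

# Synthetic quarterly "productivity" series with a growth slowdown at 2005
rng = np.random.default_rng(0)
t = np.arange(1997.0, 2023.0, 0.25)
truth = two_exp(t, 0.022, 0.004, 100.0, 2005.0)   # 2.2% growth, then 0.4%
y = truth * (1 + 0.004 * rng.standard_normal(t.size))

# Non-linear least squares over all four parameters
popt, _ = curve_fit(two_exp, t, y, p0=[0.02, 0.005, 100.0, 2006.0])
g1, g2, level, t_break = popt
print(f"fitted break year ≈ {t_break:.1f}, growth {g1:.1%} → {g2:.1%}")
```

With the real ONS series one would also want to mask or down-weight the pandemic quarters, which dominate the residuals.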


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals to the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the Global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits with the year of the break point constrained, and calculated at each point the normalised chi-squared (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.


Normalised chi-squared – i.e. the sum of the squares of the differences between the productivity data and the two-exponential model – for fits where the time of break is constrained.

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.
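The constrained scan can be sketched the same way: hold the break year fixed at each point on a grid, refit the remaining three parameters, and record the normalised chi-squared. Again the series here is synthetic, not the ONS data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp_fixed(t_break):
    """Return the two-exponential model with the break year held fixed,
    leaving only three free parameters for the fit."""
    def f(t, g1, g2, level):
        return np.where(t < t_break,
                        level * np.exp(g1 * (t - t_break)),
                        level * np.exp(g2 * (t - t_break)))
    return f

# Synthetic quarterly series with a true break at 2005
rng = np.random.default_rng(1)
t = np.arange(1997.0, 2023.0, 0.25)
y = two_exp_fixed(2005.0)(t, 0.022, 0.004, 100.0) \
    * (1 + 0.004 * rng.standard_normal(t.size))

# Profile the goodness of fit over a grid of candidate break years
breaks = np.arange(2000.0, 2012.0, 0.5)
chi2 = []
for tb in breaks:
    popt, _ = curve_fit(two_exp_fixed(tb), t, y, p0=[0.02, 0.005, 100.0])
    resid = y - two_exp_fixed(tb)(t, *popt)
    chi2.append(np.sum(resid**2) / t.size)   # normalised chi-squared

best = breaks[int(np.argmin(chi2))]
print(f"best constrained break year: {best}")
```

Plotting `chi2` against `breaks` gives the smooth profile shown above, with its minimum at the best-fit break year.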

Can we quantify this further and attach a probability distribution to the year of break? I don’t think so using this approach – we have no reason to suppose that the deviations between model and fit are drawn from a Gaussian, which would be the assumption underlying traditional approaches to ascribing confidence limits to the fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those for further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £700m on an exascale computer. It should specify that processor design should be from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits supply of computing power from hardware improvements, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. But it’s worth comparing this with the precedent of the post-GFC bank nationalisations, which were at a similar scale.

3. The UK should not… (& almost certainly not possible in any case)

The UK should not attempt to create a UK based manufacturing capability in leading edge logic chips. This would need to be done by one of the 3 international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a multi-billion-dollar subsidy race.

Moreover, decades of neglect of semiconductor manufacturing probably means the UK doesn’t, in any case, have the skills to operate a leading edge fab.

4. The UK should not…

The UK should not attempt to create UK based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.
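The construction described in the caption – take a base-year GDP level at PPP, chain it forward with real growth rates, then divide by population – can be sketched as follows. All the numbers here are placeholders for illustration, not the IMF data:

```python
# Rebuild a real GDP per-capita PPP series from a base-year level and
# annual real growth rates. Every figure below is hypothetical.
base_year = 2019
gdp = {base_year: 2.4e12}                         # GDP at PPP, international $
real_growth = {2020: -0.10, 2021: 0.08}           # hypothetical real growth rates
population = {2019: 66.8e6, 2020: 67.0e6, 2021: 67.2e6}

# Chain the base-year level forward using the growth rates
for year in sorted(real_growth):
    gdp[year] = gdp[year - 1] * (1 + real_growth[year])

# Express the series per capita
gdp_per_head = {year: gdp[year] / population[year] for year in gdp}
```

Chaining from a single PPP base year keeps the series in constant international dollars, so cross-country comparisons reflect real growth rather than exchange-rate movements.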

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy, applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?

What should the UK do about semiconductors? Part 3: towards a UK Semiconductor Strategy

We are currently waiting for the UK government to publish its semiconductor strategy. As context for such a strategy, my previous two blogposts have summarised the global state of the industry:

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry

Here I consider what a realistic and useful UK semiconductor strategy might include.

To summarise the global context, the essential nations in advanced semiconductor manufacturing are Taiwan, Korea and the USA for making the chips themselves. In addition, Japan and the Netherlands are vital for crucial elements of the supply chain, particularly the equipment needed to make chips. China has been devoting significant resource to develop its own semiconductor industry – as a result, it is strong in all but the most advanced technologies for chip manufacture, but is vulnerable to being cut off from crucial elements of the supply chain.

The technology of chip manufacture is approaching maturity; the very rapid rates of increase in computing power we saw in the 1980s and 1990s, associated with a combination of Moore’s law and Dennard scaling, have significantly slowed. At the technology frontier we are seeing diminishing returns from the ever larger investments in capital and R&D that are needed to maintain advances. Further improvements in computer performance are likely to put more premium on custom designs for chips optimised for specific applications.

The UK’s position in semiconductor manufacturing is marginal in a global perspective, and not a relative strength in the context of the overall UK economy. There is actually a slightly stronger position in the wider supply chain than in chip manufacture itself, but the most significant strength is not in manufacture, but design, with ARM having a globally significant position and newcomers like Graphcore showing promise.

The history of the global semiconductor industry is a history of major government interventions coupled with very large private sector R&D spending, the latter driven by dramatically increasing sales. The UK essentially opted out of the race in the 1980s, since when Korea and Taiwan have established globally leading positions, and China has become a fast expanding new entrant to the industry.

The more difficult geopolitical environment has led to a return of industrial strategy on a huge scale, led by the USA’s CHIPS Act, which appropriates more than $50 billion over 5 years to reestablish its global leadership, including $39 billion on direct subsidies for manufacturing.

How should the UK respond? What I’m talking about here is the core business of manufacturing semiconductor devices and the surrounding supply chain, rather than information and communication technology more widely. First, though, let’s be clear about what the goals of a UK semiconductor strategy could be.

What is a semiconductor strategy for?

A national strategy for semiconductors could have multiple goals. The UK Science and Technology Framework identifies semiconductors as one of five critical technologies, judged against criteria including their foundational character, market potential, as well as their importance for other national priorities, including national security.

It might be helpful to distinguish two slightly different goals for the semiconductor strategy. The first is the question of security, in the broadest sense, prompted by the supply problems that emerged in the pandemic, and heightened by the growing realisation of the importance and vulnerability of Taiwan in the global semiconductor industry. Here the questions to ask are, what industries are at risk from further disruptions? What are the national security issues that would arise from interruptions in supply?

The government’s latest refresh of its integrated foreign and defence strategy promises to “ensure the UK has a clear route to assured access for each [critical technology], a strong voice in influencing their development and use internationally, a managed approach to supply chain risks, and a plan to protect our advantage as we build it.” It reasserts the “own, collaborate, access” framework, a model introduced in the previous Integrated Review.

This framework is a welcome recognition of the fact that the UK is a medium size country which can’t do everything, and in order to have access to the technology it needs, it must in some cases collaborate with friendly nations, and in others access technology through open global markets. But it’s worth asking what exactly is meant by “own”. This is defined in the Integrated Review thus: “Own: where the UK has leadership and ownership of new developments, from discovery to large-scale manufacture and commercialisation.”

In what sense does the nation ever own a technology? There are still a few cases where wholly state owned organisations retain both a practical and legal monopoly on a particular technology – nuclear weapons remain the most obvious example. But technologies are largely controlled by private sector companies with a complex, and often global, ownership structure. We might think that the technologies of semiconductor integrated circuit design that ARM developed are British, because the company is based in Cambridge. But it’s owned by a Japanese investment conglomerate, which has a great deal of latitude in what it does with it.

Perhaps it is more helpful to talk about control than ownership. The UK state retains a certain amount of control of technologies owned by companies with a substantial UK presence – it has been able in effect to block the purchase of the Newport Wafer Fab by the Chinese owned company Nexperia. But this new assertiveness is a very recent phenomenon; until very recently UK governments have been entirely relaxed about the acquisition of technology companies by overseas companies. Indeed, in 2016 ARM’s acquisition by Softbank was welcomed by the then PM, Theresa May, as being in the UK’s national interest, and a vote of confidence in post-Brexit Britain. The government has taken new powers to block acquisitions of companies through the National Security and Investment Act 2021, but this can only be done on grounds of national security.

The second goal of a semiconductor strategy is as part of an effort to overcome the UK’s persistent stagnation of economic productivity, to “generate innovation-led economic growth”, in the words of a recent Government response to a BEIS Select Committee report. As I have written about at length, the UK’s productivity problem is serious and persistent, so there’s certainly a need to identify and support high value sectors with the potential for growth. There is a regional dimension here, recognised in the government’s aspiration for the strategy to create “high paying jobs throughout the UK”. So it would be entirely appropriate for a strategy to support the existing cluster in the Southwest around Bristol and into South Wales, as well as to create new clusters where there are strengths in related industry sectors.

The economies of Taiwan and Korea have been transformed by their very effective deployment of an active industrial strategy to take advantage of an industry at a time of rapid technological progress and expanding markets. There are two questions for the UK now. Has the UK state (and the wider economic consensus in the country) overcome its ideological aversion to active industrial strategy on the East Asian model to intervene at the necessary scale? And, would such an intervention be timely, given where semiconductors are in the technology cycle? Or, to put it more provocatively, has the UK left it too late to capture a significant share of a technology that is approaching maturity?

What, realistically, can the UK do about semiconductors?

What interventions are possible for the UK government in devising a semiconductor strategy that addresses these two goals – of increasing the UK’s economic and military security by reducing its vulnerability to shocks in the global semiconductor supply chain, and of improving the UK’s economic performance by driving innovation-led economic growth? There is a menu of options, and what the government chooses will depend on its appetite for spending money, its willingness to take assets onto its balance sheet, and how much it is prepared to intervene in the market.

Could the UK establish the manufacturing of leading edge silicon chips? This seems implausible. This is the most sophisticated manufacturing process in the world, enormously capital intensive and drawing on a huge amount of proprietary and tacit knowledge. The only way it could happen is if one of the three companies currently at or close to the technology frontier – Samsung, Intel or TSMC – could be enticed to establish a manufacturing plant in the UK. What would be in it for them? The UK doesn’t have a big market, and it has a labour market that is high cost yet lacking in the necessary skills, so its only chance would be to advance large direct subsidies.

In any case, the attention of these companies is elsewhere. TSMC is building a new plant in Arizona, at a cost of $40 billion, while Samsung’s new plant in Texas is costing $25 billion, with the US government using some of the CHIPS Act money to subsidise these investments. Despite Intel’s well-reported difficulties, it is planning significant investment in Europe, supported by inducements from the EU and its member states under the EU Chips Act. Intel has committed €12 billion to expanding its operations in Ireland and €17 billion for a new fab in the existing semiconductor cluster in Saxony, Germany.

From the point of view of security of supply, it’s not just chips from the leading edge that are important; for many applications, in automobiles, defence and industrial machinery, legacy chips produced by processes that are no longer at the leading edge are sufficient. In principle establishing manufacturing facilities for such legacy chips would be less challenging than attempting to establish manufacturing at the leading edge. However, here, the economics of establishing new manufacturing facilities is very difficult. The cost of producing chips is dominated by the need to amortise the very large capital cost of setting up a fab, but a new plant would be in competition with long-established plants whose capital cost is already fully depreciated. These legacy chips are a commodity product.

So in practice, our security of supply can only be assured by reliance on friendly countries. It would have been helpful if the UK had been able to participate in the development of a European strategy to secure semiconductor supply chains, as Hermann Hauser has argued for. But what does the UK have to contribute to the creation of more resilient supply chains, localised in networks of reliably friendly countries?

The UK’s key asset is its position in chip design, with ARM as the anchor firm. But, as a firm based on intellectual property rather than the big capital investments of fabs and factories, ARM is potentially footloose, and as we’ve seen, it isn’t British by ownership. Rather it is owned and controlled by a Japanese conglomerate, which needs to sell it to raise money, and will seek to achieve the highest return from such a sale. After the proposed sale to Nvidia was blocked, the likely outcome now is a flotation on the US stock market, where the typical valuations of tech companies are higher than they are in the UK.

The UK state could seek to maintain control over ARM by the device of a “Golden Share”, as it currently does with Rolls-Royce and BAE Systems. I’m not sure what the mechanism for this would be – I would imagine that the only surefire way of doing this would be for the UK government to buy ARM outright from Softbank in an agreed sale, and then subsequently float it itself with the golden share in place. I don’t suppose this would be cheap – the agreed price for the thwarted Nvidia takeover was $66 billion. The UK government would then attempt to recoup as much of the purchase price as possible through a subsequent flotation, but the presence of the golden share would presumably reduce the market value of the remaining shares. Still, the UK government did spend £46 billion nationalising a bank.

What other levers does the UK have to consolidate its position in chip design? Intelligent use of government purchasing power is often cited as an ingredient of a successful industrial policy, and here there is an opportunity. The government made the welcome announcement in the Spring Budget that it would commit £900 m to build an exascale computer to create a sovereign capability in artificial intelligence. The procurement process for this facility should be designed to drive innovation in the design, by UK companies, of specialised processing units for AI with lower energy consumption.

A strong public R&D base is a necessary – but not sufficient – condition for an effective industrial strategy in any R&D intensive industry. As a matter of policy, the UK ran down its public sector research effort in mainstream silicon microelectronics, in response to the UK’s overall weak position in the industry. The Engineering and Physical Sciences Research Council announces on its website that: “In 2011, EPSRC decided not to support research aimed at miniaturisation of CMOS devices through gate-length reduction, as large non-UK industrial investment in this field meant such research would have been unlikely to have had significant national impact.” I don’t think this was – or is – an unreasonable policy given the realities of the UK’s global position. The UK maintains academic research strength in areas such as III-V semiconductors for optoelectronics, 2-d materials such as graphene, and organic semiconductors, to give a few examples.

Given the sophistication of state-of-the-art microelectronic manufacturing technology, for R&D to be relevant and translatable into commercial products it is important that open access facilities are available for the prototyping of research devices, with pilot-scale equipment to demonstrate manufacturability and facilitate scale-up. The UK doesn’t have research centres on the scale of Belgium’s IMEC, or Taiwan’s ITRI, and the issue is whether, given the shallowness of the UK’s industry base, there would be a customer base for such a facility. There are a number of university facilities focused on supporting academic researchers in various specialisms – at Glasgow, Manchester, Sheffield and Cambridge, to give some examples. Two centres are associated with the Catapult Network – the National Printable Electronics Centre in Sedgefield, and the Compound Semiconductor Catapult in South Wales.

This existing infrastructure is certainly insufficient to support an ambition to expand the UK’s semiconductor sector. But a decision to enhance this research infrastructure will need a careful and realistic evaluation of what niches the UK could realistically hope to build some presence in, building on areas of existing UK strength, and understanding the scale of investment elsewhere in the world.

To summarise, the UK must recognise that, in semiconductors, it is currently in a relatively weak position. For security of supply, the focus must be on staying close to like-minded countries such as our European neighbours. For the UK to develop its own semiconductor industry further, the emphasis must be on finding and developing particular niches where the UK does have some existing strength to build on, and where there is the prospect of rapidly growing markets. And the UK should look after its one genuine area of strength, in chip design.

Four lessons for industrial strategy

What should the UK do about semiconductors? Another tempting, but unhelpful, answer is “I wouldn’t start from here”. The UK’s current position reflects past choices, so to conclude, perhaps it’s worth drawing some more general lessons about industrial strategy from the history of semiconductors in the UK, and globally.

1. Basic research is not enough

The historian David Edgerton has observed that it is a long-running habit of the UK state to use research policy as a substitute for industrial strategy. Basic research is relatively cheap, compared to the expensive and time-consuming process of developing and implementing new products and processes. In the 1980s, it became conventional wisdom that governments should not get involved in applied research and development, which should be left to private industry, and, as I recently discussed at length, this has profoundly shaped the UK’s research and development landscape. But excellence in basic research has not produced a competitive semiconductor industry.

The last significant act of government support for the semiconductor industry in the UK was the Alvey programme of the 1980s. The programme was not without some technical successes, but it clearly failed in its strategic goal of keeping the UK semiconductor industry globally competitive. As the official evaluation of the programme concluded in 1991 [1]: “Support for pre-competitive R&D is a necessary but insufficient means for enhancing the competitive performance of the IT industry. The programme was not funded or equipped to deal with the different phases of the innovation process capable of being addressed by government technology policies. If enhanced competitiveness is the goal, either the funding or scope of action should be commensurate, or expectations should be lowered accordingly”.

But the right R&D institutions can be useful; the experience of both Japan and the USA shows the value of industry consortia – but this only works if there is already a strong, R&D intensive industry base. The creation of TSMC shows that it is possible to create a global giant from scratch, and this emphasises the role of translational research centres, like Taiwan’s ITRI and Belgium’s IMEC. But to be effective in creating new businesses, such centres need to have a focus on process improvement and manufacturing, as well as discovery science.

2. Big is beautiful in deep tech

The modern semiconductor industry is the epitome of “deep tech”: hard innovation, usually in the material or biological domains, demanding long-term R&D efforts and large capital investments. For all the romance of garage-based start-ups, in a business that demands up-front capital investments in the tens of billions of dollars and annual research budgets on the scale of medium-sized nation states, one needs serious, large-scale organisations to succeed.

The ownership and control of these organisations does matter. From a national point of view, it is important to have large firms anchored to the territory, whether by ownership or by significant capital investment that would be hard to undo, so ensuring the permanence of such firms is the legitimate business of government. Naturally, big firms often start as fast-growing small ones, and the UK should make more effort to hang on to companies as they scale up.

3. Getting the timing right in the technology cycle

Technological progress is uneven – at any given time, one industry may be undergoing very dramatic technological change, while other sectors are relatively stagnant. There may be a moment when the state of technology promises a period of rapid development, and there is a matching market with the potential for fast growth. Firms that have the capacity to invest and exploit such “windows of opportunity”, to use David Sainsbury’s phrase, will be able to generate and capture a high and rising level of added value.

The timing of interventions to support such firms is crucial, and undoubtedly not easy, but history shows us that nations that are able to offer significant levels of strategic support at the right stage can see a material impact on their economic performance. The recent rapid economic growth of Korea and Taiwan is a case in point. These countries have gone beyond catch-up economic growth, to equal or surpass the UK, reflecting their reaching the technological frontier in high value sectors such as semiconductors. Of course, in these countries, there has been a much closer entanglement between the state and firms than UK policy makers are comfortable with.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF: GDP at PPP in international dollars was taken for the base year of 2019, a time series was constructed using IMF real GDP growth data, and the resulting series expressed per capita.
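The series construction described in the caption – anchoring GDP at PPP in a 2019 base year, chaining IMF real growth rates forwards and backwards from it, and dividing by population – can be sketched as follows. All the numbers below are illustrative placeholders, not IMF data:

```python
# Sketch of the caption's method: anchor GDP at PPP in a base year,
# chain real growth rates forwards and backwards, then divide by population.
# Growth and population figures here are placeholders, not IMF data.

def chain_gdp(base_year, base_gdp_ppp, growth):
    """Return {year: GDP} chained from a base-year level.
    growth[y] is the real growth rate from year y-1 to year y, as a fraction."""
    series = {base_year: base_gdp_ppp}
    years = sorted(growth)
    # chain forwards from the base year
    for y in [y for y in years if y > base_year]:
        series[y] = series[y - 1] * (1 + growth[y])
    # chain backwards from the base year
    for y in [y for y in reversed(years) if y < base_year]:
        series[y] = series[y + 1] / (1 + growth[y + 1])
    return series

# Illustrative inputs: base-year GDP of 1000 (billions of international $)
growth = {2018: 0.02, 2019: 0.015, 2020: -0.05, 2021: 0.06}
gdp = chain_gdp(2019, 1000.0, growth)

population = {2018: 60.2, 2019: 60.4, 2020: 60.5, 2021: 60.6}  # millions
per_capita = {y: gdp[y] / population[y] for y in gdp}
```

The design choice in the caption’s method is that cross-country comparability comes entirely from the single PPP anchor year, while the shape of each country’s curve comes from its own real growth rates.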

4. If you don’t choose sectors, sectors will choose you

In the UK, so-called “vertical” industrial strategies, where explicit choices are made to support specific sectors, have long been out of favour. Making choices between sectors is difficult, and being perceived to have made the wrong choices damages the reputation of individuals and institutions. But even in the absence of an explicitly articulated vertical industrial strategy, policy choices will have the effect of favouring one sector over another.

In the 1990s and 2000s, the UK chose oil and gas and financial services over semiconductors, or indeed advanced manufacturing more generally. Our current economic situation reflects, in part, that choice.

[1] Evaluation of the Alvey Programme for Advanced Information Technology. Ken Guy, Luke Georghiou, et al. HMSO for DTI and SERC (1991)