Taking Anglofuturism seriously

Regular readers of this blog won’t need reminding that the UK is in a stagnant bind, with economic measures like productivity and GDP per person flatlining since the global financial crisis (or earlier). The consequences are felt well beyond these arid economic aggregates: wage growth has slowed, successive governments struggle to fund acceptable public services, and there’s a palpable sense of sourness and malaise in our politics.

One interesting response to this has been the emergence of a loose constellation of commentators, activists and pressure groups: a techno-optimist movement calling for more houses to be built, for the barriers apparently stopping the country building infrastructure to be swept away, and for cheaper and more abundant energy.

Britain Remade wants to “reform the planning process to deliver more clean energy projects, transport infrastructure, and new good quality housing at speed”, while the YIMBY Alliance, as keen subscribers to the “Housing theory of everything”, focuses on the need to build more houses. A widely discussed paper, Foundations: Why Britain has stagnated, focuses on housing, infrastructure, and the cost of energy. Rian Chad Whitton likewise focuses on high energy prices, connecting them with the decline of the UK’s manufacturing base. UKDayOne focuses on science, innovation and technology as the motor for UK growth and prosperity, particularly emphasising AI and nuclear power.

I’m going to follow Tom Ough and Calum Drysdale in gathering these strands together under the banner “Anglofuturism”. Their eponymous (and interesting) podcast embraces a cheerful and optimistic version of this vision, with its whimsical AI-generated illustrations of flying pubs and thatched space stations.

But I believe the term (in its current manifestation, at least) was coined by the journalist Aris Roussinos, in rather darker hues: a call for rebuilt state capacity in a definitively post-liberal world, a vision that owed less to Adam Smith and more to Thomas Hobbes, which some readers might think more appropriate to the deteriorating geopolitical situation we face.

I don’t think there is an entirely consistent underlying political ideology here, but I think it’s fair to say that there’s a common centre of gravity on the centre right. This isn’t the place to analyse political antecedents or implications, and I’m not the right person to do that, but I do want to make some remarks about this emerging movement.

There is much in this agenda that I applaud and agree with. The UK needs to get back to productivity growth, and there is no fundamental reason why that shouldn’t happen. We haven’t reached some final technological barrier – far from it. And I think there’s a profoundly humanistic perspective at work here – people should be able to enjoy the fruits of prosperity.

Of course, there is an opposing argument: that continued economic growth is inconsistent with planetary limits. It’s clear that we need to move to a new model of economic growth that doesn’t impose externalities on the global environment, and in particular we need to shift our energy economy to one that doesn’t depend on fossil fuels. But to embrace “degrowth” is in my view both politically infeasible and, if sufficient will and resources are applied, technologically unnecessary. To put it another way, the last 15 years in the UK have been an experiment in degrowth, and the results have been ugly.

There’s an undercurrent of generational justice here too. The perception that young people in the UK can’t look forward to the same lifestyle as their parents is profoundly depressing. Nowhere is this more obvious than in the unaffordability of housing.

Where I think these analyses are less convincing is in identifying the origins of our current problems. In particular, I think an explanation of our current productivity stagnation needs to account for its timing. It’s certainly convincing to argue, as these authors do, that we would be better off if the UK had built more infrastructure over the last few decades, but I don’t think they really convince in talking about what conditions would have produced that outcome. Anglofuturism, in all its varieties, could be accused of willing worthy ends, without really specifying the means.


Labour productivity in the UK since the Industrial Revolution. Data from the Bank of England A millennium of macroeconomic data dataset, plot & fits by the author.

The Foundations paper puts a lot of blame on the 1947 Town and Country Planning Act – and the wider Attlee settlement. But I don’t think this makes sense in terms of the timing. As my figure shows, the period of fastest productivity growth in the entire history of the UK took place between 1948 and 1972. In fact, Roussinos harks back to this period, referring to “the optimism and high modernism of the post-war era, a vanished world of frenetic housebuilding and technological innovation where British scientific research could lead the world, and produce higher living standards through its fusion with well-paid, high-skilled labour.”


Labour productivity in the UK since 1970. ONS data, fit by the author. For the rationale for putting the break around 2005, see When did the UK’s productivity slowdown begin?

What needs to be explained is that the current slowdown began in the mid-2000s. There is some overlap with a developing consensus view from mainstream economics that the immediate problem has been a lack of investment in the UK economy (see e.g. The Productivity Agenda). This includes public investment in hard infrastructure, private investment in capital goods, and investment in intangibles like R&D. In my own work I’ve emphasised the significant reduction in the R&D intensity of the UK economy between 1980 and 2005, and given the generally technocentric flavour of the Anglofuturists, I’m surprised that this aspect isn’t more prominent in their arguments.


From Research, innovation and the R&D landscape, by R.A.L. Jones, in The Productivity Agenda.

Even if one agrees that investment levels have been too low, there isn’t really a consensus about the ultimate cause of the lack of investment. One common thread is a sense that building infrastructure in the UK has become too expensive because of excessive regulation. In one sense, this is a reflection of the fact that the comparative advantage of the UK is to be found in professional services. One can celebrate the fact that the UK has become a “services superpower”, but the downside was caustically expressed in this comment from Dan Davies.

Giles Wilkes has discussed what he terms the “crud economy” at a bit more length. Economic actors respond to incentives, and this doesn’t always direct activity towards where we need it. As Giles puts it: “We need vastly more clean energy, actual hard defence equipment for handling nasty rogue nations, the soldiers to use it, and much more numerous and productive care and health workers for the ageing population. Mitigating the dangerous effects of climate change is going to take real physical capital and effort. These are actual hard problems – and being able to produce more streaming videos, intelligent AI-related chat, or brilliant legal ‘solutions’ to financial market problems is not exchangeable for the assets we need for the real problems. Just because the lawyer’s fee is expressed in dollars, and so is the cost of transforming the US electricity system, doesn’t mean the two can get traded together.”

One thing all branches of Anglofuturism agree on is the need for abundant, cheap energy, and on the bad economic effects that current high industrial energy prices are causing. This clearly arouses strong feelings, to judge by the violent online reaction to Tom Forth’s comments on the subject, entirely reasonable from a classical market liberal perspective, arguing that, while the situation was not good, it was “a smaller problem and of a lower priority than many other restrictions on growth in Britain.”

I agree that it would be better if energy prices in the UK were lower, but I think it is important to understand how this situation has arisen. High industrial energy prices now are causing serious problems for what industry remains in the UK, but I don’t think they can be blamed for the UK’s greater degree of deindustrialisation than its neighbours: that deindustrialisation took place at a time when energy prices were low and falling.

The decision the UK government made in the 1980s was that energy was just another commodity whose supply could be left to the market. As it happened, this coincided with a moment in time when the UK switched from being a net importer of energy, to being a net exporter, having found abundant supplies of natural gas and oil in the North Sea. North Sea oil and gas production peaked around 2000, and the country switched to being an energy importer again in 2004. The UK’s relative success in decarbonising its electricity supply initially relied on an early switch from coal to gas; even after the more recent expansion of offshore wind the price of electricity is set by the internationally traded price of gas. This was fine until it wasn’t – in the 2022 gas price spike.
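The point about gas setting the electricity price can be illustrated with a toy merit-order model; the plant names, capacities and costs below are entirely hypothetical, chosen only to show the mechanism:

```python
# A toy merit-order sketch of why gas can set the wholesale electricity
# price even when most output comes from cheaper sources.
# All figures are hypothetical.
plants = [
    ("wind",    15, 5),    # (name, capacity in GW, marginal cost in £/MWh)
    ("nuclear",  5, 10),
    ("gas",     20, 120),
]

def clearing_price(demand_gw):
    """Dispatch plants cheapest-first; the marginal plant sets the price."""
    supplied = 0
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        supplied += capacity
        if supplied >= demand_gw:
            return cost
    raise ValueError("demand exceeds total capacity")

clearing_price(12)   # wind alone suffices: price 5
clearing_price(25)   # gas is the marginal plant: price 120
```

On a windy, low-demand day the cheap plant is marginal; on most days the gas plant is needed, so every generator is paid the gas-determined price.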

If our problem is that we rely on imported gas, whose fluctuating price is beyond our control, together with offshore wind, which is necessarily intermittent (as well as being generated a long way from where it is needed, connected by an inadequate grid), would it not be better if a much higher proportion of our energy was generated by nuclear fission?

An enthusiasm for nuclear power is a common thread running through all strands of Anglofuturism, and it’s one with which I have much sympathy. For all the progress there’s been in renewable energy, in 2022, 77.8% of our energy still came from oil, gas and coal, and I think it’s going to be difficult to have a fossil-fuel-free energy economy which doesn’t depend on some nuclear power to provide firm energy. I deeply regret the failure of the nuclear new build programme of recent governments – of the 18 GW of new generating capacity planned in 2014, only 3.2 GW is even under construction.

But I think it is important, and salutary, to understand why this failure has occurred. My recent blog posts go into the story of the UK’s civil nuclear power programme in some detail. There are ways in which the regulatory and planning framework for civil nuclear could be streamlined, but the fundamental problem with Hinkley C wasn’t the fish disco. It was the fact that the UK government wanted the Chinese state to pay for it, and the French state to build it, as the UK state no longer had the will or capacity to do either.

The UK’s own civil nuclear industry was killed in the 1990s; in an environment of high interest rates and low natural gas prices, and an ideological commitment to leave energy supply to the market, there was no place for it. I do think the UK should recreate its capacity to build nuclear power stations, including the small modular reactors that are currently attracting much attention, but I don’t think this will happen without substantial state intervention.

I agree with the Anglofuturists that we shouldn’t resign ourselves to our current economic failures. I think we need to ask ourselves what has gone wrong with the variety of capitalism that we have, that has led us to this stagnation. It’s a problem that’s not unique to the UK, but which seems to have affected the UK more seriously than most other developed countries. The slowdown seems to have begun in the 2000s, crystallising in full at the Global Financial Crisis.

This timing points to changes in the nature of capitalism and political economy that took hold in the decades after 1980, with the ascendancy of market liberalism, the doctrine of shareholder value in corporate management, and an enthusiasm for outsourcing government functions to private contractors, no matter how central to the core purposes of the state they might appear to be. In the UK, even the Atomic Weapons Establishment has been run by private contractors since 1989, with the government only taking ownership and control back from Serco in 2020.

We have a new form of globalisation that followed from abolishing capital controls, together with a conviction that one doesn’t need to worry about the balance of payments, even though the persistent trade deficits the UK has run since then have meant that ownership and control of national assets have moved overseas. We have a financial system that seems unable to direct resources to the activities that lead to long-term growth. We have a hollowed-out state that now lacks the capacity even to be an informed and effective contractor for services.

I agree with the Anglofuturists that our current stagnation isn’t inevitable, and I applaud their lack of defeatism. It doesn’t have to be this way – but to get beyond our current malaise, I think we need to ask some deeper questions about how our economy is run.

Revisiting the UK’s nuclear AGR programme: 3. Where next with the UK’s nuclear new build programme? On rebuilding lost capabilities, and learning wider lessons

This is the third and concluding part of a series of blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects.

In the second post, “What led to the AGR decision? On nuclear physics – and nuclear weapons” I turned to consider the technical and political issues that led to this decision.

In this post, I bring the story up to date, discussing why post-2010 plans for new nuclear build have largely failed, and look to the future, with new ambitions for small modular reactors – and, ironically, a potential return to high temperature, gas cooled reactors that represent an evolution of the AGR.

Into the 2010’s and beyond – the UK’s failed Nuclear New Build programme

In the early 2010’s, the Coalition Government developed an ambitious plan to replace the UK’s ageing nuclear fleet with new light water reactors to be built on existing nuclear sites, involving four different designs from four different vendors. The French state nuclear company, EDF, was to build two of its next generation pressurised water reactors – the European Pressurised Water Reactor (EPR) – at Hinkley, and another two at Sizewell. The Chinese state nuclear corporation, CGN, would install two (or possibly three) of its own PWR designs at Bradwell. At Moorside, in Cumbria, Toshiba/Westinghouse would build three of its AP1000 PWRs. At Wylfa, in North Wales, Hitachi would build two Advanced Boiling Water Reactors, with another two ABWRs to be built at Oldbury. In total this would give 18 GW of new nuclear capacity, producing roughly double the output of the AGR fleet. In 2013, this programme formally got underway, with the announcement of a deal with EDF to deliver the first of these new plants, at Hinkley Point.

This programme has largely failed. A decade on, only one project is under construction – Hinkley Point C, where the best estimate for when the two EPRs will come into service is 2030. The cost for this 3.2 GW capacity is now estimated as being between £31 bn and £34 bn, in 2015 prices, compared to an original estimate of £20 bn. To put this into context, the last nuclear power station built in the UK, the PWR at Sizewell B, cost about £2 bn, in 1987 prices for a 1.2 GW unit. Scaling this to the 3.2 GW capacity of the Hinkley Point project, and accounting for inflation, this would correspond to about £12 bn in 2015 prices. Where has this near-threefold increase in nuclear construction cost since Sizewell B come from? There are essentially two broad classes of reasons.
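The scaling in the paragraph above is simple arithmetic; the inflation factor below is an assumed round number for 1987 to 2015 prices, so treat the output as a sanity check rather than an exact figure:

```python
# Sanity-check the inflation-and-capacity scaling of the Sizewell B cost.
sizewell_b_cost = 2.0            # £bn, 1987 prices, for a 1.2 GW unit
sizewell_b_gw = 1.2
hinkley_c_gw = 3.2
inflation_1987_to_2015 = 2.25    # assumed round factor, not from the text

baseline_2015 = sizewell_b_cost * (hinkley_c_gw / sizewell_b_gw) * inflation_1987_to_2015
# baseline_2015 ≈ 12.0 (£bn, 2015 prices)

hinkley_estimate = (31 + 34) / 2   # £bn, midpoint of the current estimate
cost_ratio = hinkley_estimate / baseline_2015
# cost_ratio ≈ 2.7, i.e. a near-threefold real increase
```
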

Firstly, more recent designs of pressurised water reactor, such as the EPR, or the Westinghouse AP1000, have a number of new safety features, to mitigate some of the fundamental weaknesses of the pressurised water reactor design, particularly its vulnerability to loss of coolant accidents. These new features include methods for passive cooling in the case of loss of power to the main cooling system, a “core catcher” system which contains molten core material in the event of a meltdown, and more robust containment systems, designed to resist, for example, an aircraft crashing into the reactor building. These new features all add unavoidable extra cost.

In addition to these unavoidable cost increases, some of the increase in construction cost must reflect a substantial real reduction in the UK’s ability to deliver a big complex project like a nuclear power station. One would hope that, if subsequent power stations are built to the same design, with construction teams kept in place, these costs could be reduced through experience, the development of functional supply chains, and the creation of a skilled workforce.

A sister plant to Hinkley Point, at Sizewell, has received a nuclear site licence, but awaits a final investment decision. The capital for Hinkley Point C was provided entirely by its investors, which included the French state-owned energy company EDF and the Chinese state nuclear company CGN, in return for a guarantee of a fixed price for the electricity the plant generates over its first 35 years of operation. Thus the cost of the budget overrun is borne by the investors, not the UK government or UK consumers. The deal was constructed in a way that was very favourable to the investors, so there was some cushion there, but the experience of Hinkley Point C means that it’s now impossible to attract investors to build further power stations on these terms. The financing for Sizewell C, if it goes ahead, will involve more direct UK state investment, as well as payments to the company building it while the reactor is under construction. These up-front payments will be added to electricity consumers’ bills through the so-called “Regulated Asset Base” mechanism, reducing the cost to the company of borrowing money during the long construction period.
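A toy calculation shows why paying during construction matters so much for a project with a long build time; the financing rates and spending profile below are invented for illustration and bear no relation to the actual Sizewell C terms:

```python
# Toy model: total cost of a plant including interest accrued during the
# build. Being paid during construction (as under a Regulated Asset Base
# model) supports a lower financing rate; financing the whole build at
# risk demands a higher one. All figures are hypothetical.
def financed_cost(capex_per_year, years, rate):
    total = 0.0
    for year in range(years):
        # capital spent in `year` accrues interest until the end of the build
        total += capex_per_year * (1 + rate) ** (years - year)
    return total

at_risk = financed_cost(2.0, 10, 0.09)    # £2bn/yr for 10 years at 9%
rab_style = financed_cost(2.0, 10, 0.03)  # the same build financed at 3%
# roughly £33bn vs £24bn, on £20bn of raw capex
```

The gap between the two totals is pure financing cost, which is why the cost of capital dominates the economics of nuclear construction.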

So, sixteen years on from the in-principle commitment to return to nuclear power, no plant has yet been completed, and the best that can be hoped for from the plan to build 18 GW of new capacity is 6.4 GW, from Hinkley C and Sizewell C, if the latter goes ahead.

Why has the UK’s nuclear new build programme failed so badly? The original plans were misconceived on many levels. The plan to involve the Chinese state so closely seemed naive at the time, and given the changed geopolitical environment since then, it now seems almost unbelievable that a UK government could countenance it. The idea of having multiple competing vendors and designs makes it much more difficult to drive costs down through “learning by doing”; the most successful build-outs of nuclear power – in France and Korea – have relied on “fleet build” – sequential installations of standardised designs. And the reliance on overseas investors and overseas designs meant that the UK had no control over the supply chain, meaning that little of the high value work involved in the programme would benefit the UK economy.

At the root of this failure were the UK government’s unwise ideological commitments to privatised energy markets, making it resist any subsidies for nuclear power, and refuse to issue new government debt to pay for infrastructure. The legacy of the run-down of the UK’s civil nuclear programme in the 1990’s was a lack of significant UK government expertise in the area, making it an uninformed and naive customer, and a lack of an industry in the UK in a position to benefit from the expenditure.

Could there be another way? Since 2014, the UK government has expressed interest in the idea of small modular reactors (SMRs), and has given some support for design studies, with the UK company Rolls-Royce setting up a unit to commercialise them.

Back to the future – hopes for light water small modular reactors

There’s been a seemingly inexorable trend towards larger and larger pressurised water reactors – and, as we have seen at Hinkley C, that trend of increasing size has been accompanied by a dismal record of cost overruns and construction delays. There are, in principle, economies of scale in operating costs to be gained with very large units. But, as I’ve stressed above, the economics of nuclear power is dominated by the upfront capital cost of building reactors in the first place. If one, instead, built multiple smaller reactors, small enough for much of the construction to take place in factories, where manufacturing processes could be optimised over multiple units, one might hope to drive the costs down through “learning by doing”. This is the logic behind the enthusiasm for small modular reactors.
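The “learning by doing” logic is often formalised as Wright’s law: cost falls by a fixed fraction each time cumulative production doubles. A minimal sketch, with a 10% learning rate assumed purely for illustration (it is not a claim about reactors):

```python
import math

# Wright's-law sketch of "learning by doing": cost falls by a fixed
# fraction with each doubling of cumulative production. The 10% learning
# rate is an illustrative assumption.
def unit_cost(first_unit_cost, n, learning_rate=0.10):
    """Cost of the n-th unit produced."""
    b = math.log2(1 - learning_rate)
    return first_unit_cost * n ** b

costs = [round(unit_cost(100.0, n), 1) for n in (1, 2, 4, 8, 16)]
# each doubling cuts cost by 10%: 100.0, 90.0, 81.0, 72.9, 65.6
```

The catch, of course, is that the curve only bends if many identical units are actually built, which is precisely what a one-off megaproject never delivers.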

There’s nothing new about small pressurised water reactors – by the standards of today’s power reactors, Admiral Rickover’s submarine reactors were tiny. Significantly, as I discussed above, the only remaining UK capability in nuclear reactors is to be found in Rolls-Royce, the company that makes reactors for the Royal Navy’s submarines. But the design criteria for a submarine reactor and for a power reactor are very different – while the experience of designing and manufacturing submarine reactors will have some general value in the civil sector, the design of a civil small modular reactor will need to be very different to a submarine reactor.

Rolls-Royce is one of five companies currently bidding for a role in a UK civil SMR programme. Its design has passed the second of three stages in the process of gaining regulatory approval for the UK market. The Rolls-Royce proposal is for a 470 MWe pressurised water reactor, using conventional PWR fuel of low enrichment (in contrast to the very highly enriched fuel used in submarine reactors). The design is entirely new, though technically rather conservative.

A power output of 470 MWe is not, in fact, that small – this is very much in the range of reactor powers of civil PWRs that were being built in the early 1970’s – compare, for example, the VVER-440 reactors built by the USSR and widely installed and operating in the former USSR and Eastern Europe. The Rolls-Royce design, in contrast to the VVER-440s, does include the safety features to be found in the larger, recent PWR designs – much more robust containment, a “core catcher”, and passive cooling to cope with a loss of coolant accident – and it will incorporate much more modern materials, control systems, and manufacturing technologies.

There have been suggestions that SMRs could be sited more widely across the country, in towns and cities outside established nuclear sites. This isn’t the plan for any UK SMRs – they are in any case too large for this to make sense. Instead, the idea is to have multiple installations on existing licensed nuclear sites, such as Wylfa and Oldbury. The Rolls-Royce design is currently undergoing the final stage of its generic design approval.

The other entrants to the SMR competition are two well-established vendors of large light water reactors – Westinghouse and GE-Hitachi – and two more recent entrants into the market, from the USA – Holtec and NuScale. Since none of these companies has actually delivered an SMR, the decision will have to be made on judgements about capability: experience shows that there can be no certainty about cost until one has been built. But, in making the decision, the UK government will need to decide how strongly to weight the need to rebuild UK industrial capacity and nuclear expertise against pure “value for money” criteria.

The Next Generation? Advanced Modular Reactors

The light water SMR represents an incremental update of a technology developed in the 1950’s, at a scale that was being widely deployed in the 1970’s. Is it possible to break out from the technological lock-in of the light water reactor, to explore more of the very wide design space of possible power reactors? That is the thinking behind the idea of developing an Advanced Modular Reactor – keeping the principle of relatively small scale and factory based modular construction, but using fundamentally different reactor designs, with different combinations of moderator and coolant to achieve technical advantage over the light water reactor. In particular, it would be very attractive to have a reactor that ran at a significantly higher temperature than a light water reactor. A high temperature reactor would have higher conversion efficiency to electrical power, and in addition it might be possible to use the heat directly to drive industrial processes – for example making hydrogen as an energy vector and as a non-oil based feedstock for the petrochemical industry, including to make synthetic hydrocarbons for zero carbon aviation fuel.
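The efficiency argument for higher temperatures can be bounded with the Carnot formula. A minimal sketch, with illustrative coolant outlet temperatures (real plant cycles achieve well below these thermodynamic limits):

```python
# Carnot upper bounds on heat-to-electricity conversion efficiency, to
# illustrate why higher reactor outlet temperatures pay off. The
# temperatures here are illustrative, not design figures.
def carnot_limit(t_hot_c, t_cold_c=30.0):
    """Carnot limit for given hot and cold temperatures in Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold / t_hot

carnot_limit(300)   # light-water-reactor-scale outlet: limit ≈ 0.47
carnot_limit(750)   # high temperature gas reactor: limit ≈ 0.70
```
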

We are also seeing a resurgence of interest in reactors using unmoderated (fast) neutrons. This is partly motivated by the possibility of breeding fissile material, thus increasing the efficiency of fuel use, and partly by the fact that fast neutrons can induce fission in the higher actinides that are particularly problematic as contaminants of used nuclear fuel. There’s an attractive symmetry in the idea of using the UK’s very large stock of civil plutonium to “burn up” nuclear waste.

The UK government commissioned a technical assessment of potential candidates for an advanced modular reactor. This considered fast reactors cooled by liquid metals – both sodium and lead – as well as a gas-cooled fast reactor. Another intriguing possibility that has generated recent interest is the molten salt reactor, where the fissile material is dissolved in fluoride salts; here the molten salt acts both as fuel and coolant. Reactor designs using a thermal neutron spectrum include an evolution of the boiling water reactor which uses water in the supercritical state. All of these designs have potential advantages, but the judgement of the study was that, of these potential designs, only the sodium fast reactor was potentially close enough to deployment to be worth considering.

However, the study made a clear recommendation in favour of a high temperature, gas cooled thermal neutron reactor. Here, the moderator is graphite and the coolant is helium, as in the Advanced Gas Cooled Reactors. The main difference with AGRs is that, in order to operate at higher temperatures, the fuel is presented in spherical particles around a millimetre in diameter, in which uranium oxide is coated with graphite and encapsulated in a high temperature resistant refractory ceramic such as silicon carbide. There is considerable world-wide experience in making this so-called tristructural isotropic (TRISO) fuel, which is able to withstand operating temperatures in the 700 – 850 °C range. Modifications of these fuel particles – for example using zirconium carbide as the outer layer – could permit operation at even higher temperatures, high enough to split water into hydrogen and oxygen through purely thermochemical processes. But this would need further research.

A Chronicle of Wasted Time

What’s striking about many of the proposals for an advanced modular reactor is that the concepts are not new. For example, work on sodium cooled fast reactors began in the UK in the 1950s, with a full-scale prototype being commissioned in 1974. Lead cooled reactors were built in both the USA and the USSR. Molten salt reactors perhaps represent the most radical design departure, but even here, a working prototype was developed at Oak Ridge National Laboratory, USA, in the 1960s.

One of the reasons for the UK AMR Technical Assessment favouring the High Temperature Gas Reactor is that it builds on the experience of the UK in running a fleet of gas cooled, graphite moderator reactors – the AGRs. In fact, the UK, as part of an international collaboration, operated a prototype high temperature gas reactor between 1964 and 1976 – DRAGON. It was in this project that the TRISO fuel concept was developed, which has since been used in operational high temperature gas reactors in the USA, Germany, Japan and China.

At the peak of the 1970’s energy crisis, from 1974 to 1976, construction began on more than a hundred nuclear reactors across the world. Enthusiasm for nuclear power dwindled throughout the 1980’s, suppressed on the one hand by the experience of nuclear accidents at Three Mile Island and Chernobyl, and on the other by an era of cheap and abundant fossil fuels. In the three years from 1994 to 1996, just three new reactors were begun worldwide. In this climate, there was no appetite for new approaches to nuclear power generation, technology development stagnated, and much tacit knowledge was lost.

Some concluding thoughts

In 1989, the UK’s Prime Minister Margaret Thatcher made an important speech to the United Nations highlighting the importance of climate change. It was her proposal that the work of the Intergovernmental Panel on Climate Change be extended beyond 1992, and that there should be binding protocols on the reduction of greenhouse gases; naturally, given her political perspective, she stressed the importance of generating continued economic growth, and of private sector industry in driving innovation. She reasserted her support for nuclear power, which she described as “the most environmentally safe form of energy”. As far as the UK was concerned, “we shall be looking more closely at the role of non-fossil fuel sources, including nuclear, in generating energy.”

Since Thatcher’s speech, another thousand billion tonnes of carbon dioxide have been released into the atmosphere from industry and the burning of fossil fuels, leading to an increase in the atmospheric concentration of CO2 from 350 parts per million in 1989 to 427 ppm now. To be fair, one should recognise that the worldwide nuclear power industry has produced 390,000 tonnes of spent nuclear fuel, producing 29,000 cubic metres of high level waste. This needs to be permanently disposed of in deep geological repositories, the first of which is nearing completion in Finland.

But even as Thatcher was speaking, the expansion of nuclear power was stalling. In the UK it was Thatcher’s own Chancellor of the Exchequer who had in effect killed nuclear power, through the lasting impact of his ideological commitment to privatised energy markets in an environment of cheap fossil fuels.

To be clear, what killed the UK’s nuclear energy programme was not a wrong choice of reactor design; it was a combination of high interest rates and low fossil fuel prices, all in the context of a worldwide retreat from nuclear new build, with a strong anti-nuclear movement driven by the accidents at Three Mile Island and Chernobyl, by the (correctly) perceived connection between civil nuclear power and nuclear weapons programmes, and by the problem of nuclear waste. The circumstances of the UK were particularly conducive to a continued dependence on fossil fuels; the discovery of North Sea oil and gas gave the UK, now a net energy exporter, a 15 year holiday from having to worry about the geopolitics of energy dependence.

But, for industrial nations, security of access to adequate energy supplies has always been an issue of existential importance, too often driving conflict and war. The Ukrainian war has given us a salutary reminder of the importance of energy supplies to geopolitics. Energy is never just another commodity.

The effective termination of the UK’s civil nuclear programme in the 1990’s undoubtedly saved money in the short term. That money could have been used for investment – in future-proofing the UK’s infrastructure, in supporting R&D to create new technologies. Political choices meant that it wasn’t – this was a period of falling public and private investment – and instead it supported consumption. But there were costs, in terms of lost capacity, in industry and the state. Technological regression is possible, and one could argue that this is what has happened in civil nuclear power. In the UK, we have felt the loss of that capacity now that policy has changed, most directly in the failure of the last decade’s nuclear new build. Energy decisions should never just be about money.

Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular, the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in “High Technology” from post-war Britain, the decision to choose the Advanced Gas Cooled reactor design for the UK’s second generation reactor programme was forced through by “state technocrats, hugely influential scientists and engineers from the technical branches of the civil service”; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US-designed Pressurised Water Reactor – making what Henderson argued, in his highly influential article “Two British Errors: Their Probable Size and Some Possible Lessons”, was one of the UK government’s biggest policy errors? To go some way towards answering that, it’s necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy and emitting a handful of extra neutrons that go on to cause more fissions. The dominant fissile material in today’s civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to “enrich” the uranium, increasing the concentration of U-235 – typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons grade uranium is different in degree, not kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because of the lower absorption cross-section for fast neutrons, this needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium).

The moderator needs to be made of a light element which doesn’t absorb too many neutrons. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) or deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more readily than deuterium does, so it’s a less ideal moderator, but it is obviously much cheaper.
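The difference between these candidate moderators can be made quantitative with a standard result from reactor physics: the mean logarithmic energy loss per elastic collision, ξ, depends only on the mass number A of the nucleus, and fixes how many collisions are needed to slow a fission neutron (~2 MeV) down to thermal energies (~0.025 eV). A minimal sketch – the formula is the textbook one, but the worked numbers are my own illustration, not from the post:

```python
from math import log

def xi(A: float) -> float:
    """Mean logarithmic energy loss per elastic collision,
    for a neutron scattering off a nucleus of mass number A."""
    if A == 1:
        return 1.0
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * log(alpha) / (1 - alpha)

E_fission, E_thermal = 2e6, 0.025  # eV: typical fission vs thermal neutron energy
for name, A in [("hydrogen", 1), ("deuterium", 2), ("carbon-12", 12)]:
    n = log(E_fission / E_thermal) / xi(A)
    print(f"{name:10s}: ~{n:.0f} collisions to thermalise")
```

Hydrogen thermalises a neutron in about 18 collisions, deuterium in about 25, and carbon in well over a hundred – which is why graphite-moderated cores are so much larger than water-moderated ones.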

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictate, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies is in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors (“light water” in contrast to “heavy water”, in which the normal hydrogen of ordinary water is replaced by hydrogen’s heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient for driving a steam turbine to generate electricity. But it’s not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.

These weren’t problems for the original application of pressurised water reactors (PWRs), the most common type of light water reactor. (The other variety, the Boiling Water Reactor, similarly uses light water as both coolant and moderator; the difference is that steam is generated directly in the reactor core rather than in a secondary circuit.) PWRs were designed to power submarines, in a military context where enriched uranium was readily available, and where a compact size is a great advantage. But that compact, high power density core underlies the great weakness of light water reactors – their susceptibility to what’s known as a “loss of coolant accident”. The problem is that, if for some reason the flow of cooling water is stopped, then even if the chain reaction is quickly shut down (and this isn’t difficult to do), the fuel produces so much heat through its radioactive decay that it can melt the fuel rods, as happened at Three Mile Island. What’s worse, the zirconium alloy that the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard-to-make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it’s of sufficiently high purity). But being a solid, it needs a separate coolant. The UK’s Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that formed the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the lower neutron absorption of the moderator and coolant, compared to light water, means that the core is less compact.

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of other nuclear nations, USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history sets a path-dependence to the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since it’s only the minority uranium-235 isotope that is fissile, it’s necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.
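The claim that weapons-grade and civil enrichment differ in degree, not kind, can be illustrated with the standard separative work unit (SWU) calculation from cascade theory. The sketch below is my own illustration, not a description of any particular plant; it assumes natural feed at 0.711% U-235 and a conventional tails assay of 0.25%:

```python
from math import log

def V(x: float) -> float:
    """Value function of cascade theory, used to price separative work."""
    return (2 * x - 1) * log(x / (1 - x))

def swu_per_kg_product(xp: float, xf: float = 0.00711, xw: float = 0.0025) -> float:
    """Separative work (SWU) to make 1 kg of product at assay xp,
    from feed at assay xf, discarding tails at assay xw."""
    F = (xp - xw) / (xf - xw)   # kg of feed needed per kg of product
    W = F - 1                   # kg of tails per kg of product
    return V(xp) + W * V(xw) - F * V(xf)

print(f"civil LEU (5%):    {swu_per_kg_product(0.05):.1f} SWU per kg")
print(f"weapons HEU (90%): {swu_per_kg_product(0.90):.0f} SWU per kg")
```

The same value function, and the same cascade machinery, covers both regimes – a plant sized for reactor fuel can in principle be reconfigured to reach weapons grade, which is precisely why enrichment technology is so tightly controlled.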

As the history of nuclear weapons is usually told, it is the physicists who are given the most prominent role. But there’s an argument that the crucial problems to be overcome were as much ones of chemical engineering as of physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any process has to rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then to use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed “Tube Alloys”. In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature of it is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, arguably it represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI in Runcorn in the 1930’s, and came to fruition with the establishment of a full-scale gaseous diffusion plant in Capenhurst, Cheshire, in the late 40s and early 50s.
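The physical basis of gaseous diffusion can be sketched with Graham’s law: the ideal per-stage separation factor is the square root of the ratio of the molecular masses of the two UF6 species – a number minutely greater than one, which is why plants like Capenhurst needed enormous cascades. A rough illustration (ideal-stage arithmetic only, my own numbers; real plants needed many more stages than this lower bound):

```python
from math import sqrt, log

# Approximate molecular masses of the two uranium hexafluoride species (g/mol)
m_light = 349.03  # 235-UF6
m_heavy = 352.04  # 238-UF6

alpha = sqrt(m_heavy / m_light)  # ideal per-stage factor from Graham's law
print(f"ideal per-stage separation factor: {alpha:.4f}")

def ideal_stages(x_from: float, x_to: float) -> float:
    """Minimum number of ideal diffusion stages between two assays."""
    R = lambda x: x / (1 - x)   # isotopic abundance ratio at assay x
    return log(R(x_to) / R(x_from)) / log(alpha)

print(f"natural -> 5%:  ~{ideal_stages(0.00711, 0.05):.0f} stages")
print(f"natural -> 90%: ~{ideal_stages(0.00711, 0.90):.0f} stages")
```

With a per-stage factor of only about 1.0043, even the ideal case needs hundreds of stages to reach reactor-grade enrichment and well over a thousand to reach weapons grade – hence the industrial scale of the undertaking.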

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before a formal decision to make a nuclear weapon was made, in 1947, the infrastructure for the UK’s own nuclear weapons programme had been put in place, reflecting the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK, to make plutonium; the Manhattan Project had highlighted the potential of the plutonium route to a nuclear weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239; further neutron capture converts some of it into another isotope, plutonium-240. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of a nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and low explosive yields. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the levels of plutonium-240 that can be tolerated in weapons grade plutonium, and these impose constraints on the design of reactors used to produce it.

The two initial UK plutonium production reactors – the Windscale Piles – were built at Sellafield. The fuel was natural, unenriched uranium (necessarily, because the uranium enrichment plant at Capenhurst had not yet been built), and this dictated the use of a graphite moderator. The reactors were air-cooled. The first pile started operating in 1951, with the first plutonium produced in early 1952, enabling the UK’s successful first nuclear weapon test in October 1952.

But even as the UK’s first atom bomb test succeeded, it was clear that the number of weapons the UK’s defence establishment was calling for would demand more plutonium than the Windscale piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second generation plutonium producing reactors also producing power. The design would use graphite moderation, as in the Windscale piles, and natural uranium as a fuel, but rather than being air-cooled, the coolant was high pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It’s important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature that this requirement dictated was the need to remove the fuel rods while the reactor was running; for weapons grade plutonium the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production, this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under the control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons grade plutonium, using longer burn-up times that produced plutonium with higher concentrations of Pu-240. This so-called “civil plutonium” was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of it. Did the civil Magnox reactors produce any weapons grade plutonium? I don’t know, but I believe there is no technical reason that would have prevented it.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron colliding with a fissile nucleus is smaller than for a slow neutron, so this means that a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both need to be in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is good enough as a coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.
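The breeding condition can be put in simple bookkeeping terms: of the η neutrons produced per neutron absorbed in fissile fuel, one is needed to sustain the chain reaction, some are lost to leakage and parasitic capture, and whatever remains is available to convert fertile material. A toy illustration – the values of η and the loss term below are rough, textbook-style figures I have assumed for the sketch, not measured data:

```python
def breeding_ratio(eta: float, losses: float) -> float:
    """Neutrons left over to convert fertile material (e.g. U-238 -> Pu-239),
    per neutron absorbed in fissile fuel: eta, minus one to sustain the
    chain reaction, minus leakage and parasitic capture."""
    return eta - 1 - losses

# Rough illustrative values (assumed for this sketch):
thermal = breeding_ratio(eta=2.07, losses=0.3)  # U-235 in a thermal spectrum
fast = breeding_ratio(eta=2.45, losses=0.3)     # Pu-239 in a fast spectrum

print(f"thermal U-235: breeding ratio ~{thermal:.2f} (< 1: no net breeding)")
print(f"fast Pu-239:   breeding ratio ~{fast:.2f} (> 1: breeds more fuel)")
```

The margin above a ratio of one is slim, which is part of why practical breeders proved so demanding: every extra source of neutron loss eats directly into the breeding gain.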

In the 1940s and 50s, the availability of uranium relative to the demand of weapons programmes was severely limited, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK’s nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, which was completed by 1959. Using this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a 250 MW design electrical output.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for a breeder reactor even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn’t need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950’s, and the UK’s first nuclear powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn’t use UK nuclear technology; instead it was powered by a reactor of US design – a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy was an abrasive and driven figure, Admiral Rickover. Rickover ran the US Navy’s project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940’s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid sodium cooled, beryllium moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the best (and particularly, the most reliable) design, and has been subsequently used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK’s effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK’s Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company to company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government to government agreement. The second was that it was a one-off – Rolls-Royce would have a license to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK’s naval reactor programme, as Rolls-Royce has developed further iterations of the naval PWR design, and the rest of its national nuclear enterprise.

Rickover’s rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower’s 1953 “Atoms for Peace” speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station building on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it launched Westinghouse’s position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. For a submarine, reactors need to be highly compact, self-contained, and should be able to go for long periods without being refuelled, all of which dictates the use of highly enriched – essentially weapons grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960’s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it’s a profound mistake to suppose that in choosing between different technological approaches to nuclear power, it is simply a question of choosing between a menu of different options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which ones have been neglected. It’s this that establishes what comprises the base of technological capability and underpinning knowledge – both codified and tacit – that will be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we’ve seen, the scope of that system depends not just on a nation’s ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just four years were remarkable – but they were achieved at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive clean-up of the nuclear waste left over at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect of this was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three day week, with a sense of national chaos that contributed to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be inexorably rising. For energy importers – and the UK was still importing around half its energy in the early 1970’s – security of energy supplies suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that’s been built already – uranium enrichment plants, reprocessing facilities, and so on. The nature of the stock of knowledge acquired in past R&D programmes will be determined by the problems that had emerged during those programmes, so starting work on a different class of reactors would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise that have been developed in past programmes – whether that is in the understanding of reactor physics that is needed to run them efficiently, or in the construction and manufacturing techniques to build them cheaply and effectively – will be specific to the particular technologies that have been implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the first class of power reactor ever developed – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a different application – powering submarines – to the one it ended up being widely implemented for – generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, in the UK programmes there were worries about the safety of PWRs. These were particularly forcefully expressed by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was government Chief Scientific Advisor between 1971 and 1974. Perhaps, after Three Mile Island and Fukushima, one might wonder whether these worries were not entirely misplaced.

Later in the programme, while there may have been some acknowledgement from its proponents that the early AGR building programme hadn’t gone well, there was a view that the teething problems had been more or less ironed out. I haven’t managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in Parliament that Torness was on track to be delivered within a budget of £1.1 bn (1980 prices), which is not a great deal different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK’s nuclear establishment’s preference for the AGR over the PWR was from a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK’s nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear powered future. But that future was foreclosed by the final run-down of the UK’s nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK’s faltering post 2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK's nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK's home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I'll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but it also requires an understanding of the historical context – and in particular, the way the deep relationship between the UK's civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I'll consider how this historical legacy has influenced the UK's stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? On the contrary, projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running; it produces about 60 TWh a year of non-intermittent, low carbon power; in 2019 its output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson's article. Henderson was writing in 1977, so it's worth taking another look at the programme, more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson's upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison perhaps would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike that the Ukraine invasion caused – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK's energy security was arguably a bigger error than the AGR programme.
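The gap between the £14bn and £30bn figures comes purely from the choice of how to scale a 1977 sum up to the present. A rough sketch makes this explicit – the deflator and the nominal GDP figures below are my approximate assumptions, not numbers from the text:

```python
# Back-of-envelope: two ways of bringing Henderson's 1977 estimate up to date.
# The conversion factors are approximate, illustrative assumptions.

loss_1977 = 2.1e9          # Henderson's upper estimate, £ (1977 prices)

# 1. Simple price-inflation scaling (RPI-style deflator, 1977 -> 2021, roughly x6.5)
deflator = 6.5
loss_rpi = loss_1977 * deflator                     # ~£14bn

# 2. Scaling as a share of GDP (UK nominal GDP: ~£150bn in 1977, ~£2,200bn in 2021)
gdp_1977, gdp_2021 = 150e9, 2200e9
loss_gdp_share = loss_1977 / gdp_1977 * gdp_2021    # ~£31bn

print(f"Price-inflation scaled: £{loss_rpi / 1e9:.0f}bn")
print(f"GDP-share scaled:       £{loss_gdp_share / 1e9:.0f}bn")
```

The GDP-share figure is arguably the fairer one, since it measures the loss against the size of the economy that had to bear it.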

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, there were in effect only two choices. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson's paper, the global landscape for civil nuclear power changed dramatically. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. A loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, led to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I'll discuss in the next post the features of these reactor designs that leave them vulnerable to accidents of this kind. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, the UK government, having given up on the AGR programme, decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, at a price of £2 billion in 1987 prices. Henderson's calculation of the cost of his counterfactual, in which the UK had built light water reactors instead of AGRs, was based on an estimated cost for light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
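The £800m figure can be roughly reconstructed from Henderson's numbers – the 1973-to-1987 inflation multiplier of about 5 below is my approximation, not a figure from Henderson:

```python
# Rough reconstruction of the counterfactual Sizewell B cost implied by
# Henderson's £132/kW estimate. The inflation multiplier is an assumption.

cost_per_kw_1973 = 132.0     # £/kW for light water reactors (1973 prices)
capacity_kw = 1.2e6          # Sizewell B: 1.2 GW
inflation_73_to_87 = 5.0     # approximate RPI multiplier, 1973 -> 1987

cost_1973 = cost_per_kw_1973 * capacity_kw           # ~£158m in 1973 prices
cost_1987 = cost_1973 * inflation_73_to_87           # ~£800m in 1987 prices
print(f"Implied cost: £{cost_1987 / 1e6:.0f}m, vs £2,000m actually spent")
```

On these assumptions the actual outturn was roughly two and a half times the counterfactual estimate.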

Sizewell B was a first of a kind reactor, so one would expect subsequent reactors built to the same design to fall in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
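This difference in fuel-price exposure can be illustrated with a toy calculation – all the £/MWh numbers below are invented for illustration, not real plant costs:

```python
# Illustrative (made-up) numbers showing why nuclear and fossil generation
# respond so differently to fuel prices.

def cost_per_mwh(capital_per_mwh, fuel_per_mwh):
    """Total generation cost = annualised capital charge + fuel cost."""
    return capital_per_mwh + fuel_per_mwh

# Nuclear: capital-heavy, cheap fuel. Gas: cheap to build, fuel-dominated.
nuclear = dict(capital=70.0, fuel=10.0)   # £/MWh, illustrative
gas     = dict(capital=15.0, fuel=40.0)   # £/MWh, illustrative

for fuel_multiplier in (1.0, 2.0):        # e.g. a doubling of fuel prices
    n = cost_per_mwh(nuclear["capital"], nuclear["fuel"] * fuel_multiplier)
    g = cost_per_mwh(gas["capital"], gas["fuel"] * fuel_multiplier)
    print(f"fuel x{fuel_multiplier:.0f}: nuclear £{n:.0f}/MWh, gas £{g:.0f}/MWh")
```

On these made-up numbers, a doubling of fuel prices takes the gas plant from comfortably cheaper to more expensive than the nuclear plant, while barely moving the nuclear cost – which is why the case for nuclear looks so different in cheap-fuel and expensive-fuel eras.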

Henderson was writing at a time when the UK's electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with rising prices for all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners' strike. Although coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners' strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990’s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; in this decade natural gas’s share of electricity generation capacity had risen from 1.3% to nearly 30% in 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn't last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Meanwhile, a worldwide gas market was opening up, thanks to a combination of intercontinental pipelines (especially from Russia) and an expanding trade in liquified natural gas, carried by tanker from huge fields in, for example, the Middle East. For a long time, then, policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike that began in 2021, intensified by Russia's invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK's Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK's rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build program and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited & analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain
https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf. His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

Deep decarbonisation is still a huge challenge

In 2019 I wrote a blogpost called The challenge of deep decarbonisation, stressing the scale of the economic and technological transformation implied by reaching net zero by 2050. I think the piece bears re-reading, but I wanted to update the numbers to see how much progress we have made in four years (the piece used the statistics for 2018; the most up-to-date figures are for 2022). Of course, in the intervening four years we have had a pandemic and a global energy price spike.

The headline figure is that the fossil fuel share of our primary consumption has fallen, but not by much. In 2018, 79.8% of our energy came from oil, gas and coal. In 2022, this share was 77.8%.

There is good news – if we look solely at electrical power generation, output from hydro, wind and solar was up 32% between 2018 and 2022, from 75 TWh to 99 TWh. Now 30.5% of our electricity production comes from renewables (excluding biomass, which I will come to later).

The less good news is that electrical power generation from nuclear is down 27%, from 65 TWh to 48 TWh, and this now represents just 14.7% of our electricity production. The increase in wind & solar is a real achievement – but it is largely offset by the decline in nuclear power production. This is the entirely predictable result of the AGR fleet reaching the end of its life, and the slow-motion debacle of the new nuclear build program.

The UK had 5.9 GW of nominal nuclear generation capacity in 2022. Of this, all but Sizewell B (1.2 GW) will close by 2030. In the early 2010s, 17 GW of new nuclear capacity was planned – with the potential to produce more than 140 TWh per year. But, of these ambitious plans, the only project currently proceeding is Hinkley Point C, late and over budget. The best we can hope for is that in 2030 we'll have Hinkley's 3.2 GW, which together with Sizewell B's continuing operation could produce at best 38 TWh a year.

In 2022, another 36 TWh of electrical power – 11% – came from thermal renewables – largely burning imported wood chips. This supports a claim that more than half (56%) of our electricity is currently low carbon. It's not clear, though, that imported biomass is truly sustainable or scalable.
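The 56% figure can be cross-checked directly from the numbers already quoted – renewables at 99 TWh and 30.5% of generation fix the total, and nuclear and biomass follow:

```python
# Cross-checking the "more than half our electricity is low carbon" claim
# from the 2022 figures quoted above (all in TWh).

renewables = 99.0   # hydro, wind and solar
nuclear    = 48.0
biomass    = 36.0   # thermal renewables, largely imported wood chips

total_generation = renewables / 0.305   # renewables are 30.5% of the total
low_carbon_share = (renewables + nuclear + biomass) / total_generation
print(f"Low carbon share: {low_carbon_share:.0%}")
```

The three shares quoted in the text (30.5%, 14.7% and 11%) are mutually consistent with a total generation of roughly 325 TWh.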

It’s easy to focus on electrical power generation. But – and this can’t be stressed too much – most of the energy we use is in the form of directly burnt gas (to heat our homes) and oil (to propel our cars and lorries).

The total primary energy we used in 2022 was 2055 TWh, of which 1600 TWh was oil, gas and coal. 280 TWh (mostly gas) was converted into electricity (producing 133 TWh of electricity), and 60 TWh of fossil fuel (mostly oil) was diverted into non-energy uses – mostly feedstocks for the petrochemical industry – leaving 1260 TWh to be directly burnt.
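The fossil-fuel balance in that paragraph is a simple subtraction, worth making explicit:

```python
# The 2022 fossil-fuel balance quoted above, as a sanity check (all in TWh).

fossil_primary  = 1600   # oil, gas and coal in primary energy consumption
to_electricity  = 280    # burnt in power stations (yielding ~133 TWh of electricity)
non_energy_uses = 60     # petrochemical feedstocks and other non-energy uses

directly_burnt = fossil_primary - to_electricity - non_energy_uses
print(f"{directly_burnt} TWh directly burnt")   # gas in boilers, oil in engines
```

It is this 1260 TWh of directly burnt fuel, not the electricity sector, that dominates the decarbonisation task.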

To achieve our net-zero target, we need to stop burning gas and oil, and instead use electricity. This implies a considerable increase in the amount of electricity we generate – and this increase all needs to come from low-carbon sources. There is good news, though – because heat engines are limited by the second law of thermodynamics, we can convert electricity into useful work more efficiently than we can by burning fuels. So the increase in electrical generation capacity can in principle be a lot less than this 1260 TWh per year.

Projecting energy demand into the future is uncertain. On the one hand, we can rely on continuing improvements in energy efficiency from incremental technological advances; on the other, new demands on electrical power are likely to emerge (the huge energy hunger of the data centres needed to implement artificial intelligence being one example). To illustrate the scale of the problem, let’s consider the orders of magnitude involved in converting the current major uses of directly burnt fossil fuels to electrical power.

In 2022, 554 TWh of oil were used, in the form of petrol and diesel, to propel our cars and lorries. We do use some electricity directly for transport – currently just 8.4 TWh. A little of this is for trains (and, of course, we should long ago have electrified all intercity and suburban lines), but the biggest growth is for battery electric vehicles. Internal combustion engines are heat engines, whose efficiency is limited by the Carnot bound, whereas electric motors can in principle convert all the electrical energy put in into useful work. Very roughly, to replace the energy demands of current cars and lorries with electric vehicles would need another 165 TWh/year of electrical power.
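A sketch of where a figure of that order comes from – the two efficiency numbers are my illustrative assumptions, not figures from the statistics:

```python
# Rough estimate of the extra electricity needed to electrify road transport.
# The efficiency figures are assumed round numbers for illustration.

road_fuel = 554.0        # TWh of petrol and diesel burnt in 2022
ice_efficiency = 0.25    # typical tank-to-wheel efficiency of a combustion engine
ev_efficiency  = 0.85    # assumed grid-to-wheel efficiency of a battery EV

useful_work = road_fuel * ice_efficiency            # ~140 TWh actually moving vehicles
electricity_needed = useful_work / ev_efficiency    # extra generation required
print(f"{electricity_needed:.0f} TWh/year")
```

With these assumptions the answer comes out at a little over 160 TWh/year, in line with the ~165 TWh figure above; the precise value depends on the efficiencies assumed.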

The other major application of directly burnt fossil fuels is for heating houses and offices. This used 334 TWh/year in 2022, mostly in the form of natural gas. It’s increasingly clear that the most effective way of decarbonising this sector is through the installation of heat pumps. A heat pump is essentially a refrigerator run backwards, cooling the outside air or ground, and heating up the interior. Here the second law of thermodynamics is on our side; one ends up with more heat out than energy put in, because rather than directly converting electricity into heat, one is using it to move heat from one place to another.

Using a reasonable guess for the attainable, seasonally adjusted “coefficient of performance” for heat pumps, one might be able to achieve the same heating effect as we currently get from gas boilers with another 100 TWh of low carbon electricity. This figure could be substantially reduced if we had a serious programme of insulating old houses and commercial buildings, and were serious about imposing modern energy efficiency standards for new ones.
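The ~100 TWh figure follows from two assumed round numbers – a boiler efficiency of about 90% and a seasonal coefficient of performance of about 3, both my assumptions rather than figures from the text:

```python
# Rough estimate of the electricity needed to replace gas heating with heat
# pumps. Boiler efficiency and the seasonal COP are assumed round numbers.

gas_for_heating = 334.0    # TWh of gas burnt for heating in 2022
boiler_efficiency = 0.9    # a modern condensing boiler
scop = 3.0                 # assumed seasonal coefficient of performance

heat_delivered = gas_for_heating * boiler_efficiency   # ~300 TWh of useful heat
electricity_needed = heat_delivered / scop
print(f"{electricity_needed:.0f} TWh/year")
```

Better-insulated buildings would reduce the heat demand itself, cutting this figure further.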

So, as an order of magnitude, we probably need to roughly double our electricity generation, from its current value of 320 TWh/year to more than 600 TWh/year. This will take big increases in generation from wind and solar, currently running at around 100 TWh/year. In addition to intermittent renewables, we need a significant fraction of firm power, which can always be relied on, whatever the state of wind and sunshine. Nuclear would be my favoured source for this, so that would need a big increase from the 40 TWh/year we'll have in place by 2030. The alternative would be to continue to generate electricity from gas, but to capture and store the carbon dioxide produced. For why I think this is less desirable for power generation (though possibly necessary for some industrial processes), see my earlier piece: Carbon Capture and Storage: technically possible, but politically and economically a bad idea.

Industrial uses of energy, which currently amount to 266 TWh, are a mix of gas, electricity and some oil. Some of these applications (e.g. making cement and fertiliser) are going to be rather hard to electrify, so, in addition to requiring carbon capture and storage, this may provide a demand for hydrogen, produced from renewable electricity, or conceivably process heat from high temperature nuclear reactors.

It’s also important to remember that a true reckoning of our national contribution to climate change would include taking account of the carbon dioxide produced in the goods and commodities we import, and our share of air travel. This is very significant, though hard to quantify – in my 2019 piece, I estimated that this could add as much as 60% to our personal carbon budget.

To conclude, we know what we have to do:

  • Electrify everything we can (heat pumps for houses, electric cars), and reduce demand where possible (especially by insulating houses and offices);
  • Use green hydrogen for energy intensive industry & hard to electrify sectors;
  • Hugely increase zero carbon electrical generation, through a mix of wind, solar and nuclear.

In each case, we’re going to need innovation, focused on reducing cost and increasing scale.

There’s a long way to go!

All figures are taken from the UK Government’s Digest of UK Energy Statistics, with some simplification and rounding.

2022 Books roundup

2022 was a thoroughly depressing year; here are some of the books I’ve read that have helped me (I hope) to put last year’s world events in some kind of context.

Helen Thompson could not have been luckier – or, perhaps, more farsighted – in the timing of her book's release. Disorder: hard times in the 21st century is a survey of the continuing influence of fossil fuel energy on geopolitics, so couldn't be more timely, given the impact of Russia's invasion of Ukraine on natural gas and oil supplies to Western Europe and beyond. The importance of securing national energy supplies runs through the history of the 20th century, in both peace and war; we continue to see examples of the deeply grubby political entanglements the need for oil has drawn Western powers into. All this, by the way, provides a strong secondary argument, beyond climate change, for accelerating the transition to low carbon energy sources.

The presence of large reserves of oil in a country isn't an unmixed blessing – we're growing more familiar with the idea of a “resource curse”, blighting both the politics and the long term economic prospects of countries whose economies depend on exploiting natural resources. Alexander Etkind's book Nature's Evil: a cultural history of natural resources is a deep history of how the materials we rely on shape political economies. It has a Eurasian perspective that is very timely, though less familiar to me, and takes the idea of a resource curse much further back in time, covering furs and peat as well as the more familiar story of oil.

With more attention starting to focus on the world's other potential geopolitical flashpoint – the Taiwan Straits – Chris Miller's Chip War: the fight for the world's most critical technology is a great explanation of why Taiwan, through the semiconductor company TSMC, came to be so central to the world's economy. This book – which has rightly won glowing reviews – is a history of the ubiquitous chip – the silicon integrated circuits that make up the memory and microprocessor chips at the heart of computers, mobile phones, and, increasingly, all kinds of other durable goods, including cars. The focus of the book is on business history, but it doesn't shy away from the crucial technical details – the manufacturing processes and the tools that enable them, notably the development of extreme UV lithography and the rise of the Dutch company ASML. Excellent though the book is, its business focus did make me reflect that (as far as I'm aware) there's a huge gap in the market for a popular science book explaining how these remarkable technologies actually work – and perhaps speculating on what might come next.

Slouching Towards Utopia: an economic history of the 20th century, by Brad DeLong, is an elegy for a period of unparalleled technological advance and economic growth that seems, in the last decade, to have come to an end. For DeLong, it was the development of the industrial R&D laboratory towards the end of the 19th century that launched a long century, from 1870 to 2010, of extraordinary growth in material prosperity. The focus is on political economy, rather than the material and technological basis of growth (for the latter, Vaclav Smil's pair of books Creating the Twentieth Century and Transforming the Twentieth Century are essential). But there is a welcome focus on the material substrate of information and communication technology rather than the more visible world of software (in contrast, for example, to Robert Gordon's book The Rise and Fall of American Growth, which I reviewed rather critically here).

Though I am very sympathetic to many of the arguments in the book, ultimately it left me somewhat disappointed. Having rightly stressed the importance of industrial R&D as the driver of technological change, DeLong doesn't really develop the theme, with little discussion of the changing institutional landscape of innovation around the world. I also wish the book had had a more rigorous editor – the prose lapses on occasion into self-indulgence, and the book would have been better had it been a third shorter.

In contrast, Vaclav Smil’s latest book – How the World Really Works: A Scientist’s Guide to Our Past, Present and Future – clearly had an excellent editor. It’s a very compelling summary of a couple of decades of Smil’s prolific output. It’s not a boast about my own learning to say that I knew pretty much everything in this book before I read it; simply a consequence of having read so many of Smil’s previous, more academic books. The core of Smil’s argument is to stress, through quantification, how much we depend on fossil fuels, for energy, for food (through the Haber-Bosch process), and for the basic materials that underlie our world – ammonia, plastics, concrete and steel. These chapters are great, forceful, data-heavy and succinct, though the chapter on risk is less convincing.

Despite the editor, Smil’s own voice comes through strongly, sceptical, occasionally curmudgeonly, laying out the facts, but prone to occasional outbreaks of scathing judgement (he really dislikes SUVs!). Perhaps he overdoes the pessimism about the speed with which new technology can be introduced, but his message about the scale and the wrenching impact of the transition we need to go through, to move away from our fossil fuel economy, is a vital one.

From self-stratifying films to levelling up: A random walk through polymer physics and science policy

After more than two and a half years at the University of Manchester, last week I finally got round to giving an in-person inaugural lecture, which is now available to watch on YouTube. The abstract follows:

How could you make a paint-on solar cell? How could you propel a nanobot? Should the public worry about the world being consumed by “grey goo”, as portrayed by the most futuristic visions of nanotechnology? Is the highly unbalanced regional economy of the UK connected to the very uneven distribution of government R&D funding?

In this lecture I will attempt to draw together some themes both from my career as an experimental polymer physicist, and from my attempts to influence national science and innovation policy. From polymer physics, I’ll discuss the way phase separation in thin polymer films is affected by the presence of surfaces and interfaces, and how in some circumstances this can result in films that “self-stratify” – spontaneously separating into two layers, a favourable morphology for an organic solar cell. I’ll recall the public controversies around nanotechnology in the 2000s. There were some interesting scientific misconceptions underlying these debates, and addressing these suggested some new scientific directions, such as the discovery of new mechanisms for self-propelling nano- and micro- scale particles in fluids. Finally, I will cover some issues around the economics of innovation and the UK’s current problems of stagnant productivity and regional inequality, reflecting on my experience as a scientist attempting to influence national political debates.

Lessons from the gas price spike

On April 1st this year, the average UK household will see its annual energy bill rise from £1,277 to around £2,000 a year, according to the Resolution Foundation. After 10 years of stagnant wages – itself a result of the ongoing slowdown in productivity growth – there's a clamour for some kind of short term fix for a potential political crisis, made worse by a forthcoming tax rise. Even more ominously, an unfolding geopolitical crisis over a conflict between Russia and Ukraine may interact with this energy crisis in a potentially far-reaching way, as we shall see.


UK gas and electricity spot prices (monthly rolling average of “day-ahead” prices). Data: OFGEM

My first plot shows the scale of the crisis: the wholesale, spot prices of gas and electricity since 2010. I don't want to dwell here on the dysfunctional features of the UK's retail energy market that have led to the failure of a number of suppliers, or to look at the short-term issues that have exacerbated the current supply squeeze. Instead, it's worth looking at the longer term implications of this episode of market disruption for the UK's energy security, and trying to understand how we have been led to this state by global changes in energy markets and UK policy decisions over decades.

Natural gas matters existentially for the UK's economy, because 40% of the UK's demand for energy is met by gas, and without sufficient supplies of energy, a modern economy and society cannot function. The price of electricity is strongly coupled to the price of gas, because 34% of our electricity (in 2020) was generated in gas-fired power stations, compared to 15% from nuclear and 23% from wind. But generating electricity only accounts for 29% of our total demand for gas. The biggest fraction – 37% – is used for heating our houses, with another 12% directly burnt in industry, to make fertiliser, cement, and in many other processes.

To understand why the wholesale price of gas matters so much, we need to understand a couple of ways in which the UK’s energy landscape has changed in the last twenty years. The first – the UK’s own balance between production and consumption – is shown in the next plot. Since 2004, the UK has gone from being self-sufficient in gas to being a substantial importer. Production of North Sea gas – like North Sea oil – peaked in the early 2000s, and has since rapidly dropped off, as the gas fields most easily and cheaply exploited have been exhausted.


Gas production and consumption in the UK. Data: Digest of UK Energy Statistics 2021, table 4.1.

The second consideration is the nature of the international gas market. A few decades ago, natural gas was a commodity that was used close to where it was produced – it could not be traded globally. But since then an infrastructure has been developed to transport natural gas over long distances; a network of intercontinental pipelines has been built, so gas produced, for example, in Arctic Siberia can be transported to markets in Western Europe. And the technology for shipping liquified natural gas in bulk has been developed, allowing gas from the huge fields in Qatar and Australia, and from the USA’s shale gas industry, to be taken to terminals across the world. The result is a worldwide gas market, which tends to equalise prices across the world. A liquified natural gas tanker can leave Qatar, the USA or Australia and choose to take its cargo to wherever the price it can fetch is highest.

The UK’s dependency on gas imports, combined with this global market, means that the prices UK households and industry have to pay for energy reflect supply and demand on a global scale. My next plot shows how global demand has changed over the last couple of decades. The UK’s demand has held steady – the UK’s “dash for gas” represented an early energy transition from extensive use of coal to natural gas. This was a positive change that reduced the UK’s emissions of greenhouse gases. Now other countries are following in the UK’s footsteps – again, a positive development for overall world greenhouse gas emissions, but one putting huge upward pressure on gas supplies. It’s worth stressing that the UK is a minor player in world gas markets; its consumption accounts for about 2% of world demand.


World gas consumption by continent, together with China and UK. Data: US Energy Information Administration

Where is this gas coming from? The largest net exporter, as shown in my next plot, is Russia. There’s an ominous echo of the 1970s and its linked energy, economic and political crises, as dominant energy suppliers realise that withholding energy exports can be a powerful weapon in geopolitical conflicts. As it happens, the UK’s gas imports come primarily from Norway, by pipeline, and Qatar, through LNG imports by ship. But this doesn’t mean that the UK won’t be affected if Russia chooses to exert pressure on Europe by throttling back gas exports. There’s a global market – if Russia cuts off supplies to Germany and Central Europe, Germany will seek to replace that by buying gas from Norway and on the world LNG market, and the prices the UK has to pay will rocket.


Top gas net exporters (i.e. exports less imports). Data: US Energy Information Administration

What should the UK do about this energy crisis?

We can discount straight away the suggestion made by veteran Thatcherite and Eurosceptic MP, Sir John Redwood, that the UK should simply produce more gas of its own. The UK is a small-scale participant in a global market. Even doubling its gas production would make no impact on the global balance of supply and demand, so prices would be unaffected. It’s true that if the gas was produced by a government-owned organisation, the rent – the difference between the market price and cost of production – would be captured by the UK state rather than having to be handed over to the governments of major exporters like Qatar, Norway and Russia. But British Gas was privatised in 1986.

The reason the UK ran down its production was that governments in the 1980s made a conscious decision that energy should be left to the market, and the market said that it was cheaper to import gas than to produce it from the North Sea (and even more so than to develop a fracking industry in Sussex and the rural Pennines). One can’t help getting the impression that UK politicians like John Redwood are in revolt against the consequences of the national economic settlement that they themselves created.

In fact, there is nothing fundamental the UK can do now apart from strengthening the social safety net for the poorest households, and accepting the pressure this puts on taxes. Less politically visible, but nonetheless important, is the pressure high gas costs will put on energy-using industries. The reality is that, as a net importer of energy, higher gas prices inevitably lead to a real loss of national income. Energy infrastructures take many years to build, so all we can do now is look back at the things the UK should have done a decade ago, and learn from those mistakes so that we are in a better position a decade on from now.

What the UK should have done is to reduce the demand for gas through an aggressive pursuit of energy efficiency measures, and to increase the diversity of its energy sources by accelerating the development of other forms of (low-carbon) electricity generation. It failed on both fronts.

In 2013, the Coalition government reduced spending on energy efficiency measures as part of a campaign to “cut the green crap”; the result was a precipitous drop in measures such as cavity wall insulation and loft insulation. In 2015, the zero-carbon homes standard was scrapped, with the result that new housing was built to lower standards of energy efficiency. Recall that 37% of the UK’s gas demand is for domestic heating, so the UK’s poor standards of home energy efficiency translate directly into increased demand – and, with the current high prices, higher bills for consumers. “Cutting the green crap” turned out to be a costly mistake.

It is true that the UK has brought on-stream a significant amount of offshore wind capacity. However, too much of this capacity has been offset by the decline of the UK’s existing nuclear fleet, now approaching the end of its life. The UK government has committed to a programme of nuclear new build, but this programme has stalled. In 2013, I wrote that the nuclear new build programme was “too expensive, too late”, and everything that has happened since has borne that diagnosis out.

There’s a more general lesson to learn from the current gas price spike. For some decades, the fundamental underpinning of the UK’s energy policy has been that the market should be left to find the cheapest way of delivering the energy the nation needs. In the last decade, the government has intervened extensively in that market to promote one policy objective or another. We’ve seen contracts for difference, capacity markets, renewable obligation certificates – the purity of a free market has long since been left behind. But there’s still an underlying assumption that someone will be running a spreadsheet to calculate a net present value for any new energy investment.

Cost discipline does matter, but it’s important to recognise that these calculations, for investments that will be generating income for multiple decades, rest on projections of market conditions running many years in the future. But what this current episode should tell us is that the future course of energy markets is beset by what the economists call “Knightian uncertainty”. On the reliability of predictions of future energy prices, the lesson of the past, reinforced by what’s happening to gas prices now, is that no-one knows anything.

Energy can’t be left to the market, because the future state of the market is unknowable – but the need for energy is an inescapable ingredient of a modern economy and society. For something that is so important, building resilience into the system may be more important than maximising some notional net present value whose calculation depends on guesses about the state of the world over decades. This is even more true when we factor in the externalities imposed by the effect of fossil fuels on climate change, whose cost and impact remains so uncertain. To be more positive, there are uncertainties on the upside – the reductions in cost that an aggressive programme of low carbon research, development and deployment-driven innovation could bring. Rather than relying entirely on market forces, we have to design a resilient zero carbon energy system and get on with building it out.

Fighting Climate Change with Food Science

The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat does produce substantial greenhouse gas emissions, and a disproportionate amount of these emissions come from eating the meat of ruminants like cattle and sheep.

According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the total value. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20% by 2050, as part of the trajectory towards net zero greenhouse gas emissions by 2050, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, a complete global adoption of completely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
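These figures can be roughly cross-checked with some back-of-envelope arithmetic. A minimal sketch in Python, assuming a US population of about 330 million and total US emissions of roughly 6,000 Mt CO2e a year (my assumptions for the check, not figures from the cited study):

```python
# Rough cross-check of the US food-emissions figures quoted above.
# Assumed inputs (not from the cited study): US population ~330 million,
# total US greenhouse gas emissions ~6,000 Mt CO2e per year.
per_person_kg_day = 5        # food-system emissions, kg CO2e per person per day
population = 330e6           # assumed US population
us_total_mt = 6000           # assumed total US emissions, Mt CO2e per year

food_total_mt = per_person_kg_day * population * 365 / 1e9  # kg -> Mt
red_meat_mt = 0.47 * food_total_mt

print(f"Food system total: {food_total_mt:.0f} Mt CO2e/year")   # ~600 Mt
print(f"Red meat share:    {red_meat_mt:.0f} Mt CO2e/year")     # ~280 Mt
print(f"200 Mt as a share of total US emissions: {200 / us_total_mt:.1%}")
```

The 200 Mt saving from halving animal products implies that animal products as a whole account for roughly two-thirds of the ~600 Mt food-system total, and 200 Mt comes out at a bit over 3% of the assumed national total, consistent with the numbers quoted above.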

The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the currently relatively low-tech meat substitutes rather than the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat provides an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, whether it eats meat at all), and when families eat or don’t eat particular meats – all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

So how is it realistic to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – this was a telling, but not entirely successful attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it was just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.

But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates in the flour and sugar that are used for barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics are developed in the animal reflecting the life it leads before slaughter, and are developed further after butchering, storage and cooking.

In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need to provide the right growth medium for the cells, to provide an energy source, other nutrients, and the growth factors that simulate the chemical communications between cells in whole organisms.

In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal-free product. Serum-free growth media have been developed, but they too are expensive; optimising, scaling up and reducing the cost of these represent key barriers to be overcome to make “cultured meat” viable.

The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than to make a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce minced meat for burgers than whole cuts.

I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is influenced strongly by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way than ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat than a vaguely meaty mush.

I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant derived materials – proteins, fats, gels with different viscoelastic properties to simulate connective tissue, and appropriate liquid matrices, devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

At the moment, attempting something like this, we have start-ups like Impossible Burger and Beyond Meat, with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding, distribution and deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering for making soft matter systems with complex heterogeneous structures at scale, often by non-equilibrium self-assembly processes.

Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling. The strong recent rise in veganism and vegetarianism creates a large and growing market. But it does need public investment, because I don’t think intellectual property in this area will be very easy to defend. For this reason, large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

[1]. See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.

Measuring up the UK Government’s ten-point plan for a green industrial revolution

Last week saw a major series of announcements from the government about how they intend to set the UK on the path to net zero greenhouse gas emissions. The plans were trailed in an article (£) by the Prime Minister in the Financial Times, with a full document published the next day – The ten point plan for a green industrial revolution. “We will use Britain’s powers of invention to repair the pandemic’s damage and fight climate change”, the PM says, framing the intervention as an innovation-driven industrial strategy for post-covid recovery. The proposals are patchy, insufficient by themselves – but we should still welcome them as beginning to recognise the scale of the challenge. There is a welcome understanding that decarbonising the power sector is not enough by itself. Emissions from transport, industry and domestic heating are all recognised as important, and there is a nod to the potential for land-use changes to play a significant role. The new timescale for the phase-out of petrol and diesel cars is really significant, if it can be made to stick. So although I don’t think the measures yet go far enough or fast enough, one can start to see the outline of what a zero-emission economy might look like.

In outline, the emerging picture seems to be of a power sector dominated by offshore wind, with firm power provided either by nuclear or fossil fuels with carbon capture and storage. Large scale energy storage isn’t mentioned much, though possibly hydrogen could play a role there. Vehicles will predominantly be electrified, and hydrogen will have a role for hard to decarbonise industry, and possibly domestic heating. Some hope is attached to the prospect for more futuristic technologies, including fusion and direct air capture.

To move on to the ten points, we start with a reassertion of the Manifesto commitment to achieve 40 GW of offshore wind installed by 2030. How much is this? At a load factor of 40%, this would produce 140 TWh a year; for comparison, in 2019, we used a total of 346 TWh of electricity. Even though this falls a long way short of what’s needed to decarbonise power, a build out of offshore wind on this scale will be demanding – it’s a more than four-fold increase on the 2019 capacity. We won’t be able to expand the capacity of offshore wind indefinitely using current technology – ultimately we will run out of suitable shallow water sites. For this reason, the announcement of a push for floating wind, with a 1 GW capacity target, is important.
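The conversion from installed capacity to annual output behind that 140 TWh figure is simple arithmetic. A minimal sketch in Python, using the 40% load factor assumed above:

```python
# Annual energy from installed wind capacity at a given load factor.
capacity_gw = 40        # 2030 offshore wind target
load_factor = 0.40      # assumed average load factor
hours_per_year = 8760

annual_twh = capacity_gw * load_factor * hours_per_year / 1000  # GWh -> TWh
print(f"{annual_twh:.0f} TWh/year")  # ~140 TWh, against 346 TWh of UK electricity use in 2019
```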

On hydrogen, the government is clearly keen, with the PM saying “we will turn water into energy with up to £500m of investment in hydrogen”. Of course, even this government’s majority of 80 isn’t enough to repeal the laws of thermodynamics; hydrogen can only be an energy store or vector. As I’ve discussed in an earlier post (The role of hydrogen in reaching net zero), hydrogen could have an important role in a low carbon energy system, but one needs to be clear about how the hydrogen is made in a zero-carbon way, and how it is used, and this plan doesn’t yet provide that clarity.

The document suggests the first use will be in a natural gas blend for domestic heating, with a hint that it could be used in energy intensive industry clusters. The commitment is to create 5 GW of low carbon hydrogen production capacity by 2030. Is this a lot? Current hydrogen production amounts to 3 GW (27 TWh/year), used in industry and (especially) for making fertiliser, though none of this is low carbon hydrogen – it is made from natural gas by steam methane reforming. So this commitment could amount to building another steam methane reforming plant and capturing the carbon dioxide – this might be helpful for decarbonising industry, on Deeside or Teesside perhaps. To give a sense of scale, total natural gas consumption in industry and homes (not counting electricity generation) equates to 58 GW (512 TWh/year), so this is no more than a pilot. In the longer term, making hydrogen by electrolysis and/or process heat from high temperature fission is more likely to be the scalable and cost-effective solution, and it is good that Sheffield’s excellent ITM Power gets a namecheck.
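The GW figures in these comparisons are continuous rates of production; converting to annual energy just uses the 8,760 hours in a year. A sketch:

```python
HOURS_PER_YEAR = 8760  # 1 GW sustained for a year is 8.76 TWh

def gw_to_twh_per_year(gw):
    """Continuous production in GW -> annual energy in TWh."""
    return gw * HOURS_PER_YEAR / 1000

print(gw_to_twh_per_year(3))   # current hydrogen production: ~26 TWh/yr
print(gw_to_twh_per_year(5))   # 2030 low-carbon hydrogen target: ~44 TWh/yr
print(gw_to_twh_per_year(58))  # gas use in industry and homes: ~508 TWh/yr
```

The small differences from the 27 and 512 TWh figures quoted above are just rounding in the GW values.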

On nuclear power, the paper does lay out a strategy, but is light on the details of how this will be executed. For more detail on what I think has gone wrong with the UK’s nuclear strategy, and what I think should be done, see my earlier blogpost: Rebooting the UK’s nuclear new build programme. The plan here seems to be for one last heave on the UK’s troubled programme of large scale nuclear new build, followed up by a possible programme implementing a light water small modular reactor, with research on a new generation of small, high temperature, fourth generation reactors – advanced modular reactors (AMRs). There is a timeline – large-scale deployment of small modular reactors in the 2030s, together with a demonstrator AMR on the same timescale. I think this would be realistic if there was a wholehearted push to make it happen, but all that is promised here is a research programme, at the level of £215m for SMRs and £170m for AMRs, together with some money for developing the regulatory and supply chain aspects. This keeps the programme alive, but hardly supercharges it. The government must come up with the financial commitments needed to start building.

The most far-reaching announcement here is in the transport section – a ban on sales of new diesel and petrol cars after 2030, with hybrids being permitted until 2035, after which only fully battery electric vehicles will be on sale. This is a big deal – a major effort will be required to create the charging infrastructure (£1.3 bn is ear-marked for this), and there will need to be potentially unpopular decisions on tax or road charging to replace the revenue from fuel tax. For heavy goods vehicles the suggestion is that we’ll have hydrogen vehicles, but all that is promised is R&D.

For public transport the solutions are fairly obvious – zero-emission buses, bikes and trains – but there is a frustrating lack of targets here. Sometimes old technologies are the best – there should be a commitment to electrify all inter-city and suburban lines as fast as feasible, rather than the rather vague statement that “we will further electrify regional and other rail routes”.

In transport, though, it’s aviation that is the most intractable problem. Three intercontinental trips a year can double an individual’s carbon footprint, but it is very difficult to see how one can do without the energy density of aviation fuel for long-distance flight. The solutions offered look pretty unconvincing to me – “we are investing £15 million into FlyZero – a 12-month study, delivered through the Aerospace Technology Institute (ATI), into the strategic, technical and commercial issues in designing and developing zero-emission aircraft that could enter service in 2030.” Maybe it will be possible to develop an electric aircraft for short-haul flights, but it seems to me that the only way of making long-distance flying zero-carbon is by making synthetic fuels from zero-carbon hydrogen and carbon dioxide from direct air capture.

It’s good to see the attention on the need for greener buildings, but here the government is hampered by indecision – will the future of domestic heating be hydrogen boilers or electric powered heat pumps? The strategy seems to be to back both horses. But arguably, even more important than the way buildings are heated is to make sure they are as energy-efficient as possible in the first place, and here the government needs to get a grip on the mess that is our current building regulation regime. As the Climate Change Committee says, “making a new home genuinely zero-carbon at the outset is around five times cheaper than retrofitting it later” – the housing people will be living in in 2050 is being built today, so there is no excuse for not ensuring the new houses we need now – not least in the neglected social housing sector – are built to the highest energy efficiency standards.

Carbon capture, usage and storage is the 8th of our 10 points, and there is a commendable willingness to accelerate this long-stalled programme. The goal here is “to capture 10Mt of carbon dioxide a year by 2030”, but without a great deal of clarity about what this is for. The suggestion that the clusters will be in the North East, the Humber, North West, and in Scotland and Wales suggests a goal of decarbonising energy intensive sectors, which in my view is the best use of this problematic technology (see my blogpost: Carbon Capture and Storage: technically possible, but politically and economically a bad idea). What’s the scale proposed here – is 10 Mt of carbon dioxide a year a lot or a little? Compared to the total CO2 emissions for the UK – 350 Mt in 2019 – it isn’t much, but on the other hand it is roughly in line with the total emissions of the iron and steel industry in the UK, so as an intervention to reduce the carbon intensity of heavy industry it looks more viable. The unresolved issue is who bears the cost.
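Putting the capture target in context is a one-line calculation (a sketch, using the figures above):

```python
# CCS capture target as a share of total UK CO2 emissions.
ccs_target_mt = 10   # capture target by 2030, Mt CO2 per year
uk_total_mt = 350    # total UK CO2 emissions, 2019, Mt

share = ccs_target_mt / uk_total_mt
print(f"{share:.1%} of UK CO2 emissions")  # ~2.9%
```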

There’s a nod to the effects of land-use changes, in the section on protecting the natural environment. There are potentially large gains to be had here in projects to reforest uplands and restore degraded peatlands, but the scale of ambition is relatively small.

Finally, the tenth point concerns innovation, with the promise of a “£1 billion Net Zero Innovation Portfolio” as part of the government’s aspiration to raise the UK’s R&D intensity to 2.4% of GDP by 2027. The R&D is to support the goals in the 10 point plan, with a couple of more futuristic bets – on direct air capture, and on commercial fusion power through the Spherical Tokamak for Energy Production project.

I think R&D and innovation are enormously important in the move to net zero. We urgently need to develop zero-carbon technologies to make them cheaper and deployable at scale. My own somewhat gloomy view (see this post for more on this: The climate crisis now comes down to raw power) is that, taking a global view incorporating the entirely reasonable aspiration of the majority of the world’s population to enjoy the same high energy lifestyle that is to be found in the developed world, the only way we will effect a transition to a zero-carbon economy across the world is if the zero-carbon technologies are cheaper – without subsidies – than fossil fuel energy. If those cheap, zero-carbon technologies can be developed in the UK, that will make a bigger difference to global carbon budgets than any unilateral action that affects the UK alone.

But there is an important counter-view, expressed cogently by David Edgerton in a recent article: Cummings has left behind a No 10 deluded that Britain could be the next Silicon Valley. Edgerton describes a collective credulity in the government about Britain’s place in the world of innovation, which overstates the UK’s ability to develop these new technologies, and underestimates the degree to which the UK will be dependent on innovations developed elsewhere.

Edgerton is right, of course – the UK’s political and commentating classes have failed to take on board the degree to which the country has, since the 1980s, run down its innovation capacity, particularly in industrial and applied R&D. In energy R&D, according to recent IEA figures, the UK spends about $1.335 billion a year – some 4.3% of the world total, eclipsed by the contributions of the USA, China, the EU and Japan.
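The implied world total follows directly from the two IEA figures quoted above; a minimal back-of-envelope sketch:

```python
# Back-of-envelope: implied world public energy R&D spend,
# derived from the IEA figures quoted in the text.
uk_energy_rd_bn = 1.335  # UK spend, $bn per year
uk_share = 0.043         # UK's share of the world total (4.3%)

world_total_bn = uk_energy_rd_bn / uk_share
print(f"Implied world energy R&D spend: ~${world_total_bn:.0f} bn/yr")
# i.e. a world total of roughly $31 bn a year
```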

Nonetheless, $1.3 billion is not nothing, and in my opinion this figure ought to increase substantially, both in absolute terms and as a fraction of rising public investment in R&D. But the UK will need to focus its efforts on those areas where it has unique advantages; in other areas, international collaboration may be a better way forward.

Where are those areas of unique advantage? One is probably offshore wind, where the UK’s Atlantic location gives it a lot of sea and a lot of wind. The UK currently accounts for about a third of all offshore wind capacity, so it represents a major market. Unfortunately, the UK has allowed a situation to develop in which the prime providers of its offshore wind technology are overseas. The plan suggests more stringent targets for local content, which makes sense, and there is a strong argument that UK industrial strategy should try to ensure that more of the value of the new technology of deepwater floating wind is captured in the UK.

While offshore wind is being deployed at scale right now, fusion remains speculative and futuristic. The government’s strategy is to “double down on our ambition to be the first country in the world to commercialise fusion energy technology”. While I think the barriers to developing commercial fusion power – largely in materials science – remain huge, I do believe the UK should continue to fund it, for a number of reasons. First, there is a possibility that it might actually work, in which case it would be transformative – it’s a long odds bet with a big potential payoff. But why should the UK be the country making the bet? My answer would be that, in this field, the UK is genuinely internationally competitive; it hosts the Joint European Torus, and the sponsoring organisation UKAEA retains a capacity, rare in the UK, for very complex engineering at scale. And even if fusion doesn’t deliver commercial power, the technological spillovers may well be substantial.

The situation in nuclear fission is different. The UK dramatically ran down its research capacity in civil nuclear power, choosing instead to develop a new nuclear build programme on the basis of entirely imported technology. This was initially the French EPR currently being built at Hinkley Point, with another type of pressurised water reactor, from Toshiba, to be built in Cumbria, and a third type, a boiling water reactor from Hitachi, in Anglesey. That hasn’t worked out so well, with only the EPRs now looking likely to be built. The current strategy envisages a reset, with a new programme of light water small modular reactors – that is to say, a technologically conservative PWR designed with an emphasis on driving down its capital cost – followed by work on a next generation fission reactor. These “advanced modular reactors” would be relatively small high temperature reactors. The logic for the UK being the country to develop this technology is that it is the only country that has run an extensive programme of gas cooled reactors, but it would still probably need collaboration with other like-minded countries.

How much emphasis should the UK put on developing electric vehicles, as opposed to simply creating the infrastructure for them and importing the technology? The automotive sector remains an important source of added value for the UK, having made an impressive recovery from its doldrums in the 90s and 00s. Jaguar Land Rover, though owned by the Indian conglomerate Tata, is still essentially a UK based company, and it has an ambitious development programme for electric vehicles. But even with its R&D budget of £1.8 bn a year, it is a relative minnow by world standards (Volkswagen’s R&D budget is €13 bn, and Toyota’s only a little less); for this reason it is developing a partnership with BMW. The government should support the UK industry’s drive to electrify, but care will be needed to identify where UK industry can find the most value in global supply chains.
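The “relative minnow” point can be made concrete with the two R&D figures quoted above. Note the GBP-to-EUR exchange rate below is an illustrative assumption, not a figure from the text:

```python
# Rough ratio of the R&D budgets quoted in the text:
# JLR (GBP 1.8 bn/yr) vs Volkswagen (EUR 13 bn/yr).
jlr_rd_gbp_bn = 1.8
vw_rd_eur_bn = 13.0
gbp_to_eur = 1.15  # assumed exchange rate, for illustration only

jlr_rd_eur_bn = jlr_rd_gbp_bn * gbp_to_eur
ratio = vw_rd_eur_bn / jlr_rd_eur_bn
print(f"VW's R&D budget is roughly {ratio:.0f}x JLR's")
```

On these assumptions the gap is around sixfold, which is the sense in which even a well-funded UK player needs partnerships to compete.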

A “green industrial strategy” is often sold on the basis of the new jobs it will create. It will indeed create more jobs, but this is not necessarily a good thing. If it takes more people, more capital and more money to produce the same level of energy services – houses heated, iron smelted, miles driven in cars and lorries – then that amounts to a loss of productivity across the economy as a whole. This is, of course, justified by the huge costs that burning fossil fuels imposes on the world through climate change, costs which are currently not properly accounted for. But we shouldn’t delude ourselves: we use fossil fuels because they are cheap, convenient and easy to use, and we will miss them – unless we can develop new technologies that supply the same energy services at a lower cost, and that will take innovation. New low carbon energy technologies need to be developed, and existing technologies made cheaper and more effective.

To sum up, the ten point plan is a useful step forward. The contours of a zero-emissions future are starting to emerge, and it is very welcome that the government has overcome its aversion to industrial strategy. But more commitment and more realism are required.