Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular, the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in “High Technology” from post-war Britain, the decision to choose the Advanced Gas Cooled reactor design for the UK’s second generation reactor programme was forced through by “state technocrats, hugely influential scientists and engineers from the technical branches of the civil service”; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US-designed Pressurised Water Reactor – a choice that Henderson argued, in his highly influential article “Two British Errors: Their Probable Size and Some Possible Lessons”, was one of the UK government’s biggest policy errors? To go some way towards answering that, it’s necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy and emitting a handful of extra neutrons, which go on to cause further fissions. The dominant fissile material in today’s civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to “enrich” the uranium, increasing the concentration of U-235, typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons grade uranium is different in degree, not kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.
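To make the “different in degree, not kind” point concrete, here is a minimal sketch of the uranium mass balance in an enrichment plant. The 0.25% tails assay is an illustrative assumption of mine, not a figure from this post:

```python
def natural_feed_per_kg_product(x_product, x_feed=0.0072, x_tails=0.0025):
    """Mass balance on U-235: feed = product + tails, with the U-235 conserved.
    Returns kg of natural uranium feed needed per kg of enriched product."""
    return (x_product - x_tails) / (x_feed - x_tails)

print(f"{natural_feed_per_kg_product(0.045):.0f} kg feed per kg of ~4.5% civil fuel")
print(f"{natural_feed_per_kg_product(0.90):.0f} kg feed per kg of ~90% weapons-grade uranium")
```

On these assumed numbers, a kilogram of typical civil reactor fuel consumes roughly 9 kg of natural uranium, while a kilogram of weapons-grade material consumes roughly 190 kg – the same cascade, simply run further.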

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because of the lower absorption cross-section for fast neutrons, this needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium).

In a conventional (thermal) reactor, it is the moderator that does this slowing down. Moderators need to be made of a light element which doesn’t absorb too many neutrons. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) or deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more than deuterium does, so ordinary water is a less ideal moderator, but it is obviously much cheaper.
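A standard piece of reactor physics makes this comparison concrete: the average logarithmic energy loss per elastic collision depends only on the mass number A of the moderating nucleus, from which one can estimate how many collisions are needed to slow a fission neutron down to thermal energies. A minimal sketch, using the usual textbook figures of ~2 MeV for a fission neutron and 0.025 eV for a thermal one (my numbers, not the post’s):

```python
import math

def xi(A):
    """Average logarithmic energy loss per elastic collision ("lethargy gain")
    for scattering off a nucleus of mass number A."""
    if A == 1:
        return 1.0  # limiting value for hydrogen
    return 1 + ((A - 1) ** 2 / (2 * A)) * math.log((A - 1) / (A + 1))

E_fission, E_thermal = 2.0e6, 0.025  # neutron energies in eV
for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    collisions = math.log(E_fission / E_thermal) / xi(A)
    print(f"{name}: ~{collisions:.0f} collisions to thermalise")
```

Hydrogen needs the fewest collisions (about 18, against roughly 25 for deuterium and 115 for carbon), which is part of why light water cores can be so compact; its drawback, as noted above, is that it also absorbs neutrons.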

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictate, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies are in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors (“light water” in contrast to “heavy water”, in which the normal hydrogen of ordinary water is replaced by hydrogen’s heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient to use to drive a steam turbine to generate electricity. But it’s not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.

These weren’t problems for the original use of pressurised water reactors (PWRs), the most common type of light water reactor. (The other variety, the Boiling Water Reactor, similarly uses light water as both coolant and moderator, the difference being that steam is generated directly in the reactor core rather than in a secondary circuit.) PWRs were designed to power submarines, in a military context where enriched uranium was readily available, and where a compact size is a great advantage. But that compactness underlies the great weakness of light water reactors – their susceptibility to what’s known as a “loss of coolant accident”. The problem is that, if for some reason the flow of cooling water is stopped, even if the chain reaction is quickly shut down (and this isn’t difficult to do), the fuel produces so much heat through its radioactive decay that it can melt the fuel rods, as happened at Three Mile Island. What’s worse, the alloy that the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard-to-make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it’s of sufficiently high purity). But being a solid, it needs a separate coolant. The UK’s Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that formed the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the lower neutron absorption of graphite and carbon dioxide, compared with light water, means that the core is less compact.

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of other nuclear nations, USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history sets a path-dependence to the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since it’s only the minority uranium-235 isotope that is fissile, it’s necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.

As the history of nuclear weapons is usually told, it is the physicists who are given the most prominent role. But there’s an argument that the crucial problems to be overcome were as much ones of chemical engineering as of physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any process has to rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then to use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed “Tube Alloys”. In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature of it is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, arguably it represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI in Runcorn in the 1930’s, and came to fruition with the establishment of a full-scale gaseous diffusion plant in Capenhurst, Cheshire, in the late 40s and early 50s.

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before the formal decision to make a nuclear weapon was taken, in 1947, the infrastructure for the UK’s own nuclear weapons programme had been put in place, shaped by the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK, to make plutonium – the Manhattan Project having highlighted the potential of the plutonium route to a nuclear weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but it was easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239, but also small amounts of another isotope, plutonium-240. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of a nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and a low explosive yield. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the levels of plutonium-240 that can be tolerated in weapons grade plutonium, and these impose constraints on the design of reactors used to produce it.

The two initial UK plutonium production reactors were built at Sellafield – the Windscale Piles. The fuel was natural, unenriched uranium (necessarily, because the uranium enrichment plant at Capenhurst had not yet been built), and this dictated the use of a graphite moderator. The reactors were air-cooled. The first pile started operations in 1951, with the first plutonium produced in early 1952, enabling the UK’s first nuclear weapon test, carried out successfully in October 1952.

But even as the UK’s first atomic bomb was successfully tested, it was clear that the number of weapons the UK’s defence establishment was calling for would demand more plutonium than the Windscale Piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second generation plutonium-producing reactors also producing power. The design would use graphite moderation, as in the Windscale Piles, and natural uranium as a fuel, but rather than being air-cooled, the coolant was high pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It’s important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature that this requirement dictated was the need to be able to remove the fuel rods while the reactor was running; for weapons grade plutonium the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production, this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under the control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons grade plutonium, using longer burn-up times that produced plutonium with higher concentrations of Pu-240. This so-called “civil plutonium” was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of it. Did the civil Magnox reactors produce any weapons grade plutonium? I don’t know, but I believe there is no technical reason that would have prevented it.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron causing a fission is smaller than for a slow neutron, so a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is a good enough coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.
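To put a number on “more additional neutrons”: the usual figure of merit is η, the number of neutrons produced per neutron absorbed in the fuel. Breeding needs η comfortably above 2 – one neutron to sustain the chain reaction, and more than one left over, after inevitable losses, to convert fertile material. The values in this sketch are approximate textbook figures of mine, not numbers taken from this post:

```python
# Approximate values of eta (neutrons produced per neutron absorbed in the fuel).
eta = {
    ("U-235",  "thermal"): 2.07,
    ("Pu-239", "thermal"): 2.11,
    ("Pu-239", "fast"):    2.45,  # rises well above 2 in a fast spectrum
    ("U-233",  "thermal"): 2.29,  # why thorium breeding can work even with slow neutrons
}

for (isotope, spectrum), value in eta.items():
    # Breeding margin: what is left over after one neutron sustains the chain reaction.
    print(f"{isotope} ({spectrum}): eta ~ {value:.2f}, margin above 2 ~ {value - 2:+.2f}")
```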

In the 1940s and 50s, the availability of uranium relative to the demand of weapons programmes was severely limited, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK’s nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, which was completed by 1959. Using this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a 250 MW design electrical output.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for a breeder reactor even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn’t need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950’s, and the UK’s first nuclear powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn’t use UK nuclear technology; instead it was powered by a reactor of US design, a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy was an abrasive and driven figure, Admiral Rickover. Rickover ran the US Navy’s project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940’s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid sodium cooled, beryllium moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the best (and particularly, the most reliable) design, and has been subsequently used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK’s effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK’s Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company to company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government to government agreement. The second was that it was a one-off – Rolls-Royce would have a license to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK’s naval reactor programme, as Rolls-Royce has developed further iterations of the naval PWR design, and the rest of its national nuclear enterprise.

Rickover’s rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower’s 1953 “Atoms for Peace” speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station based on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it launched Westinghouse’s position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. For a submarine, reactors need to be highly compact, self-contained, and able to go for long periods without being refuelled, all of which dictates the use of highly enriched – essentially weapons grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960’s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it’s a profound mistake to suppose that in choosing between different technological approaches to nuclear power, it is simply a question of choosing from a menu of ready-made options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which ones have been neglected. It’s this that establishes what comprises the base of technological capability and underpinning knowledge – both codified and tacit – that will be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we’ve seen, the scope of that system depends not just on a nation’s ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just 4 years were remarkable – but they were achieved at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive clean-up of the nuclear waste left over at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect of this was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three-day week, with the sense of national chaos contributing to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be inexorably rising. For energy importers – and the UK was still importing around half its energy in the early 1970’s – security of energy supplies suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that’s been built already – uranium enrichment plants, reprocessing facilities, and so on. The nature of the stock of knowledge acquired in past R&D programmes will have been determined by the problems that emerged during those programmes, so starting work on a different class of reactor would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise that have been developed in past programmes – whether that is the understanding of reactor physics needed to run reactors efficiently, or the construction and manufacturing techniques needed to build them cheaply and effectively – will be specific to the particular technologies that have been implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the class of power reactor that came to dominate – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a quite different application – powering submarines – from the one in which it ended up being widely implemented, generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, in the UK programmes there were worries about the safety of PWRs. These were particularly forcefully expressed by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was government Chief Scientific Advisor between 1971 and 1974. Perhaps, after Three Mile Island and Fukushima, one might wonder whether these worries were not entirely misplaced.

Later in the programme, while even its proponents might have agreed that the early AGR building programme hadn’t gone well, there was a view that the teething problems had been more or less ironed out. I haven’t managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in Parliament that Torness was on track to be delivered at a budget of £1.1 bn (1980 prices), which is not a great deal different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK nuclear establishment’s preference for the AGR over the PWR was a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK’s nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear powered future. But that future was foreclosed by the final run-down of the UK’s nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK’s faltering post 2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but it will also need to take in the historical context – and in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? On the contrary, projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running; they produce about 60 TWh of non-intermittent, low carbon power; in 2019 their output was equal in scale to the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme as it appears more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison, perhaps, would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike caused by the Ukraine invasion – £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power changed dramatically. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. There was a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, which led to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to this kind of accident. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, and at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, in which instead of building AGRs the UK had built light water reactors, was based on an estimated cost for light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
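As a rough check on how Henderson’s counterfactual figure relates to the £800m quoted above: the ×5 price-level multiplier between 1973 and 1987 is my own back-of-envelope assumption, chosen simply to show the arithmetic, not a figure from the article.

```python
capacity_kw = 1.2e6            # Sizewell B: 1.2 GW
cost_per_kw_1973 = 132         # Henderson's assumed LWR cost, GBP per kW at 1973 prices
inflation_1973_to_1987 = 5     # assumed rough UK price-level multiplier over the period

cost_1973 = capacity_kw * cost_per_kw_1973        # ~ GBP 158m at 1973 prices
cost_1987 = cost_1973 * inflation_1973_to_1987    # ~ GBP 0.8bn at 1987 prices
print(f"~GBP {cost_1973/1e6:.0f}m (1973 prices) -> ~GBP {cost_1987/1e9:.1f}bn (1987 prices), "
      f"against ~GBP 2bn actually spent")
```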

Sizewell B was a first-of-a-kind reactor, so one would expect subsequent reactors built to the same design to come down in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last-of-a-kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
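A toy levelised-cost calculation makes this capital-heavy versus fuel-heavy distinction explicit, and shows why the cost of capital matters so much for nuclear. Every number below – capital costs, lifetimes, load factors, fuel and running costs – is an illustrative assumption of mine, not a figure from Henderson or from this post; the point is only how the comparison flips with the discount rate:

```python
def lcoe(capex_per_kw, life_years, rate, capacity_factor, fuel_and_om_per_mwh):
    """Very simplified levelised cost of electricity in GBP/MWh:
    annualised capital cost per MWh generated, plus fuel and running costs."""
    crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
    mwh_per_kw_per_year = 8.76 * capacity_factor  # 8760 hours/year, expressed in MWh per kW
    return crf * capex_per_kw / mwh_per_kw_per_year + fuel_and_om_per_mwh

for rate in (0.03, 0.10):
    nuclear = lcoe(capex_per_kw=5000, life_years=60, rate=rate,
                   capacity_factor=0.9, fuel_and_om_per_mwh=25)
    gas = lcoe(capex_per_kw=700, life_years=25, rate=rate,
               capacity_factor=0.6, fuel_and_om_per_mwh=60)
    print(f"discount rate {rate:.0%}: nuclear ~GBP {nuclear:.0f}/MWh, gas ~GBP {gas:.0f}/MWh")
```

With cheap capital the fuel-frugal nuclear plant wins; with expensive capital the gas plant does – essentially the combination of low fuel prices and high interest rates described later in this post.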

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also rising steadily. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp-up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power, held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990’s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade natural gas’s share of electricity generation rose from 1.3% to nearly 30% in 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquefied natural gas carried by tanker from huge fields in, for example, the Middle East. For a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike that began in 2021, and was intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

Deep decarbonisation is still a huge challenge

In 2019 I wrote a blogpost called The challenge of deep decarbonisation, stressing the scale of the economic and technological transition implied by a transition to net zero by 2050. I think the piece bears re-reading, but I wanted to update the numbers to see how much progress we had made in 4 years (the piece used the statistics for 2018; the most up-to-date current figures are for 2022). Of course, in the intervening four years we have had a pandemic and global energy price spike.

The headline figure is that the fossil fuel share of our primary consumption has fallen, but not by much. In 2018, 79.8% of our energy came from oil, gas and coal. In 2022, this share was 77.8%.

There is good news – if we look solely at electrical power generation, generation from hydro, wind and solar was up 32% 2018-2022, from 75 TWh to 99 TWh. Now 30.5% of our electricity production comes from renewables (excluding biomass, which I will come to later).

The less good news is that electrical power generation from nuclear is down 27%, from 65 TWh to 48 TWh, and this now represents just 14.7% of our electricity production. The increase in wind & solar is a real achievement – but it is largely offset by the decline in nuclear power production. This is the entirely predictable result of the AGR fleet reaching the end of its life, and the slow-motion debacle of the new nuclear build program.

The UK had 5.9 GW of nominal nuclear generation capacity in 2022. Of this, all but Sizewell B (1.2 GW) will close by 2030. In the early 2010’s, 17 GW of new nuclear capacity was planned – with the potential to produce more than 140 TWh per year. But, of these ambitious plans, the only project that is currently proceeding is Hinkley Point, late and over budget. The best we can hope for is that in 2030 we’ll have Hinkley’s 3.2 GW, which together with Sizewell B’s continuing operation could produce at best 38 TWh a year.
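For reference, the conversion from installed capacity to annual output behind that “at best 38 TWh” runs as follows (the 100% capacity factor is the theoretical ceiling; real reactors run somewhat below it):

```python
hours_per_year = 8760
capacity_gw = 3.2 + 1.2   # Hinkley Point C plus Sizewell B
max_output_twh = capacity_gw * hours_per_year / 1000  # GWh -> TWh, at 100% capacity factor
print(f"~{max_output_twh:.1f} TWh/year as an upper bound")  # ~38.5 TWh
```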

In 2022, another 36 TWh of electrical power – 11% – came from thermal renewables, largely burning imported wood chips. This supports a claim that more than half (56%) of our electricity is currently low carbon. It’s not clear, though, that imported biomass is truly sustainable or scalable.

It’s easy to focus on electrical power generation. But – and this can’t be stressed too much – most of the energy we use is in the form of directly burnt gas (to heat our homes) and oil (to propel our cars and lorries).

The total primary energy we used in 2022 was 2055 TWh; and of this 1600 TWh was oil, gas and coal. 280 TWh (mostly gas) was converted into electricity (to produce 133 TWh of electricity), and 60 TWh’s worth of fossil fuel (mostly oil) was diverted into non-energy uses – mostly feedstocks for the petrochemical industry – leaving 1260 TWh to be directly burnt.

To achieve our net-zero target, we need to stop burning gas and oil, and instead use electricity. This implies a considerable increase in the amount of electricity we generate – and this increase all needs to come from low-carbon sources. There is good news, though: electricity can be converted into useful work much more efficiently than the heat from burning fuels, which is subject to the Carnot limits set by the second law of thermodynamics. So the increase in electrical generation can, in principle, be a lot less than this 1260 TWh per year.

Projecting energy demand into the future is uncertain. On the one hand, we can rely on continuing improvements in energy efficiency from incremental technological advances; on the other, new demands on electrical power are likely to emerge (the huge energy hunger of the data centres needed to implement artificial intelligence being one example). To illustrate the scale of the problem, let’s consider the orders of magnitude involved in converting the current major uses of directly burnt fossil fuels to electrical power.

In 2022, 554 TWh of oil were used, in the form of petrol and diesel, to propel our cars and lorries. We do use some electricity directly for transport – currently just 8.4 TWh. A little of this is for trains (and, of course, we should long ago have electrified all intercity and suburban lines), but the biggest growth is for battery electric vehicles. Internal combustion engines are heat engines, whose efficiency is limited by the Carnot bound, whereas electric motors can in principle convert almost all of the electrical energy they draw into useful work. Very roughly, to replace the energy demands of current cars and lorries with electric vehicles would need another 165 TWh/year of electrical power.
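A back-of-envelope estimate of roughly that size can be reproduced as follows; the two efficiency figures are my own illustrative assumptions, not numbers from the post:

```python
oil_for_transport_twh = 554   # petrol and diesel burnt in cars and lorries, 2022
ice_efficiency = 0.27         # assumed tank-to-wheel efficiency of a typical combustion engine
ev_efficiency = 0.90          # assumed battery-to-wheel efficiency of an electric vehicle

useful_work_twh = oil_for_transport_twh * ice_efficiency
electricity_needed_twh = useful_work_twh / ev_efficiency
print(f"~{electricity_needed_twh:.0f} TWh/year of extra electricity for road transport")
```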

The other major application of directly burnt fossil fuels is for heating houses and offices. This used 334 TWh/year in 2022, mostly in the form of natural gas. It’s increasingly clear that the most effective way of decarbonising this sector is through the installation of heat pumps. A heat pump is essentially a refrigerator run backwards, cooling the outside air or ground, and heating up the interior. Here the second law of thermodynamics is on our side; one ends up with more heat out than energy put in, because rather than directly converting electricity into heat, one is using it to move heat from one place to another.

Using a reasonable guess for the attainable, seasonally adjusted “coefficient of performance” for heat pumps, one might be able to achieve the same heating effect as we currently get from gas boilers with another 100 TWh of low carbon electricity. This figure could be substantially reduced if we had a serious programme of insulating old houses and commercial buildings, and were serious about imposing modern energy efficiency standards for new ones.
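One such guess, spelled out; the boiler efficiency and the seasonal coefficient of performance are my assumed values, not figures from the post:

```python
gas_for_heating_twh = 334   # fuel burnt directly for heating buildings, 2022 (mostly natural gas)
boiler_efficiency = 0.90    # assumed efficiency of a modern gas boiler
seasonal_cop = 3.0          # assumed seasonally averaged coefficient of performance of a heat pump

heat_delivered_twh = gas_for_heating_twh * boiler_efficiency
electricity_needed_twh = heat_delivered_twh / seasonal_cop
print(f"~{electricity_needed_twh:.0f} TWh/year of extra electricity for heating")
```

Together with the ~165 TWh/year estimated above for road transport, and some further electrification of industry, this is broadly consistent with the rough doubling of electricity generation described in the next paragraph.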

So, as an order of magnitude, we probably need to roughly double our current electricity generation, from its current value of 320 TWh/year to more than 600 TWh/year. This will take big increases in generation from wind and solar, currently running at around 100 TWh/year. In addition to intermittent renewables, we need a significant fraction of firm power, which can always be relied on, whatever the state of wind and sunshine. Nuclear would be my favoured source for this, so that would need a big increase from the 40 TWh/year we’ll have in place by 2030. The alternative would be to continue to generate electricity from gas, but to capture and store the carbon dioxide produced. For why I think this is less desirable for power generation (though possibly necessary for some industrial processes), see my earlier piece: Carbon Capture and Storage: technically possible, but politically and economically a bad idea.

Industrial uses of energy, which currently amount to 266 TWh, are a mix of gas, electricity and some oil. Some of these applications (e.g. making cement and fertiliser) are going to be rather hard to electrify, so, in addition to requiring carbon capture and storage, this may provide a demand for hydrogen, produced from renewable electricity, or conceivably process heat from high temperature nuclear reactors.

It’s also important to remember that a true reckoning of our national contribution to climate change would include taking account of the carbon dioxide produced in the goods and commodities we import, and our share of air travel. This is very significant, though hard to quantify – in my 2019 piece, I estimated that this could add as much as 60% to our personal carbon budget.

To conclude, we know what we have to do:

  • Electrify everything we can (heat pumps for houses, electric cars), and reduce demand where possible (especially by insulating houses and offices);
  • Use green hydrogen for energy intensive industry & hard to electrify sectors;
  • Hugely increase zero carbon electrical generation, through a mix of wind, solar and nuclear.

In each case, we’re going to need innovation, focused on reducing cost and increasing scale.

There’s a long way to go!

All figures are taken from the UK Government’s Digest of UK Energy Statistics, with some simplification and rounding.

2022 Books roundup

2022 was a thoroughly depressing year; here are some of the books I’ve read that have helped me (I hope) to put last year’s world events in some kind of context.

Helen Thompson could not have been luckier – or, perhaps, more farsighted – in the timing of her book’s release. Disorder: hard times in the 21st century is a survey of the continuing influence of fossil fuel energy on geopolitics, so it couldn’t be more timely, given the impact of Russia’s invasion of Ukraine on natural gas and oil supplies to Western Europe and beyond. The importance of securing national energy supplies runs through the history of the world in the 20th century, in both peace and war; we continue to see examples of the deeply grubby political entanglements that the need for oil has drawn Western powers into. All this, by the way, provides a strong secondary argument, beyond climate change, for accelerating the transition to low carbon energy sources.

The presence of large reserves of oil in a country isn’t an unmixed blessing – we’re growing more familiar with the idea of a “resource curse”, blighting both the politics and the long term economic prospects of countries whose economies depend on exploiting natural resources. Alexander Etkind’s book Nature’s Evil: a cultural history of natural resources is a deep history of how the materials we rely on shape political economies. It has a Eurasian perspective that is very timely, but less familiar to me, and takes the idea of a resource curse much further back in time, covering furs and peat as well as the more familiar story of oil.

With more attention starting to focus on the world’s other potential geopolitical flashpoint – the Taiwan Straits – Chris Miller’s Chip War: the fight for the world’s most critical technology is a great explanation of why Taiwan, through the semiconductor company TSMC, came to be so central to the world’s economy. This book – which has rightly won glowing reviews – is a history of the ubiquitous chip – the silicon integrated circuits that make up the memory and microprocessor chips at the heart of computers, mobile phones – and, increasingly, all kinds of other durable goods, including cars. The focus of the book is on business history, but it doesn’t shy away from the crucial technical details – the manufacturing processes and the tools that enable them, notably the development of extreme UV lithography and the rise of the Dutch company ASML. Excellent though the book is, its business focus did make me reflect that (as far as I’m aware) there’s a huge gap in the market for a popular science book explaining how these remarkable technologies all work – and perhaps speculating on what might come next.

Slouching Towards Utopia: an economic history of the 20th century, by Brad DeLong, is an elegy for a period of unparalleled technological advance and economic growth that seems, in the last decade, to have come to an end. For DeLong, it was the development of the industrial R&D laboratory towards the end of the 19th century that launched a long century, from 1870 to 2010, of unprecedented growth in material prosperity. The focus is on political economy, rather than the material and technological basis of growth (for the latter, Vaclav Smil’s pair of books Creating the Twentieth Century and Transforming the Twentieth Century are essential). But there is a welcome focus on the material substrate of information and communication technology rather than the more visible world of software (in contrast, for example, to Robert Gordon’s book The Rise and Fall of American Growth, which I reviewed rather critically here).

Though I am very sympathetic to many of the arguments in the book, ultimately it left me somewhat disappointed. Having rightly stressed the importance of industrial R&D as the driver of technological change, DeLong doesn’t really develop the theme, and there is little discussion of the changing institutional landscape of innovation around the world. I also wish the book had had a more rigorous editor – the prose lapses on occasion into self-indulgence, and the book would have been better had it been a third shorter.

In contrast, Vaclav Smil’s latest book – How the World Really Works: A Scientist’s Guide to Our Past, Present and Future – clearly had an excellent editor. It’s a very compelling summary of a couple of decades of Smil’s prolific output. It’s not a boast about my own learning to say that I knew pretty much everything in this book before I read it; simply a consequence of having read so many of Smil’s previous, more academic books. The core of Smil’s argument is to stress, through quantification, how much we depend on fossil fuels, for energy, for food (through the Haber-Bosch process), and for the basic materials that underlie our world – ammonia, plastics, concrete and steel. These chapters are great, forceful, data-heavy and succinct, though the chapter on risk is less convincing.

Despite the editor, Smil’s own voice comes through strongly, sceptical, occasionally curmudgeonly, laying out the facts, but prone to occasional outbreaks of scathing judgement (he really dislikes SUVs!). Perhaps he overdoes the pessimism about the speed with which new technology can be introduced, but his message about the scale and the wrenching impact of the transition we need to go through, to move away from our fossil fuel economy, is a vital one.

From self-stratifying films to levelling up: A random walk through polymer physics and science policy

After more than two and a half years at the University of Manchester, last week I finally got round to giving an in-person inaugural lecture, which is now available to watch on Youtube. The abstract follows:

How could you make a paint-on solar cell? How could you propel a nanobot? Should the public worry about the world being consumed by “grey goo”, as portrayed by the most futuristic visions of nanotechnology? Is the highly unbalanced regional economy of the UK connected to the very uneven distribution of government R&D funding?

In this lecture I will attempt to draw together some themes both from my career as an experimental polymer physicist, and from my attempts to influence national science and innovation policy. From polymer physics, I’ll discuss the way phase separation in thin polymer films is affected by the presence of surfaces and interfaces, and how in some circumstances this can result in films that “self-stratify” – spontaneously separating into two layers, a favourable morphology for an organic solar cell. I’ll recall the public controversies around nanotechnology in the 2000s. There were some interesting scientific misconceptions underlying these debates, and addressing these suggested some new scientific directions, such as the discovery of new mechanisms for self-propelling nano- and micro- scale particles in fluids. Finally, I will cover some issues around the economics of innovation and the UK’s current problems of stagnant productivity and regional inequality, reflecting on my experience as a scientist attempting to influence national political debates.

Lessons from the gas price spike

On April 1st this year, the average UK household will see its annual energy bill rise from £1,277 to around £2,000, according to the Resolution Foundation. After 10 years of stagnant wages – itself a result of the ongoing slowdown in productivity growth – there’s a clamour for some kind of short term fix for a potential political crisis, made worse by a forthcoming tax rise. Even more ominously, an unfolding geopolitical crisis over a conflict between Russia and Ukraine may interact with this energy crisis in a potentially far-reaching way, as we shall see.


UK gas and electricity spot prices (monthly rolling average of “day-ahead” prices). Data: OFGEM

My first plot shows the scale of the crisis: the wholesale spot prices of gas and electricity since 2010. I don’t want to dwell here on the dysfunctional features of the UK’s retail energy market that have led to the failure of a number of suppliers, or to look at the short-term issues that have exacerbated the current supply squeeze. Instead, it’s worth looking at the longer term implications of this episode of market disruption for the UK’s energy security, and trying to understand how we have been led to this state by global changes in energy markets and UK policy decisions over decades.

Natural gas matters existentially for the UK’s economy, because 40% of the UK’s demand for energy is met by gas, and without sufficient supplies of energy, a modern economy and society cannot function. The price of electricity is strongly coupled to the price of gas, because 34% of our electricity (in 2020) was generated in gas-fired power stations, compared to 15% from nuclear and 23% from wind. But generating electricity only accounts for 29% of our total demand for gas. The biggest fraction – 37% – is used for heating our houses, and another 12% is burnt directly in industry, to make fertiliser and cement, and in many other processes.

To understand why the wholesale price of gas matters so much, we need to understand a couple of ways in which the UK’s energy landscape has changed in the last twenty years. The first – the UK’s own balance between production and consumption – is shown in the next plot. Since 2004, the UK has gone from being self-sufficient in gas to being a substantial importer. Production of North Sea gas – like North Sea oil – peaked in the early 2000s, and has since rapidly dropped off, as the gas fields most easily and cheaply exploited have been exhausted.


Gas production and consumption in the UK. Data: Digest of UK Energy Statistics 2021, table 4.1.

The second consideration is the nature of the international gas market. A few decades ago, natural gas was a commodity that was used close to where it was produced – it could not be traded globally. But since then an infrastructure has been developed to transport natural gas over long distances; a network of intercontinental pipelines has been built, so gas produced, for example, in Arctic Siberia can be transported to markets in Western Europe. And the technology for shipping liquefied natural gas in bulk has been developed, allowing gas from the huge fields in Qatar and Australia, and from the USA’s shale gas industry, to be taken to terminals across the world. This means that a worldwide gas market has developed, tending to equalise prices across the world. A liquefied natural gas tanker can leave Qatar, the USA or Australia and choose to take its cargo to wherever the price it can fetch is highest.

The upshot of the UK’s dependence on gas imports is that the prices UK households and industry have to pay for energy reflect supply and demand on a global scale. My next plot shows how global demand has changed over the last couple of decades. The UK’s demand has held steady – the UK’s “dash for gas” represented an early energy transition from extensive use of coal to natural gas. This was a positive change that has reduced the UK’s emissions of greenhouse gases. Now other countries are following in the UK’s footsteps – again, a positive development for overall world greenhouse gas emissions, but one putting huge upward pressure on gas supplies. This underlines that the UK is a minor player in world gas markets; its consumption accounts for about 2% of world demand.


World gas consumption by continent, together with China and UK. Data: US Energy Information Administration

Where is this gas coming from? The largest net exporter, as shown in my next plot, is Russia. There’s an ominous echo of the 1970’s and its linked energy, economic and political crises, as dominant energy suppliers realise that withholding energy exports can be a powerful weapon in geopolitical conflicts. As it happens, the UK’s gas imports come primarily from Norway, by pipeline, and Qatar, through LNG imports by ship. But this doesn’t mean that the UK won’t be affected if Russia chooses to exert pressure on Europe by throttling back gas exports. There’s a global market – if Russia cuts off supplies to Germany and Central Europe, Germany will seek to replace that by buying gas from Norway and on the world LNG market, and the prices the UK has to pay will rocket.


Top gas net exporters (i.e. exports less imports). Data: US Energy Information Administration

What should the UK do about this energy crisis?

We can discount straight away the suggestion made by veteran Thatcherite and Eurosceptic MP, Sir John Redwood, that the UK should simply produce more gas of its own. The UK is a small-scale participant in a global market. Even doubling its gas production would make no impact on the global balance of supply and demand, so prices would be unaffected. It’s true that if the gas was produced by a government-owned organisation, the rent – the difference between the market price and cost of production – would be captured by the UK state rather than having to be handed over to the governments of major exporters like Qatar, Norway and Russia. But British Gas was privatised in 1986.

The reason the UK ran down its production was that governments in the 1980’s made a conscious decision that energy should be left to the market, and the market said that it was cheaper to import gas than to produce it from the North Sea (and even more so than to develop a fracking industry in Sussex and the rural Pennines). One can’t help getting the impression that UK politicians like John Redwood are in revolt against the consequences of the national economic settlement that they themselves created.

In fact, there is nothing fundamental the UK can do now apart from strengthen the social safety net for the poorest households, accepting the pressure to increase taxes this leads to. Less politically visible, but nonetheless important, is the pressure high gas costs will put on energy-using industries. The reality is that, as a net importer of energy, higher gas prices inevitably lead to a real loss of national income. Energy infrastructures take many years to build, so all we can do now is look back at the things the UK should have done a decade ago, and learn from those mistakes so that we are in a better position a decade on from now.

What the UK should have done is to reduce the demand for gas through an aggressive pursuit of energy efficiency measures, and to increase the diversity of its energy sources by accelerating the development of other forms of (low-carbon) electricity generation. It failed on both fronts.

In 2013, the Coalition government reduced spending on energy efficiency measures as part of a campaign to “cut the green crap”; the result was a precipitous drop in measures such as cavity wall insulation and loft insulation. In 2015, the zero-carbon homes standard was scrapped, with the result that new housing was built to lower standards of energy efficiency. Recall that 37% of the UK’s gas demand is for domestic heating, so the UK’s poor standards of home energy efficiency translate directly into increased demand – and, with the current high prices, higher bills for consumers. “Cutting the green crap” turned out to be a costly mistake.

It is true that the UK has brought on-stream a significant amount of offshore wind capacity. However, too much of this capacity has been offset by the decline of the UK’s existing nuclear fleet, now approaching the end of its life. The UK government has committed to a programme of nuclear new build, but this programme has stalled. In 2013, I wrote that the nuclear new build programme was “too expensive, too late”, and everything that has happened since has borne that diagnosis out.

There’s a more general lesson to learn from the current gas price spike. For some decades, the fundamental underpinning of the UK’s energy policy has been that the market should be left to find the cheapest way of delivering the energy the nation needs. In the last decade, the government has intervened extensively in that market to promote one policy objective or another. We’ve seen contracts for difference, capacity markets, renewable obligation certificates – the purity of a free market has long since been left behind. But there’s still an underlying assumption that someone will be running a spreadsheet to calculate a net present value for any new energy investment.

Cost discipline does matter, but it’s important to recognise that these calculations, for investments that will be generating income for multiple decades, rest on projections of market conditions running many years in the future. But what this current episode should tell us is that the future course of energy markets is beset by what the economists call “Knightian uncertainty”. On the reliability of predictions of future energy prices, the lesson of the past, reinforced by what’s happening to gas prices now, is that no-one knows anything.
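To make that concrete, here is a minimal net-present-value sketch (all numbers are invented for illustration – this is not a model of any real project); the point is simply that a plant with a thirty-year life can swing from clearly unviable to comfortably profitable on nothing more than the assumed future energy price.

```python
# Minimal NPV sketch. All numbers are invented for illustration, to show how
# sensitive the result is to assumptions about future prices.
def npv(cashflows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) back to today."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

# A plant costing 1000 today, earning revenue for 30 years that depends
# entirely on an assumed future energy price:
for assumed_annual_revenue in (60, 80, 100):
    cashflows = [-1000] + [assumed_annual_revenue] * 30
    print(assumed_annual_revenue, round(npv(cashflows, 0.07)))
# Output: roughly -255, -7 and +241 - plausible price assumptions flip the
# investment from a clear loss to a healthy profit.
```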

Energy can’t be left to the market, because the future state of the market is unknowable – but the need for energy is an inescapable ingredient of a modern economy and society. For something that is so important, building resilience into the system may be more important than maximising some notional net present value whose calculation depends on guesses about the state of the world over decades. This is even more true when we factor in the externalities imposed by the effect of fossil fuels on climate change, whose cost and impact remains so uncertain. To be more positive, there are uncertainties on the upside – the reductions in cost that an aggressive programme of low carbon research, development and deployment-driven innovation could bring. Rather than relying entirely on market forces, we have to design a resilient zero carbon energy system and get on with building it out.

Fighting Climate Change with Food Science

The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat generates substantial greenhouse gas emissions, and a disproportionate share of these emissions comes from eating the meat of ruminants like cattle and sheep.

According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the total. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20% by 2050, as part of the trajectory towards net zero greenhouse gas emissions, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, complete global adoption of entirely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
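As a rough check on the scale of those numbers, here is a back-of-envelope calculation; the population and total-emissions figures are my own round-number assumptions, not taken from the study.

```python
# Back-of-envelope check on the US food emissions figures quoted above.
us_population = 330e6                 # assumed round number
food_kg_co2e_per_person_per_day = 5   # figure quoted from the study
us_total_emissions_mt = 6500          # assumed total US GHG emissions, Mt CO2e/year

food_total_mt = us_population * food_kg_co2e_per_person_per_day * 365 / 1e9
print(round(food_total_mt))           # ~600 Mt CO2e/year from the food system
print(round(200 / us_total_emissions_mt * 100, 1))  # the ~200 Mt saving is ~3% of the total
```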

The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the current, relatively low-tech meat substitutes than from the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat does provide an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, whether it eats meat at all), when families eat or don’t eat particular meats – all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and the national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

So how is it realistic to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – this was a telling, but not entirely successful attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it was just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.

But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates in the flour and sugar that are used for barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics are developed in the animal reflecting the life it leads before slaughter, and are developed further after butchering, storage and cooking.

In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need for the right growth medium for the cells, providing an energy source, other nutrients, and the growth factors that simulate the chemical communication between cells in a whole organism.

In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal-free product. Serum-free growth media have been developed, but they too are costly; optimising them, scaling them up and bringing their cost down are key barriers to making “cultured meat” viable.

The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than to make a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce minced meat for burgers than whole cuts.

I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to, and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is strongly influenced by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way from ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat than a vaguely meaty mush.

I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant derived materials – proteins, fats, gels with different viscoelastic properties to simulate connective tissue, and appropriate liquid matrices, devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

At the moment, attempting something like this, we have start-ups like Impossible Foods and Beyond Meat, with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding and distribution, and their deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering needed to make soft matter systems with complex heterogeneous structures at scale, often by non-equilibrium self-assembly processes.

Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling. The strong recent rise in veganism and vegetarianism creates a large and growing market. But it does need public investment, because I don’t think intellectual property in this area will be easy to defend; for this reason, large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

[1]. See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.

Measuring up the UK Government’s ten-point plan for a green industrial revolution

Last week saw a major series of announcements from the government about how they intend to set the UK on the path to net zero greenhouse gas emissions. The plans were trailed in an article (£) by the Prime Minister in the Financial Times, with a full document published the next day – The ten point plan for a green industrial revolution. “We will use Britain’s powers of invention to repair the pandemic’s damage and fight climate change”, the PM says, framing the intervention as an innovation-driven industrial strategy for post-covid recovery. The proposals are patchy, and insufficient by themselves – but we should still welcome them as beginning to recognise the scale of the challenge. There is a welcome understanding that decarbonising the power sector is not enough by itself: emissions from transport, industry and domestic heating are all recognised as important, and there is a nod to the potential for land-use changes to play a significant role. The new timescale for the phase-out of petrol and diesel cars is really significant, if it can be made to stick. So although I don’t think the measures yet go far enough or fast enough, one can start to see the outline of what a zero-emission economy might look like.

In outline, the emerging picture seems to be of a power sector dominated by offshore wind, with firm power provided either by nuclear or fossil fuels with carbon capture and storage. Large scale energy storage isn’t mentioned much, though possibly hydrogen could play a role there. Vehicles will predominantly be electrified, and hydrogen will have a role for hard to decarbonise industry, and possibly domestic heating. Some hope is attached to the prospect for more futuristic technologies, including fusion and direct air capture.

To move on to the ten points, we start with a reassertion of the Manifesto commitment to achieve 40 GW of offshore wind installed by 2030. How much is this? At a load factor of 40%, this would produce 140 TWh a year; for comparison, in 2019, we used a total of 346 TWh of electricity. Even though this falls a long way short of what’s needed to decarbonise power, a build out of offshore wind on this scale will be demanding – it’s a more than four-fold increase on the 2019 capacity. We won’t be able to expand the capacity of offshore wind indefinitely using current technology – ultimately we will run out of suitable shallow water sites. For this reason, the announcement of a push for floating wind, with a 1 GW capacity target, is important.
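The arithmetic behind the 40 GW figure is simple enough to check; the 40% load factor is the assumption doing the work here.

```python
# Annual energy from the 40 GW offshore wind target, at an assumed 40% load factor.
capacity_gw = 40
load_factor = 0.40          # assumed average capacity factor for offshore wind
hours_per_year = 8760

energy_twh = capacity_gw * load_factor * hours_per_year / 1000  # GWh -> TWh
print(round(energy_twh))    # ~140 TWh/year, against ~346 TWh of UK electricity use in 2019
```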

On hydrogen, the government is clearly keen, with the PM saying “we will turn water into energy with up to £500m of investment in hydrogen”. Of course, even this government’s majority of 80 isn’t enough to repeal the laws of thermodynamics; hydrogen can only be an energy store or vector. As I’ve discussed in an earlier post (The role of hydrogen in reaching net zero), hydrogen could have an important role in a low carbon energy system, but one needs to be clear about how the hydrogen is made in a zero-carbon way, and how it is used, and this plan doesn’t yet provide that clarity.

The document suggests the first use will be in a natural gas blend for domestic heating, with a hint that it could be used in energy intensive industry clusters. The commitment is to create 5 GW of low carbon hydrogen production capacity by 2030. Is this a lot? Current hydrogen production amounts to 3 GW (27 TWh/year), used in industry and (especially) for making fertiliser, though none of this is low carbon hydrogen – it is made from natural gas by steam methane reforming. So this commitment could amount to building another steam methane reforming plant and capturing the carbon dioxide – this might be helpful for decarbonising industry, on Deeside or Teesside perhaps. To give a sense of scale, total natural gas consumption in industry and homes (not counting electricity generation) equates to 58 GW (512 TWh/year), so this is no more than a pilot. In the longer term, making hydrogen by electrolysis and/or with process heat from high temperature fission is more likely to be the scalable and cost-effective solution, and it is good that Sheffield’s excellent ITM Power gets a namecheck.
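The conversions between a continuous rate in GW and annual energy in TWh used here are worth making explicit; the small differences from the quoted figures are just rounding.

```python
# Convert a continuous rate in GW to annual energy in TWh.
HOURS_PER_YEAR = 8760

def gw_to_twh_per_year(gw):
    return gw * HOURS_PER_YEAR / 1000

print(round(gw_to_twh_per_year(3)))   # ~26 TWh/yr: current hydrogen production (quoted as 27)
print(round(gw_to_twh_per_year(5)))   # ~44 TWh/yr: the 5 GW low carbon hydrogen target
print(round(gw_to_twh_per_year(58)))  # ~508 TWh/yr: gas burnt in industry and homes (quoted as 512)
```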

On nuclear power, the paper does lay out a strategy, but is light on the details of how this will be executed. For more detail on what I think has gone wrong with the UK’s nuclear strategy, and what I think should be done, see my earlier blogpost: Rebooting the UK’s nuclear new build programme. The plan here seems to be for one last heave on the UK’s troubled programme of large scale nuclear new build, followed up by a possible programme implementing a light water small modular reactor, with research on a new generation of small, high temperature, fourth generation reactors – advanced modular reactors (AMRs). There is a timeline – large-scale deployment of small modular reactors in the 2030’s, together with a demonstrator AMR around the same timescale. I think this would be realistic if there was a wholehearted push to make it happen, but all that is promised here is a research programme, at the level of £215 m for SMRs and £170m for AMRs, together with some money for developing the regulatory and supply chain aspects. This keeps the programme alive, but hardly supercharges it. The government must come up with the financial commitments needed to start building.

The most far-reaching announcement here is in the transport section – a ban on sales of new diesel and petrol cars after 2030, with hybrids permitted until 2035, after which only fully battery electric vehicles will be on sale. This is a big deal – a major effort will be required to create the charging infrastructure (£1.3 bn is ear-marked for this), and there will need to be potentially unpopular decisions on tax or road charging to replace the revenue from fuel tax. For heavy goods vehicles the suggestion is that we’ll have hydrogen vehicles, but all that is promised is R&D.

For public transport the solutions are fairly obvious – zero-emission buses, bikes and trains – but there is a frustrating lack of targets here. Sometimes old technologies are the best – there should be a commitment to electrify all inter-city and suburban lines as fast as feasible, rather than the rather vague statement that “we will further electrify regional and other rail routes”.

In transport, though, it’s aviation that is the most intractable problem. Three intercontinental trips a year can double an individual’s carbon footprint, but it is very difficult to see how one can do without the energy density of aviation fuel for long-distance flight. The solutions offered look pretty unconvincing to me – “we are investing £15 million into FlyZero – a 12-month study, delivered through the Aerospace Technology Institute (ATI), into the strategic, technical and commercial issues in designing and developing zero-emission aircraft that could enter service in 2030.” Maybe it will be possible to develop an electric aircraft for short-haul flights, but it seems to me that the only way of making long-distance flying zero-carbon is by making synthetic fuels from zero-carbon hydrogen and carbon dioxide from direct air capture.
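A back-of-envelope calculation shows why flying looms so large; every number here is my own illustrative assumption, not an official figure.

```python
# Rough estimate of the footprint of frequent long-haul flying (illustrative assumptions only).
return_trip_km = 20_000            # assumed round-trip distance for an intercontinental trip
kg_co2e_per_passenger_km = 0.13    # assumed long-haul factor, including non-CO2 effects
trips_per_year = 3

flying_tonnes = return_trip_km * kg_co2e_per_passenger_km * trips_per_year / 1000
typical_uk_footprint_tonnes = 8    # assumed order of magnitude for a UK resident

print(round(flying_tonnes, 1))                                # ~7.8 tonnes CO2e a year from flying
print(round(flying_tonnes / typical_uk_footprint_tonnes, 1))  # roughly doubling a typical footprint
```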

It’s good to see the attention on the need for greener buildings, but here the government is hampered by indecision – will the future of domestic heating be hydrogen boilers or electric powered heat pumps? The strategy seems to be to back both horses. But arguably, even more important than the way buildings are heated is to make sure they are as energy-efficient as possible in the first place, and here the government needs to get a grip on the mess that is our current building regulation regime. As the Climate Change Committee says, “making a new home genuinely zero-carbon at the outset is around five times cheaper than retrofitting it later” – the housing people will be living in in 2050 is being built today, so there is no excuse for not ensuring the new houses we need now – not least in the neglected social housing sector – are built to the highest energy efficiency standards.

Carbon capture, usage and storage is the 8th of our 10 points, and there is a commendable willingness to accelerate this long-stalled programme. The goal here is “to capture 10Mt of carbon dioxide a year by 2030”, but without a great deal of clarity about what this is for. The suggestion that the clusters will be in the North East, the Humber, the North West, and in Scotland and Wales points to a goal of decarbonising energy intensive sectors, which in my view is the best use of this problematic technology (see my blogpost: Carbon Capture and Storage: technically possible, but politically and economically a bad idea). What’s the scale proposed here – is 10 Mt of carbon dioxide a year a lot or a little? Compared to the UK’s total CO2 emissions – 350 Mt in 2019 – it isn’t much, but on the other hand it is roughly in line with the total emissions of the UK’s iron and steel industry, so as an intervention to reduce the carbon intensity of heavy industry it looks more viable. The unresolved issue is who bears the cost.

There’s a nod to the effects of land-use changes, in the section on protecting the natural environment. There are potentially large gains to be had here in projects to reforest uplands and restore degraded peatlands, but the scale of ambition is relatively small.

Finally, the tenth point concerns innovation, with the promise of a “£1 billion Net Zero Innovation Portfolio” as part of the government’s aspiration to raise the UK’s R&D intensity to 2.4% of GDP by 2027. The R&D is to support the goals in the 10 point plan, with a couple of more futuristic bets – on direct air capture, and on commercial fusion power through the Spherical Tokamak for Energy Production project.

I think R&D and innovation are enormously important in the move to net zero. We urgently need to develop zero-carbon technologies to make them cheaper and deployable at scale. My own somewhat gloomy view (see this post for more on this: The climate crisis now comes down to raw power) is that, taking a global view incorporating the entirely reasonable aspiration of the majority of the world’s population to enjoy the same high energy lifestyle that is to be found in the developed world, the only way we will effect a transition to a zero-carbon economy across the world is if the zero-carbon technologies are cheaper – without subsidies – than fossil fuel energy. If those cheap, zero-carbon technologies can be developed in the UK, that will make a bigger difference to global carbon budgets than any unilateral action that affects the UK alone.

But there is an important counter-view, expressed cogently by David Edgerton in a recent article: Cummings has left behind a No 10 deluded that Britain could be the next Silicon Valley. Edgerton describes a collective credulity in the government about Britain’s place in the world of innovation, which overstates the UK’s ability to develop these new technologies, and underestimates the degree to which the UK will be dependent on innovations developed elsewhere.

Edgerton is right, of course – the UK’s political and commentating classes have failed to take on board the degree to which the country has, since the 1980’s, run down its innovation capacity, particularly in industrial and applied R&D. In energy R&D, according to recent IEA figures, the UK spends about $1.335 billion a year – some 4.3% of the world total, eclipsed by the contributions of the USA, China, the EU and Japan.

Nonetheless, $1.3 billion is not nothing, and in my opinion this figure ought to increase substantially, both in absolute terms and as a fraction of rising public investment in R&D. But the UK will need to focus its efforts in those areas where it has unique advantages, while in other areas international collaboration may be a better way forward.

Where are those areas of unique advantage? One is probably offshore wind, where the UK’s Atlantic location gives it a lot of sea and a lot of wind. The UK currently accounts for about 1/3 of all offshore wind capacity, so it represents a major market. Unfortunately, the UK has allowed a situation to develop in which the prime providers of its offshore wind technology are overseas. The plan suggests more stringent targets for local content, and this does make sense; there is also a strong argument that UK industrial strategy should try to ensure that more of the value of the new technologies of deepwater floating wind is captured in the UK.

While offshore wind is being deployed at scale right now, fusion remains speculative and futuristic. The government’s strategy is to “double down on our ambition to be the first country in the world to commercialise fusion energy technology”. While I think the barriers to developing commercial fusion power – largely in materials science – remain huge, I do believe the UK should continue to fund it, for a number of reasons. Firstly, there is a possibility that it might actually work, in which case it would be transformative – it’s a long odds bet with a big potential payoff. But why should the UK be the country making the bet? My answer would be that, in this field, the UK is genuinely internationally competitive; it hosts the Joint European Torus, and the sponsoring organisation UKAEA retains a capacity, rare in the UK, for very complex engineering at scale. Even if fusion doesn’t deliver commercial power, the technological spillovers may well be substantial.

The situation in nuclear fission is different. The UK dramatically ran down its research capacity in civil nuclear power, and chose instead to develop a new nuclear build programme on the basis of entirely imported technology. This was initially the French EPR currently being built at Hinkley Point, with another type of pressurised water reactor, from Toshiba, to be built in Cumbria, and a third type, a boiling water reactor from Hitachi, in Anglesey. That hasn’t worked out so well, with only the EPRs now looking likely to be built. The current strategy envisages a reset, with a new programme of light water small modular reactors – that is to say, a technologically conservative PWR designed with an emphasis on driving its capital cost down – followed by work on a next generation fission reactor. These “advanced modular reactors” would be relatively small, high temperature reactors. The logic for the UK to be the country to develop this technology is that it is the only country to have run an extensive programme of gas cooled reactors, but it would still probably need collaboration with other like-minded countries.

How much emphasis should the UK put into developing electric vehicles, as opposed to simply creating the infrastructure for them and importing the technology? The automotive sector still remains an important source of added value for the UK, having made an impressive recovery from its doldrums in the 90’s and 00’s. Jaguar Land Rover, though owned by the Indian conglomerate Tata, is still essentially a UK based company, and it has an ambitious development programme for electric vehicles. But even with its R&D budget of £1.8 bn a year, it is a relative minnow by world standards (Volkswagen’s R&D budget is €13bn, and Toyota’s only a little less); for this reason it is developing a partnership with BMW. The government should support the UK industry’s drive to electrify, but care will be needed to identify where UK industry can find the most value in global supply chains.

A “green industrial strategy” is often sold on the basis of the new jobs it will create. It will indeed create more jobs, but this is not necessarily a good thing. If it takes more people, more capital, more money to produce the same level of energy services – houses being heated, iron being smelted, miles driven in cars and lorries – then that amounts to a loss of productivity across the economy as a whole. Of course this is justified by the huge costs that burning fossil fuels impose on the world as a whole through climate change, costs which are currently not properly accounted for. But we shouldn’t delude ourselves. We use fossil fuels because they are cheap, convenient, and easy to use, and we will miss them – unless we can develop new technologies that supply the same energy services at a lower cost, and that will take innovation. New low carbon energy technologies need to be developed, and existing technologies made cheaper and more effective.

To sum up, the ten point plan is a useful step forward. The contours of a zero-emissions future are starting to emerge, and it is very welcome that the government has overcome its aversion to industrial strategy. But more commitment and more realism are required.

The challenge of deep decarbonisation

This is roughly the talk I gave in the neighbouring village of Grindleford about a month ago, as part of a well-attended community event organised by Grindleford Climate Action.

Thanks so much for inviting me to talk to you today. It’s great to see such an impressive degree of community engagement with what is perhaps the defining issue we face today – climate change. What I want to talk about today is the big picture of what we need to do to tackle the climate change crisis.

The title of this event is “Without Hot Air” – I know this is inspired by the great book “Sustainable Energy without the Hot Air”, by the late David MacKay. David was a physicist at the University of Cambridge; he wrote this book – which is free to download – because of his frustration with the way the climate debate was being conducted. He became Chief Scientific Advisor to the Department of Energy and Climate Change in the last Labour government, but died, tragically young at 49, in 2016.

His book is about how to make the sums add up. “Everyone says getting off fossil fuels is important”, he says, “and we’re all encouraged to ‘make a difference’, but many of the things that allegedly make a difference don’t add up.“

It’s a book about being serious about climate change, putting into numbers the scale of the problem. As he says “if everyone does a little, we’ll achieve only a little.”

But to tackle climate change we’re going to need to do a lot. As individuals, we’re going to need to change the way we live. But we’re going to need to do a lot collectively too, in our communities, but also nationally – and internationally – through government action.

Net zero greenhouse gas emission by 2050?

The Government has enshrined a goal of achieving net zero greenhouse gas emissions by 2050 in legislation. This is a very good idea – it’s a better target than a notional limit on the global temperature rise, because it’s the level of greenhouse gas emissions that we have direct control over.

But there are a couple of problems.

We’ve emitted a lot of greenhouse gases already, and even if we – we being the whole world here – reach the 2050 target, we’ll have emitted a lot more. So the target doesn’t stop climate change, it just limits it – perhaps to 1.5 – 2° of warming or so.

Even worse, the government just isn’t being serious about doing what would need to be done to reach the target. The trouble is that 2050 sounds a long way off for politicians who think in terms of 5 year election cycles – or, indeed, at the moment, just getting through the next week or two. But it’s not long in terms of rebuilding our economy and society.

Just think how different the world is now from the world of 1990. In terms of the infrastructure of everyday life – the buildings, the railways, the roads – the answer is, not very. I’m not quite driving the same car, but the trains on the Hope Valley Line are the same ones – and they were obsolete then! Most importantly, our energy system is still dominated by hydrocarbons.

I think on current trajectory there is very little chance of achieving net zero greenhouse gas emissions by 2050 – so we’re heading for 3 or 4 degrees of warming, a truly alarming and dangerous prospect.

Carbon Capture and Storage: technically possible, but politically and economically a bad idea

It’s excellent news that the UK government has accepted the Climate Change Committee’s recommendation to legislate for a goal of achieving net zero greenhouse gas emissions by 2050. As always, though, it’s not enough to will the end without attending to the means. My earlier blogpost stressed how hard this goal is going to be to reach in practice. The Climate Change Committee does provide scenarios for achieving net zero, and the bad news is that the central 2050 scenario relies to a huge extent on carbon capture and storage. In other words, it assumes that we will still be burning fossil fuels, but that we will be mitigating the effect of this continued dependence by capturing the carbon dioxide released when the gas is burnt and storing it, into the indefinite future, underground. Some use of carbon capture and storage is probably inevitable, but in my view such large-scale reliance on it is, politically and economically, a bad idea.

In the central 2050 net zero scenario, 645 TWh of electricity is generated a year – more than double the 2017 value of 300 TWh, reflecting the electrification of sectors like transport. The basic strategy for deep decarbonisation has to be, as a first approximation, to electrify everything, while simultaneously decarbonising power generation: so far, so good.

But even with aggressive expansion of renewable electricity, this scenario still calls for 150 TWh a year to be generated from fossil fuels, in the form of gas power stations. To achieve zero carbon emissions from this fossil fuel powered electricity generation, the carbon dioxide released when the gas is burnt has to be captured at the power stations and pumped through a specially built infrastructure of pipes to disused gas fields in the North Sea, where it is injected underground for indefinite storage. This is certainly technically feasible – to produce 150 TWh of electricity from gas, around 176 million tonnes of carbon dioxide a year will be produced. For comparison, about 42 million tonnes of natural gas a year is currently taken out of the North Sea reservoirs, so reversing the process at four times the scale is undoubtedly doable.

In fact, more carbon capture and storage will be needed than the 176 million tonnes from the power sector, because the net zero plan relies on it in four distinct ways. In addition to allowing us to carry on burning gas to make electricity, the plan envisages capturing carbon dioxide from biomass-fired power stations too. This should lead to a net lowering of the amount of carbon dioxide in the atmosphere – a so-called “negative emissions technology”. The idea is that one offsets the remaining positive emissions from hard to decarbonise sectors like aviation against these “negative emissions”, to achieve overall net zero.

Meanwhile the plan envisages the large scale conversion of natural gas to hydrogen, to replace natural gas in industry and domestic heating. Each molecule of methane, reformed with steam, yields four molecules of hydrogen – which can be burnt in domestic boilers without carbon emissions – and one molecule of carbon dioxide, which needs to be captured at the hydrogen plant and pumped away to the North Sea reservoirs. Finally, some carbon dioxide producing industrial processes will remain – steel making and cement production – and carbon capture and storage will be needed to render these processes zero carbon. These latter uses are probably inevitable.
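For concreteness, the idealised overall reaction for steam methane reforming (including the subsequent water-gas shift step) is

$$\mathrm{CH_4} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CO_2} + 4\,\mathrm{H_2},$$

so roughly half of the hydrogen actually comes from the steam, and – comparing molar masses – every tonne of hydrogen made this way is accompanied by about 44/8 ≈ 5.5 tonnes of carbon dioxide to be captured and stored (more in practice, since extra gas is burnt to drive the process).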

But I want to focus on the principal envisaged use of carbon capture and storage – as a way of avoiding the need to move to entirely low carbon electricity, i.e. through renewables like wind and solar, and through nuclear power. We need to take a global perspective – if the UK achieves net zero greenhouse gas status by 2050, but the rest of the world carries on as normal, that helps no-one.

In my opinion, the only way we can be sure that the whole world will decarbonise is if low carbon energy – primarily wind, solar and nuclear – comes in at a lower cost than fossil fuels, without subsidies or other intervention. The cost of these technologies will surely come down: for this to happen, we need both to deploy them in their current form, and to do research and development to improve them. We need both the “learning by doing” that comes from implementation, and the cost reductions that will come from R&D, whether that’s making incremental process improvements to the technologies as they currently stand, or developing radically new and better versions of these technologies.
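One simple way to picture “learning by doing” is the empirical learning-curve relationship (Wright’s law), in which cost falls by a roughly constant fraction each time cumulative deployment doubles. A minimal sketch, with a 20% learning rate assumed purely for illustration:

```python
import math

def learned_cost(initial_cost, cumulative, cumulative_0, learning_rate=0.20):
    """Wright's law: cost falls by `learning_rate` for every doubling of cumulative deployment."""
    doublings = math.log2(cumulative / cumulative_0)
    return initial_cost * (1 - learning_rate) ** doublings

# At a 20% learning rate, three doublings of deployment roughly halve the cost.
print(round(learned_cost(100, 8, 1)))  # ~51
```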

But we will never achieve these technological improvements and corresponding cost reductions for carbon capture and storage.

It’s always tempting fate to say “never” for the potential for new technologies – but there’s one exception, and that’s when a putative new technology would need to break one of the laws of thermodynamics. No-one has ever come out ahead betting against these.

Carbon capture and storage will always require additional expenditure, over and above the cost of an unabated gas power station. It needs both:

  • up-front capital costs – the plant to separate the carbon dioxide in the first place, and the infrastructure to pipe the carbon dioxide long distances and pump it underground;
  • lowered conversion efficiency and higher running costs – i.e. more gas needs to be burnt to produce a given unit of electricity.

The latter is an inescapable consequence of the second law of thermodynamics – carbon capture always needs a separation step. Either one takes air and separates out the pure oxygen, so that burning the gas produces a waste stream of nothing but carbon dioxide and water; or one takes the exhaust from burning the gas in air and pulls the carbon dioxide out of it. Either way, a mixed gas has to be separated into its components – and that always takes an energy input, to pay for the decrease in entropy that separating a mixture entails.
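To put a rough number on that entropy argument – an idealised, back-of-envelope estimate, assuming post-combustion flue gas with a CO2 mole fraction of about 0.1, at around 298 K, treated as an ideal mixture – the minimum work needed to fully separate one mole of the mixture into its pure components is

$$W_{\mathrm{min}} = -RT \sum_i x_i \ln x_i,$$

which for these numbers comes to about 0.8 kJ per mole of flue gas, or roughly 8 kJ per mole of CO2 – of order 0.2 GJ (about 50 kWh) per tonne of CO2 captured. That is the thermodynamic floor; real capture plants, with their own inefficiencies, need several times more energy than this.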

The key point, then, is that no matter how much better our technology gets, power produced by a gas power station with carbon capture and storage will always be more expensive than power from unabated gas. The capital cost of the plant will be greater, and so will the revenue cost per kWh. No amount of technological progress can ever change this.

So there can only be a business case for carbon capture and storage through significant government interventions in the market, either through a subsidy, or through a carbon tax. Politically, this is an inherently unstable situation. Even after the capital cost of the carbon capture infrastructure has been written off, at any time the plant operator will be able to generate electricity more cheaply by releasing the carbon dioxide produced when the gas is burnt. Taking an international perspective, this leads to a massive free rider problem. Any country will be able to gain a competitive advantage at any time by turning the carbon capture off – there needs to be a fully enforced international agreement to impose carbon taxes at a high enough level to make the economics work. I’m not confident that such an agreement – which would have to cover every country making a significant contribution to carbon emissions to be effective – can be relied on to hold over many decades.

I do accept that some carbon capture and storage probably is essential, to capture emissions from cement and steel production. But carbon capture and storage from the power sector is a climate change solution for a world that does not exist any more – a world of multilateral agreements and transnational economic rationality. Any scenario that relies on it at this scale is just a politically very risky way of persuading ourselves that fossil-fuelled business as usual is sustainable, and of postponing the necessary large scale implementation, and improvement through R&D, of genuine low carbon energy technologies – renewables like wind and solar, and nuclear.