Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas-cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but it also needs an understanding of the historical context – in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? Projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running, producing about 60 TWh of non-intermittent, low carbon power a year; in 2019 its output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme as it appears nearly fifty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison, perhaps, would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike that the Ukraine invasion caused – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.
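For transparency, here’s the back-of-the-envelope arithmetic behind those two conversions, sketched in Python. The RPI multiplier and the GDP figures are my own rough assumptions (a 1977→2021 retail price multiplier of about 6.6, and nominal GDP of roughly £150bn in 1977 against £2,300bn in 2021), not figures from Henderson:

```python
# Two ways of updating Henderson's £2.1bn (1977) upper estimate.
# The deflator and GDP figures below are approximate assumptions of mine.
loss_1977 = 2.1  # £bn, Henderson's upper estimate of the net loss

rpi_multiplier = 6.6  # assumed 1977 -> 2021 retail price inflation
loss_2021_prices = loss_1977 * rpi_multiplier
print(f"In 2021 prices: ~£{loss_2021_prices:.0f}bn")  # a bit less than £14bn

gdp_1977, gdp_2021 = 150.0, 2300.0  # assumed nominal UK GDP, £bn
loss_gdp_share = loss_1977 / gdp_1977 * gdp_2021
print(f"As the same share of GDP: ~£{loss_gdp_share:.0f}bn")  # roughly the £30bn above
```

The GDP-share comparison is the fairer one for a programme spread over decades, since it measures the loss against the size of the economy that bore it.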

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, twofold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power changed dramatically. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, Ukraine, then in the Soviet Union. There was a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, leading to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to such accidents. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, the UK government, having given up on the AGR programme, decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three-year public inquiry and an eight-year building period, and at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, in which the UK had built light water reactors instead of AGRs, was based on an estimated cost for light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
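The arithmetic behind that ~£800m figure can be sketched as follows. The 1973→1987 inflation multiplier of 5 is my own rough assumption, chosen to illustrate the scale; Henderson’s own deflators may differ:

```python
# Sketch of Henderson's counterfactual cost for a Sizewell-B-sized LWR.
# The 1973 -> 1987 inflation multiplier is an illustrative assumption.
capacity_kw = 1.2e6       # Sizewell B: 1.2 GW of capacity
cost_per_kw_1973 = 132.0  # £/kW at 1973 prices, Henderson's estimate

cost_1973 = capacity_kw * cost_per_kw_1973 / 1e6  # £m at 1973 prices
inflation_1973_to_1987 = 5.0                      # assumed multiplier
cost_1987 = cost_1973 * inflation_1973_to_1987
print(f"~£{cost_1973:.0f}m (1973 prices) -> ~£{cost_1987:.0f}m (1987 prices)")
print(f"Outturn was ~£2000m: a factor of {2000 / cost_1987:.1f} over")
```

On these assumptions the counterfactual cost is about £160m at 1973 prices, or roughly £800m at 1987 prices – against an outturn of £2 billion, an overrun of a factor of about 2.5.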

Sizewell B was a first of a kind reactor, so one would expect subsequent reactors built to the same design to reduce in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp-up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade, natural gas’s share of electricity generation rose from 1.3% to nearly 30% by 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquified natural gas carried by tanker from huge fields in, for example, the Middle East. But for a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike, which began in 2021 and was intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas-cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

Deep decarbonisation is still a huge challenge

In 2019 I wrote a blogpost called The challenge of deep decarbonisation, stressing the scale of the economic and technological transition implied by a transition to net zero by 2050. I think the piece bears re-reading, but I wanted to update the numbers to see how much progress we have made in four years (the piece used the statistics for 2018; the most recent figures are for 2022). Of course, in the intervening four years we have had a pandemic and a global energy price spike.

The headline figure is that the fossil fuel share of our primary consumption has fallen, but not by much. In 2018, 79.8% of our energy came from oil, gas and coal. In 2022, this share was 77.8%.

There is good news – if we look solely at electrical power generation, output from hydro, wind and solar was up 32% between 2018 and 2022, from 75 TWh to 99 TWh. Now 30.5% of our electricity production comes from renewables (excluding biomass, which I will come to later).

The less good news is that electrical power generation from nuclear is down 27%, from 65 TWh to 48 TWh, and this now represents just 14.7% of our electricity production. The increase in wind and solar is a real achievement – but it is largely offset by the decline in nuclear power production. This is the entirely predictable result of the AGR fleet reaching the end of its life, and the slow-motion debacle of the new nuclear build programme.

The UK had 5.9 GW of nominal nuclear generation capacity in 2022. Of this, all but Sizewell B (1.2 GW) will close by 2030. In the early 2010s, 17 GW of new nuclear capacity was planned – with the potential to produce more than 140 TWh per year. But, of these ambitious plans, the only project currently proceeding is Hinkley Point, late and over budget. The best we can hope for is that in 2030 we’ll have Hinkley’s 3.2 GW, which together with Sizewell B’s continuing operation could produce at best 38 TWh a year.
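The conversion from nameplate capacity to annual output is worth making explicit. Here’s a sketch, assuming a 90% capacity factor (my assumption – at this factor the results come out slightly below the figures quoted above, which imply near-continuous running):

```python
# Converting nameplate capacity (GW) to annual output (TWh/year).
# The 90% capacity factor is an assumption; real fleets vary year to year.
HOURS_PER_YEAR = 8766  # average over leap and non-leap years

def annual_twh(capacity_gw, capacity_factor=0.9):
    """Annual electrical output in TWh for a given nameplate capacity in GW."""
    return capacity_gw * HOURS_PER_YEAR * capacity_factor / 1000

print(f"Hinkley C + Sizewell B (4.4 GW): ~{annual_twh(4.4):.0f} TWh/year")
print(f"The planned 17 GW fleet: ~{annual_twh(17):.0f} TWh/year")
```

At 90% this gives about 35 TWh/year for the 4.4 GW we can hope for in 2030, and about 134 TWh/year for the 17 GW once planned – which makes vivid how much of that ambition has evaporated.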

In 2022, another 36 TWh of electrical power – 11% – came from thermal renewables – largely burning imported wood chips. This supports a claim that more than half (56%) of our electricity is currently low carbon. It’s not clear, though, that imported biomass is truly sustainable or scalable.

It’s easy to focus on electrical power generation. But – and this can’t be stressed too much – most of the energy we use is in the form of directly burnt gas (to heat our homes) and oil (to propel our cars and lorries).

The total primary energy we used in 2022 was 2055 TWh; and of this 1600 TWh was oil, gas and coal. 280 TWh (mostly gas) was converted into electricity (to produce 133 TWh of electricity), and 60 TWh’s worth of fossil fuel (mostly oil) was diverted into non-energy uses – mostly feedstocks for the petrochemical industry – leaving 1260 TWh to be directly burnt.
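These figures can be checked for consistency – a useful habit with energy statistics. The balance, and the conversion efficiency it implies for fossil-fuelled generation, come out as:

```python
# The 2022 primary-energy balance described above (figures in TWh,
# rounded as in the text), as a consistency check.
total_primary = 2055
fossil = 1600
to_electricity = 280   # fossil fuel burnt in power stations (mostly gas)
electricity_out = 133  # electricity actually produced from that fuel
non_energy = 60        # petrochemical feedstocks and other non-energy uses

directly_burnt = fossil - to_electricity - non_energy
print(f"Directly burnt fossil fuel: {directly_burnt} TWh")  # 1260 TWh

# Thermal efficiency of fossil-fuelled generation implied by these figures:
print(f"Implied conversion efficiency: {electricity_out / to_electricity:.1%}")
```

The implied conversion efficiency of a bit under 50% is roughly what one expects for a generating fleet dominated by combined cycle gas turbines – which is reassuring, since most of that 280 TWh is indeed gas.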

To achieve our net-zero target, we need to stop burning gas and oil, and instead use electricity. This implies a considerable increase in the amount of electricity we generate – and this increase all needs to come from low-carbon sources. There is good news, though – electricity can be converted into useful work much more efficiently than burnt fuel, because it is not subject to the limits that the second law of thermodynamics imposes on heat engines. So the increase in electrical generation capacity can in principle be a lot less than this 1260 TWh per year.

Projecting energy demand into the future is uncertain. On the one hand, we can rely on continuing improvements in energy efficiency from incremental technological advances; on the other, new demands on electrical power are likely to emerge (the huge energy hunger of the data centres needed to implement artificial intelligence being one example). To illustrate the scale of the problem, let’s consider the orders of magnitude involved in converting the current major uses of directly burnt fossil fuels to electrical power.

In 2022, 554 TWh of oil were used, in the form of petrol and diesel, to propel our cars and lorries. We do use some electricity directly for transport – currently just 8.4 TWh. A little of this is for trains (and, of course, we should long ago have electrified all intercity and suburban lines), but the biggest growth is for battery electric vehicles. Internal combustion engines are heat engines, whose efficiency is limited by the Carnot bound, whereas electric motors can in principle convert almost all of the electrical energy input into useful work. Very roughly, to replace the energy demands of current cars and lorries with electric vehicles would need another 165 TWh/year of electrical power.
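That ~165 TWh figure follows from the efficiency gap between heat engines and electric drivetrains. Here’s a sketch, with illustrative efficiencies of my own choosing (roughly 25% fuel-to-wheels for internal combustion, 85% grid-to-wheels for battery electric vehicles):

```python
# Rough sketch of the electricity needed to replace petrol and diesel.
# The two efficiencies are illustrative assumptions, not figures from the post.
fuel_twh = 554          # 2022 oil use for cars and lorries, TWh
ice_efficiency = 0.25   # assumed fraction of fuel energy reaching the wheels
ev_efficiency = 0.85    # assumed grid-to-wheels efficiency for EVs

useful_work = fuel_twh * ice_efficiency          # what actually moves the vehicles
electricity_needed = useful_work / ev_efficiency
print(f"~{electricity_needed:.0f} TWh/year of electricity")  # ~163 TWh
```

On these assumptions the answer is about 163 TWh/year, close to the figure above; the exact number depends on the efficiencies one assumes, but the factor-of-three-or-so saving over the raw 554 TWh is robust.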

The other major application of directly burnt fossil fuels is for heating houses and offices. This used 334 TWh/year in 2022, mostly in the form of natural gas. It’s increasingly clear that the most effective way of decarbonising this sector is through the installation of heat pumps. A heat pump is essentially a refrigerator run backwards, cooling the outside air or ground, and heating up the interior. Here the second law of thermodynamics is on our side; one ends up with more heat out than energy put in, because rather than directly converting electricity into heat, one is using it to move heat from one place to another.

Using a reasonable guess for the attainable, seasonally adjusted “coefficient of performance” for heat pumps, one might be able to achieve the same heating effect as we currently get from gas boilers with another 100 TWh of low carbon electricity. This figure could be substantially reduced if we had a serious programme of insulating old houses and commercial buildings, and were serious about imposing modern energy efficiency standards for new ones.
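A sketch of that arithmetic, with my own illustrative assumptions for boiler efficiency and the seasonally adjusted coefficient of performance:

```python
# Rough sketch of the heat-pump figure. The boiler efficiency and seasonal
# coefficient of performance (SCOP) are illustrative assumptions of mine.
gas_for_heating_twh = 334   # 2022 fossil fuel burnt to heat buildings, TWh
boiler_efficiency = 0.9     # assumed: heat delivered per unit of gas burnt
heat_pump_scop = 3.0        # assumed seasonally adjusted COP

heat_delivered = gas_for_heating_twh * boiler_efficiency  # what homes actually need
electricity_needed = heat_delivered / heat_pump_scop
print(f"~{electricity_needed:.0f} TWh/year of electricity")  # ~100 TWh
```

With these numbers the 334 TWh of gas becomes about 100 TWh of electricity; a better insulated building stock would shrink the first number, and a higher real-world SCOP would shrink the ratio.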

So, as an order of magnitude, we probably need to roughly double our electricity generation, from its current value of 320 TWh/year to more than 600 TWh/year. This will take big increases in generation from wind and solar, currently running at around 100 TWh/year. In addition to intermittent renewables, we need a significant fraction of firm power, which can always be relied on, whatever the state of wind and sunshine. Nuclear would be my favoured source for this, so that would need a big increase from the 40 TWh/year we’ll have in place by 2030. The alternative would be to continue to generate electricity from gas, but to capture and store the carbon dioxide produced. For why I think this is less desirable for power generation (though possibly necessary for some industrial processes), see my earlier piece: Carbon Capture and Storage: technically possible, but politically and economically a bad idea.

Industrial uses of energy, which currently amount to 266 TWh, are a mix of gas, electricity and some oil. Some of these applications (e.g. making cement and fertiliser) are going to be rather hard to electrify, so, in addition to requiring carbon capture and storage, this may provide a demand for hydrogen, produced from renewable electricity, or conceivably process heat from high temperature nuclear reactors.

It’s also important to remember that a true reckoning of our national contribution to climate change would include taking account of the carbon dioxide produced in the goods and commodities we import, and our share of air travel. This is very significant, though hard to quantify – in my 2019 piece, I estimated that this could add as much as 60% to our personal carbon budget.

To conclude, we know what we have to do:

  • Electrify everything we can (heat pumps for houses, electric cars), and reduce demand where possible (especially by insulating houses and offices);
  • Use green hydrogen for energy intensive industry & hard to electrify sectors;
  • Hugely increase zero carbon electrical generation, through a mix of wind, solar and nuclear.

In each case, we’re going to need innovation, focused on reducing cost and increasing scale.

There’s a long way to go!

All figures are taken from the UK Government’s Digest of UK Energy Statistics, with some simplification and rounding.

2022 Books roundup

2022 was a thoroughly depressing year; here are some of the books I’ve read that have helped me (I hope) to put last year’s world events in some kind of context.

Helen Thompson could not have been luckier – or, perhaps, more farsighted – in the timing of her book’s release. Disorder: hard times in the 21st century is a survey of the continuing influence of fossil fuel energy on geopolitics, so couldn’t be more timely, given the impact of Russia’s invasion of Ukraine on natural gas and oil supplies to Western Europe and beyond. The importance of securing national energy supplies runs through the history of the world in the 20th century, in both peace and war; we continue to see examples of the deeply grubby political entanglements that the need for oil has drawn Western powers into. All this, by the way, provides a strong secondary argument, beyond climate change, for accelerating the transition to low carbon energy sources.

The presence of large reserves of oil in a country isn’t an unmixed blessing – we’re growing more familiar with the idea of a “resource curse”, blighting both the politics and long term economic prospects of countries whose economies depend on exploiting natural resources. Alexander Etkind’s book Nature’s Evil: a cultural history of natural resources is a deep history of how the materials we rely on shape political economies. It has a Eurasian perspective that is very timely, but less familiar to me, and takes the idea of a resource curse much further back in time, covering furs and peat as well as the more familiar story of oil.

With more attention starting to focus on the world’s other potential geopolitical flashpoint – the Taiwan Straits – Chris Miller’s Chip War: the fight for the world’s most critical technology – is a great explanation of why Taiwan, through the semiconductor company TSMC, came to be so central to the world’s economy. This book – which has rightly won glowing reviews – is a history of the ubiquitous chip – the silicon integrated circuits that make up the memory and microprocessor chips at the heart of computers, mobile phones – and, increasingly, all kinds of other durable goods, including cars. The focus of the book is on business history, but it doesn’t shy away from the crucial technical details – the manufacturing processes and the tools that enable them, notably the development of extreme UV lithography and the rise of the Dutch company ASML. Excellent though the book is, its business focus did make me reflect that (as far as I’m aware) there’s a huge gap in the market for a popular science book explaining how these remarkable technologies all work – and perhaps speculating on what might come next.

Slouching Towards Utopia: an economic history of the 20th century, by Brad DeLong, is an elegy for a period of unparalleled technological advance and economic growth that seems, in the last decade, to have come to an end. For DeLong, it was the development of the industrial R&D laboratory towards the end of the 19th century that launched a long century, from 1870 to 2010, of unparalleled growth in material prosperity. The focus is on political economy, rather than the material and technological basis of growth (for the latter, Vaclav Smil’s pair of books Creating the Twentieth Century and Transforming the Twentieth Century are essential). But there is a welcome focus on the material substrate of information and communication technology rather than the more visible world of software (in contrast, for example, to Robert Gordon’s book The Rise and Fall of American Growth, which I reviewed rather critically here).

Though I am very sympathetic to many of the arguments in the book, ultimately it left me somewhat disappointed. Having rightly stressed the importance of industrial R&D as the driver of technological change, the book does not really develop this theme, with little discussion of the changing institutional landscape of innovation around the world. I also wish the book had had a more rigorous editor – the prose lapses on occasion into self-indulgence, and the book would have been better had it been a third shorter.

In contrast, Vaclav Smil’s latest book – How the World Really Works: A Scientist’s Guide to Our Past, Present and Future – clearly had an excellent editor. It’s a very compelling summary of a couple of decades of Smil’s prolific output. It’s not a boast about my own learning to say that I knew pretty much everything in this book before I read it; simply a consequence of having read so many of Smil’s previous, more academic books. The core of Smil’s argument is to stress, through quantification, how much we depend on fossil fuels, for energy, for food (through the Haber-Bosch process), and for the basic materials that underlie our world – ammonia, plastics, concrete and steel. These chapters are great, forceful, data-heavy and succinct, though the chapter on risk is less convincing.

Despite the editor, Smil’s own voice comes through strongly, sceptical, occasionally curmudgeonly, laying out the facts, but prone to occasional outbreaks of scathing judgement (he really dislikes SUVs!). Perhaps he overdoes the pessimism about the speed with which new technology can be introduced, but his message about the scale and the wrenching impact of the transition we need to go through, to move away from our fossil fuel economy, is a vital one.

From self-stratifying films to levelling up: A random walk through polymer physics and science policy

After more than two and a half years at the University of Manchester, last week I finally got round to giving an in-person inaugural lecture, which is now available to watch on YouTube. The abstract follows:

How could you make a paint-on solar cell? How could you propel a nanobot? Should the public worry about the world being consumed by “grey goo”, as portrayed by the most futuristic visions of nanotechnology? Is the highly unbalanced regional economy of the UK connected to the very uneven distribution of government R&D funding?

In this lecture I will attempt to draw together some themes both from my career as an experimental polymer physicist, and from my attempts to influence national science and innovation policy. From polymer physics, I’ll discuss the way phase separation in thin polymer films is affected by the presence of surfaces and interfaces, and how in some circumstances this can result in films that “self-stratify” – spontaneously separating into two layers, a favourable morphology for an organic solar cell. I’ll recall the public controversies around nanotechnology in the 2000s. There were some interesting scientific misconceptions underlying these debates, and addressing these suggested some new scientific directions, such as the discovery of new mechanisms for self-propelling nano- and micro- scale particles in fluids. Finally, I will cover some issues around the economics of innovation and the UK’s current problems of stagnant productivity and regional inequality, reflecting on my experience as a scientist attempting to influence national political debates.

Lessons from the gas price spike

On April 1st this year, the average UK household will see its annual energy bill rise from £1,277 to around £2,000 a year, according to the Resolution Foundation. After 10 years of stagnant wages – itself a result of the ongoing slowdown in productivity growth – there’s a clamour for some kind of short-term fix for a potential political crisis, made worse by a forthcoming tax rise. Even more ominously, an unfolding geopolitical crisis over a conflict between Russia and Ukraine may interact with this energy crisis in a potentially far-reaching way, as we shall see.


UK gas and electricity spot prices (monthly rolling average of “day-ahead” prices). Data: OFGEM

My first plot shows the scale of the crisis. This shows the wholesale, spot prices of gas and electricity since 2010. I don’t want to dwell here on the dysfunctional features of the UK’s retail energy market that have led to the failure of a number of suppliers, or to look at the short-term issues that have exacerbated a current supply squeeze. Instead, it’s worth looking at the longer term implications for the UK’s energy security of this episode of market disruption, and to try to understand how we have been led to this state by global changes in energy markets and UK policy decisions over decades.

Natural gas matters existentially for the UK’s economy, because 40% of the UK’s demand for energy is met by gas, and without sufficient supplies of energy, a modern economy and society cannot function. The price of electricity is strongly coupled to the price of gas, because 34% of our electricity (in 2020) was generated in gas-fired power stations, compared to 15% from nuclear and 23% from wind. But generating electricity only accounts for 29% of our total demand for gas. The biggest fraction – 37% – is used for heating our houses, with another 12% directly burnt in industry, to make fertiliser, cement and in many other processes.

To understand why the wholesale price of gas matters so much, we need to understand a couple of ways in which the UK’s energy landscape has changed in the last twenty years. The first – the UK’s own balance between production and consumption – is shown in the next plot. Since 2004, the UK has gone from being self-sufficient in gas to being a substantial importer. Production of North Sea gas – like North Sea oil – peaked in the early 2000s, and has since rapidly dropped off, as the gas fields most easily and cheaply exploited have been exhausted.


Gas production and consumption in the UK. Data: Digest of UK Energy Statistics 2021, table 4.1.

The second consideration is the nature of the international gas market. A few decades ago, natural gas was a commodity that was used close to where it was produced – it could not be traded globally. But since then an infrastructure has been developed to transport natural gas over long distances; a network of intercontinental pipelines has been built, so gas produced, for example, in Arctic Siberia can be transported to markets in Western Europe. And the technology for shipping liquified natural gas in bulk has been developed, allowing gas from the huge fields in Qatar and Australia, and from the USA’s shale gas industry, to be taken to terminals across the world. This means that a worldwide gas market has developed, tending to equalise prices across the world. A liquified natural gas tanker can leave Qatar, the USA or Australia and choose to take its cargo to wherever the price it can fetch is highest.

The UK’s dependency on gas imports means that the prices UK households and industry have to pay for energy reflect supply and demand on a global scale. My next plot shows how global demand has changed over the last couple of decades. The UK’s demand has held steady – the UK’s “dash for gas” represented an early energy transition from extensive use of coal to natural gas. This was a positive change that has reduced the UK’s emissions of greenhouse gases. Now other countries are following in the UK’s footsteps – again, a positive development for overall world greenhouse gas emissions, but one putting huge upward pressure on gas supplies. The plot also stresses that the UK is a minor player in world gas markets; its consumption accounts for about 2% of world demand.


World gas consumption by continent, together with China and UK. Data: US Energy Information Administration

Where is this gas coming from? The largest net exporter, as shown in my next plot, is Russia. There’s an ominous echo of the 1970’s and its linked energy, economic and political crises, as dominant energy suppliers realise that withholding energy exports can be a powerful weapon in geopolitical conflicts. As it happens, the UK’s gas imports come primarily from Norway, by pipeline, and Qatar, through LNG imports by ship. But this doesn’t mean that the UK won’t be affected if Russia chooses to exert pressure on Europe by throttling back gas exports. There’s a global market – if Russia cuts off supplies to Germany and Central Europe, Germany will seek to replace that by buying gas from Norway and on the world LNG market, and the prices the UK has to pay will rocket.


Top gas net exporters (i.e. exports less imports). Data: US Energy Information Administration

What should the UK do about this energy crisis?

We can discount straight away the suggestion made by veteran Thatcherite and Eurosceptic MP, Sir John Redwood, that the UK should simply produce more gas of its own. The UK is a small-scale participant in a global market. Even doubling its gas production would make no impact on the global balance of supply and demand, so prices would be unaffected. It’s true that if the gas was produced by a government-owned organisation, the rent – the difference between the market price and cost of production – would be captured by the UK state rather than having to be handed over to the governments of major exporters like Qatar, Norway and Russia. But British Gas was privatised in 1986.

The reason the UK ran down its production was that governments in the 1980’s made a conscious decision that energy should be left to the market, and the market said that it was cheaper to import gas than to produce it from the North Sea (and even more so than to develop a fracking industry in Sussex and the rural Pennines). One can’t help getting the impression that UK politicians like John Redwood are in revolt against the consequences of the national economic settlement that they themselves created.

In fact, there is nothing fundamental the UK can do now apart from strengthen the social safety net for the poorest households, accepting the upward pressure on taxes that this implies. Less politically visible, but nonetheless important, is the pressure high gas costs will put on energy-using industries. The reality is that, as a net importer of energy, the UK inevitably suffers a real loss of national income when gas prices are high. Energy infrastructures take many years to build, so all we can do now is look back at the things the UK should have done a decade ago, and learn from those mistakes so that we are in a better position a decade on from now.

What the UK should have done is to reduce the demand for gas through an aggressive pursuit of energy efficiency measures, and to increase the diversity of its energy sources by accelerating the development of other forms of (low-carbon) electricity generation. It failed on both fronts.

In 2013, the Coalition government reduced spending on energy efficiency measures as part of a campaign to “cut the green crap”; the result was a precipitous drop in measures such as cavity wall insulation and loft insulation. In 2015, the zero-carbon homes standard was scrapped, with the result that new housing was built to lower standards of energy efficiency. Recall that 37% of the UK’s gas demand is for domestic heating, so the UK’s poor standards of home energy efficiency translate directly into increased demand – and, with the current high prices, higher bills for consumers. “Cutting the green crap” turned out to be a costly mistake.

It is true that the UK has brought on-stream a significant amount of offshore wind capacity. However, too much of this capacity has been offset by the decline of the UK’s existing nuclear fleet, now approaching the end of its life. The UK government has committed to a programme of nuclear new build, but this programme has stalled. In 2013, I wrote that the nuclear new build programme was “too expensive, too late”, and everything that has happened since has borne that diagnosis out.

There’s a more general lesson to learn from the current gas price spike. For some decades, the fundamental underpinning of the UK’s energy policy has been that the market should be left to find the cheapest way of delivering the energy the nation needs. In the last decade, the government has intervened extensively in that market to promote one policy objective or another. We’ve seen contracts for difference, capacity markets, renewable obligation certificates – the purity of a free market has long since been left behind. But there’s still an underlying assumption that someone will be running a spreadsheet to calculate a net present value for any new energy investment.

Cost discipline does matter, but it’s important to recognise that these calculations, for investments that will be generating income for multiple decades, rest on projections of market conditions running many years in the future. But what this current episode should tell us is that the future course of energy markets is beset by what the economists call “Knightian uncertainty”. On the reliability of predictions of future energy prices, the lesson of the past, reinforced by what’s happening to gas prices now, is that no-one knows anything.

Energy can’t be left to the market, because the future state of the market is unknowable – but the need for energy is an inescapable ingredient of a modern economy and society. For something that is so important, building resilience into the system may be more important than maximising some notional net present value whose calculation depends on guesses about the state of the world over decades. This is even more true when we factor in the externalities imposed by the effect of fossil fuels on climate change, whose cost and impact remains so uncertain. To be more positive, there are uncertainties on the upside – the reductions in cost that an aggressive programme of low carbon research, development and deployment-driven innovation could bring. Rather than relying entirely on market forces, we have to design a resilient zero carbon energy system and get on with building it out.

Fighting Climate Change with Food Science

The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat generates substantial greenhouse gas emissions, and a disproportionate share of these comes from eating the meat of ruminants like cattle and sheep.

According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the total. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20% by 2050, as part of the trajectory towards net zero greenhouse gas emissions by 2050, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, a complete global adoption of entirely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
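As a sanity check, the study’s figures can be run through some back-of-envelope arithmetic. The population and total-emissions numbers below are my own round-number assumptions, not taken from the study:

```python
# Rough cross-check of the food emissions figures quoted above.
# Assumed values (NOT from the study): US population and total US GHG emissions.
US_POPULATION = 330e6        # assumption: roughly 330 million people
US_TOTAL_GHG_MT = 6_500      # assumption: roughly 6.5 Gt CO2e per year

food_kg_per_person_day = 5   # figure quoted from the study
food_total_mt = food_kg_per_person_day * US_POPULATION * 365 / 1e9  # kg -> Mt
red_meat_mt = 0.47 * food_total_mt

print(f"Total food-system emissions: ~{food_total_mt:.0f} Mt CO2e/year")
print(f"Red meat share (47%): ~{red_meat_mt:.0f} Mt CO2e/year")
print(f"200 Mt saving as a share of total US emissions: {200 / US_TOTAL_GHG_MT:.1%}")
```

With these assumed values the 200 Mt saving comes out at about 3% of total US emissions, consistent with the figure quoted above.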

The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the currently relatively low-tech meat substitutes rather than the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat provides an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, whether it eats meat at all), and when families eat or don’t eat particular meats – all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

So how realistic is it to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – a telling, but not entirely successful, attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it was just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.
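To make the arithmetic of that nutritional argument concrete, here is a minimal sketch of a formulation calculation. The ingredient protein fractions are rough assumed values for the purpose of the sketch, not measured ones:

```python
# Illustrative formulation matching lean meat's bulk composition
# (75% water, 20% protein, 3% fat, as quoted above). The protein fractions
# of the two isolates are rough assumptions, not measured values.
WHEAT_PROTEIN_FRACTION = 0.80  # assumption: protein content of wheat protein isolate
PEA_PROTEIN_FRACTION = 0.80    # assumption: protein content of pea protein isolate

def formulation_per_100g(protein=20.0, fat=3.0, protein_split=0.5):
    """Grams of each ingredient per 100 g, splitting protein between two sources."""
    wheat_g = protein * protein_split / WHEAT_PROTEIN_FRACTION
    pea_g = protein * (1 - protein_split) / PEA_PROTEIN_FRACTION
    oil_g = fat
    water_g = 100 - wheat_g - pea_g - oil_g  # make up the balance with water
    return {"wheat protein": wheat_g, "pea protein": pea_g,
            "vegetable oil": oil_g, "water": water_g}

for ingredient, grams in formulation_per_100g().items():
    print(f"{ingredient}: {grams:.1f} g")
```

Of course, matching the bulk composition like this gets you nutrition, not the experience of eating meat – which is the point the next paragraph takes up.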

But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates in the flour and sugar that are used for barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics are developed in the animal reflecting the life it leads before slaughter, and are developed further after butchering, storage and cooking.

In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need to provide the right growth medium for the cells – one that supplies an energy source, other nutrients, and the growth factors that simulate the chemical communications between cells in whole organisms.

In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal-free product. Serum-free growth media have been developed, but they too remain costly; optimising them, scaling them up and reducing their cost are key barriers to be overcome to make “cultured meat” viable.

The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce minced meat for burgers than whole cuts.

I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is influenced strongly by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way than ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat than a vaguely meaty mush.

I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant derived materials – proteins, fats, gels with different viscoelastic properties to simulate connective tissue, and appropriate liquid matrices, devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

At the moment, we have start-ups like Impossible Burger and Beyond Meat attempting something like this, with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding and distribution and their deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering needed to make soft matter systems with complex heterogeneous structures at scale, often by non-equilibrium self-assembly processes.

Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling: the strong recent rise in veganism and vegetarianism creates a large and growing market. It does need public investment, though, because I don’t think intellectual property in this area will be very easy to defend, and for this reason large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

[1]. See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.

Measuring up the UK Government’s ten-point plan for a green industrial revolution

Last week saw a major series of announcements from the government about how they intend to set the UK on the path to net zero greenhouse gas emissions. The plans were trailed in an article (£) by the Prime Minister in the Financial Times, with a full document published the next day – The ten point plan for a green industrial revolution. “We will use Britain’s powers of invention to repair the pandemic’s damage and fight climate change”, the PM says, framing the intervention as an innovation-driven industrial strategy for post-covid recovery. The proposals are patchy, and insufficient by themselves – but we should still welcome them as beginning to recognise the scale of the challenge. There is a welcome understanding that decarbonising the power sector is not enough by itself: emissions from transport, industry and domestic heating are all recognised as important, and there is a nod to the potential for land-use changes to play a significant role. The new timescale for the phase-out of petrol and diesel cars is really significant, if it can be made to stick. So although I don’t think the measures yet go far enough or fast enough, one can start to see the outline of what a zero-emission economy might look like.

In outline, the emerging picture seems to be of a power sector dominated by offshore wind, with firm power provided either by nuclear or fossil fuels with carbon capture and storage. Large scale energy storage isn’t mentioned much, though possibly hydrogen could play a role there. Vehicles will predominantly be electrified, and hydrogen will have a role for hard to decarbonise industry, and possibly domestic heating. Some hope is attached to the prospect for more futuristic technologies, including fusion and direct air capture.

To move on to the ten points, we start with a reassertion of the Manifesto commitment to achieve 40 GW of offshore wind installed by 2030. How much is this? At a load factor of 40%, this would produce 140 TWh a year; for comparison, in 2019, we used a total 346 TWh of electricity. Even though this falls a long way short of what’s needed to decarbonise power, a build out of offshore wind on this scale will be demanding – it’s a more than four-fold increase on the 2019 capacity. We won’t be able to expand the capacity of offshore wind indefinitely using current technology – ultimately we will run out of suitable shallow water sites. For this reason, the announcement of a push for floating wind, with a 1 GW capacity target, is important.
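The conversion behind that estimate is simple enough to write down. A minimal Python check, using only the figures in the text plus the 8,760 hours in a year:

```python
# Convert installed capacity and load factor into annual electricity output.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_output_twh(capacity_gw, load_factor):
    """Annual output in TWh from installed capacity in GW."""
    return capacity_gw * load_factor * HOURS_PER_YEAR / 1000  # GWh -> TWh

offshore_wind_twh = annual_output_twh(40, 0.40)  # the 2030 offshore wind target
print(f"40 GW at a 40% load factor: {offshore_wind_twh:.0f} TWh/year")
print(f"Share of 2019 UK electricity use (346 TWh): {offshore_wind_twh / 346:.0%}")
```

The same conversion puts the hydrogen figures later in this post on a common footing: a continuous 3 GW corresponds to about 26 TWh over a year.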

On hydrogen, the government is clearly keen, with the PM saying “we will turn water into energy with up to £500m of investment in hydrogen”. Of course, even this government’s majority of 80 isn’t enough to repeal the laws of thermodynamics; hydrogen can only be an energy store or vector. As I’ve discussed in an earlier post (The role of hydrogen in reaching net zero), hydrogen could have an important role in a low carbon energy system, but one needs to be clear about how the hydrogen is made in a zero-carbon way, and how it is used, and this plan doesn’t yet provide that clarity.

The document suggests the first use will be in a natural gas blend for domestic heating, with a hint that it could be used in energy intensive industry clusters. The commitment is to create 5 GW of low carbon hydrogen production capacity by 2030. Is this a lot? Current hydrogen production amounts to 3 GW (27 TWh/year), used in industry and (especially) for making fertiliser, though none of this is low carbon hydrogen – it is made from natural gas by steam methane reforming. So this commitment could amount to building another steam methane reforming plant and capturing the carbon dioxide – this might be helpful for decarbonising industry, on Deeside or Teesside perhaps. To give a sense of scale, total natural gas consumption in industry and homes (not counting electricity generation) equates to 58 GW (512 TWh/year), so this is no more than a pilot. In the longer term, making hydrogen by electrolysis and/or process heat from high temperature fission is more likely to be the scalable and cost-effective solution, and it is good that Sheffield’s excellent ITM Power gets a namecheck.

On nuclear power, the paper does lay out a strategy, but is light on the details of how this will be executed. For more detail on what I think has gone wrong with the UK’s nuclear strategy, and what I think should be done, see my earlier blogpost: Rebooting the UK’s nuclear new build programme. The plan here seems to be for one last heave on the UK’s troubled programme of large scale nuclear new build, followed up by a possible programme implementing a light water small modular reactor, with research on a new generation of small, high temperature, fourth generation reactors – advanced modular reactors (AMRs). There is a timeline – large-scale deployment of small modular reactors in the 2030’s, together with a demonstrator AMR around the same timescale. I think this would be realistic if there was a wholehearted push to make it happen, but all that is promised here is a research programme, at the level of £215 m for SMRs and £170m for AMRs, together with some money for developing the regulatory and supply chain aspects. This keeps the programme alive, but hardly supercharges it. The government must come up with the financial commitments needed to start building.

The most far-reaching announcement here is in the transport section – a ban on sales of new diesel and petrol cars after 2030, with hybrids permitted until 2035, after which only fully battery electric vehicles will be on sale. This is a big deal – a major effort will be required to create the charging infrastructure (£1.3 bn is ear-marked for this), and there will need to be potentially unpopular decisions on tax or road charging to replace the revenue from fuel tax. For heavy goods vehicles the suggestion is that we’ll have hydrogen vehicles, but all that is promised is R&D.

For public transport the solutions are fairly obvious – zero-emission buses, bikes and trains – but there is a frustrating lack of targets here. Sometimes old technologies are the best – there should be a commitment to electrify all inter-city and suburban lines as fast as feasible, rather than the rather vague statement that “we will further electrify regional and other rail routes”.

In transport, though, it’s aviation that is the most intractable problem. Three intercontinental trips a year can double an individual’s carbon footprint, but it is very difficult to see how one can do without the energy density of aviation fuel for long-distance flight. The solutions offered look pretty unconvincing to me – “we are investing £15 million into FlyZero – a 12-month study, delivered through the Aerospace Technology Institute (ATI), into the strategic, technical and commercial issues in designing and developing zero-emission aircraft that could enter service in 2030.” Maybe it will be possible to develop an electric aircraft for short-haul flights, but it seems to me that the only way of making long-distance flying zero-carbon is by making synthetic fuels from zero-carbon hydrogen and carbon dioxide from direct air capture.

It’s good to see the attention on the need for greener buildings, but here the government is hampered by indecision – will the future of domestic heating be hydrogen boilers or electric powered heat pumps? The strategy seems to be to back both horses. But arguably, even more important than the way buildings are heated is to make sure they are as energy-efficient as possible in the first place, and here the government needs to get a grip on the mess that is our current building regulation regime. As the Climate Change Committee says, “making a new home genuinely zero-carbon at the outset is around five times cheaper than retrofitting it later” – the housing people will be living in in 2050 is being built today, so there is no excuse for not ensuring the new houses we need now – not least in the neglected social housing sector – are built to the highest energy efficiency standards.

Carbon capture, usage and storage is the 8th of our 10 points, and there is a commendable willingness to accelerate this long-stalled programme. The goal here is “to capture 10Mt of carbon dioxide a year by 2030”, but without a great deal of clarity about what this is for. The suggestion that the clusters will be in the North East, the Humber, North West, and in Scotland and Wales suggests a goal of decarbonising energy intensive sectors, which in my view is the best use of this problematic technology (see my blogpost: Carbon Capture and Storage: technically possible, but politically and economically a bad idea). What’s the scale proposed here – is 10 Mt of carbon dioxide a year a lot or a little? Compared to the UK’s total CO2 emissions – 350 Mt in 2019 – it isn’t much, but on the other hand it is roughly in line with the total emissions of the UK’s iron and steel industry, so as an intervention to reduce the carbon intensity of heavy industry it looks more viable. The unresolved issue is who bears the cost.
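The comparison in that last paragraph is a one-line calculation, using only the two figures quoted in the text:

```python
# Put the 10 Mt/year capture target in the context of total UK CO2 emissions.
ccs_target_mt = 10       # capture goal for 2030, from the plan
uk_co2_2019_mt = 350     # total UK CO2 emissions in 2019, as quoted above

share = ccs_target_mt / uk_co2_2019_mt
print(f"10 Mt/year is {share:.1%} of the UK's 2019 CO2 emissions")
```

At under 3% of national emissions, the target only makes sense as a sector-specific intervention, which is the reading suggested above.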

There’s a nod to the effects of land-use changes, in the section on protecting the natural environment. There are potentially large gains to be had here in projects to reforest uplands and restore degraded peatlands, but the scale of ambition is relatively small.

Finally, the tenth point concerns innovation, with the promise of a “£1 billion Net Zero Innovation Portfolio” as part of the government’s aspiration to raise the UK’s R&D intensity to 2.4% of GDP by 2027. The R&D is to support the goals in the 10 point plan, with a couple of more futuristic bets – on direct air capture, and on commercial fusion power through the Spherical Tokamak for Energy Production project.

I think R&D and innovation are enormously important in the move to net zero. We urgently need to develop zero-carbon technologies to make them cheaper and deployable at scale. My own somewhat gloomy view (see this post for more on this: The climate crisis now comes down to raw power) is that, taking a global view incorporating the entirely reasonable aspiration of the majority of the world’s population to enjoy the same high energy lifestyle that is to be found in the developed world, the only way we will effect a transition to a zero-carbon economy across the world is if the zero-carbon technologies are cheaper – without subsidies – than fossil fuel energy. If those cheap, zero-carbon technologies can be developed in the UK, that will make a bigger difference to global carbon budgets than any unilateral action that affects the UK alone.

But there is an important counter-view, expressed cogently by David Edgerton in a recent article: Cummings has left behind a No 10 deluded that Britain could be the next Silicon Valley. Edgerton describes a collective credulity in the government about Britain’s place in the world of innovation, which overstates the UK’s ability to develop these new technologies, and underestimates the degree to which the UK will be dependent on innovations developed elsewhere.

Edgerton is right, of course – the UK’s political and commentating classes have failed to take on board the degree to which the country has, since the 1980’s, run down its innovation capacity, particularly in industrial and applied R&D. In energy R&D, according to recent IEA figures, the UK spends about $1.335 billion a year – some 4.3% of the world total, eclipsed by the contributions of the USA, China, the EU and Japan.
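Those two figures imply a rough size for the world total, which is worth making explicit:

```python
# Implied world energy R&D spend, from the IEA figures quoted above.
uk_energy_rd_bn = 1.335  # UK spend, US$ billion per year
uk_share = 0.043         # UK share of the world total

world_total_bn = uk_energy_rd_bn / uk_share
print(f"Implied world energy R&D spend: ~${world_total_bn:.0f} billion/year")
```

So the UK’s spend sits within a world total of roughly $31 billion a year – the scale against which the £1 billion Net Zero Innovation Portfolio should be judged.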

Nonetheless, $1.3 billion is not nothing, and in my opinion this figure ought to increase substantially both in absolute terms, and as a fraction of rising public investment in R&D. But the UK will need to focus its efforts in those areas where it has unique advantages; while in other areas international collaboration may be a better way forward.

Where are those areas of unique advantage? One such probably is offshore wind, where the UK’s Atlantic location gives it a lot of sea and a lot of wind. The UK currently accounts for about 1/3 of all offshore wind capacity, so it represents a major market. Unfortunately, the UK has allowed a situation to develop where the prime providers of its offshore wind technology are overseas. The plan suggests more stringent targets for local content, and this does make sense, while there is a strong argument that UK industrial strategy should try to ensure that more of the value of the new technologies of deepwater floating wind is captured in the UK.

While offshore wind is being deployed at scale right now, fusion remains speculative and futuristic. The government’s strategy is to “double down on our ambition to be the first country in the world to commercialise fusion energy technology”. While I think the barriers to developing commercial fusion power – largely in materials science – remain huge, I do believe the UK should continue to fund it, for a number of reasons. Firstly, there is a possibility that it might actually work, in which case it would be transformative – it’s a long-odds bet with a big potential payoff. But why should the UK be the country making the bet? My answer would be that, in this field, the UK is genuinely internationally competitive: it hosts the Joint European Torus, and the sponsoring organisation UKAEA retains – rare in the UK – the capacity for very complex engineering at scale. Even if fusion doesn’t deliver commercial power, the technological spillovers may well be substantial.

The situation in nuclear fission is different. The UK dramatically ran down its research capacity in civil nuclear power, and chose instead to develop a new nuclear build programme on the basis of entirely imported technology. This was initially the French EPR currently being built at Hinkley Point, with another type of pressurised water reactor, from Toshiba, to be built in Cumbria, and a third type of reactor, a boiling water reactor from Hitachi, in Anglesey. That hasn’t worked out so well, with only the EPRs now looking likely to be built. The current strategy envisages a reset, with a new programme of light water small modular reactors – that is to say, a technologically conservative PWR designed with an emphasis on driving its capital cost down – followed by work on a next generation fission reactor. These “advanced modular reactors” would be relatively small high temperature reactors. The logic for the UK to be the country to develop this technology is that it is the only country that has run an extensive programme of gas cooled reactors, but it still probably needs collaboration with other like-minded countries.

How much emphasis should the UK put into developing electric vehicles, as opposed to simply creating the infrastructure for them and importing the technology? The automotive sector still remains an important source of added value for the UK, having made an impressive recovery from its doldrums in the 90’s and 00’s. Jaguar Land Rover, though owned by the Indian conglomerate Tata, is still essentially a UK based company, and it has an ambitious development programme for electric vehicles. But even with its R&D budget of £1.8 bn a year, it is a relative minnow by world standards (Volkswagen’s R&D budget is €13bn, and Toyota’s only a little less); for this reason it is developing a partnership with BMW. The government should support the UK industry’s drive to electrify, but care will be needed to identify where UK industry can find the most value in global supply chains.

A “green industrial strategy” is often sold on the basis of the new jobs it will create. It will indeed create more jobs, but this is not necessarily a good thing. If it takes more people, more capital, more money to produce the same level of energy services – houses being heated, iron being smelted, miles driven in cars and lorries – then that amounts to a loss of productivity across the economy as a whole. Of course this is justified by the huge costs that burning fossil fuels impose on the world as a whole through climate change, costs which are currently not properly accounted for. But we shouldn’t delude ourselves. We use fossil fuels because they are cheap, convenient, and easy to use, and we will miss them – unless we can develop new technologies that supply the same energy services at a lower cost, and that will take innovation. New low carbon energy technologies need to be developed, and existing technologies made cheaper and more effective.

To sum up, the ten point plan is a useful step forward. The contours of a zero-emissions future are starting to emerge, and it is very welcome that the government has overcome its aversion to industrial strategy. But more commitment and more realism are required.

The challenge of deep decarbonisation

This is roughly the talk I gave in the neighbouring village of Grindleford about a month ago, as part of a well-attended community event organised by Grindleford Climate Action.

Thanks so much for inviting me to talk to you today. It’s great to see such an impressive degree of community engagement with what is perhaps the defining issue we face today – climate change. What I want to talk about today is the big picture of what we need to do to tackle the climate change crisis.

The title of this event is “Without Hot Air” – I know this is inspired by the great book “Sustainable Energy without the Hot Air”, by the late David MacKay. David was a physicist at the University of Cambridge; he wrote this book – which is free to download – because of his frustration with the way the climate debate was being conducted. He became Chief Scientific Advisor to the Department of Energy and Climate Change in the last Labour government, but died, tragically young at 49, in 2016.

His book is about how to make the sums add up. “Everyone says getting off fossil fuels is important”, he says, “and we’re all encouraged to ‘make a difference’, but many of the things that allegedly make a difference don’t add up.”

It’s a book about being serious about climate change, putting into numbers the scale of the problem. As he says “if everyone does a little, we’ll achieve only a little.”

But to tackle climate change we’re going to need to do a lot. As individuals, we’re going to need to change the way we live. But we’re going to need to do a lot collectively too, in our communities, but also nationally – and internationally – through government action.

Net zero greenhouse gas emission by 2050?

The Government has enshrined a goal of achieving net zero greenhouse gas emissions by 2050 in legislation. This is a very good idea – it’s a better target than a notional limit on the global temperature rise, because it’s the level of greenhouse gas emissions that we have direct control over.

But there are a couple of problems.

We’ve emitted a lot of greenhouse gases already, and even if we – we being the whole world here – reach the 2050 target, we’ll have emitted a lot more. So the target doesn’t stop climate change, it just limits it – perhaps to 1.5–2°C of warming or so.

Even worse, the government just isn’t being serious about doing what would need to be done to reach the target. The trouble is that 2050 sounds a long way off for politicians who think in terms of 5 year election cycles – or, indeed, at the moment, just getting through the next week or two. But it’s not long in terms of rebuilding our economy and society.

Just think how different the world is now from the world of 1990. In terms of the infrastructure of everyday life – the buildings, the railways, the roads – the answer is: not very. I’m not quite driving the same car, but the trains on the Hope Valley Line are the same ones – and they were obsolete then! Most importantly, our energy system is still dominated by hydrocarbons.

I think on current trajectory there is very little chance of achieving net zero greenhouse gas emissions by 2050 – so we’re heading for 3 or 4 degrees of warming, a truly alarming and dangerous prospect.

Carbon Capture and Storage: technically possible, but politically and economically a bad idea

It’s excellent news that the UK government has accepted the Climate Change Committee’s recommendation to legislate for a goal of achieving net zero greenhouse gas emissions by 2050. As always, though, it’s not enough to will the end without attending to the means. My earlier blogpost stressed how hard this goal is going to be to reach in practice. The Climate Change Committee does provide scenarios for achieving net zero, and the bad news is that the central 2050 scenario relies to a huge extent on carbon capture and storage. In other words, it assumes that we will still be burning fossil fuels, but we will be mitigating the effect of this continued dependence on fossil fuels by capturing the carbon dioxide released when gas is burnt and storing it, into the indefinite future, underground. Some use of carbon capture and storage is probably inevitable, but in my view such large-scale reliance on it is, politically and economically, a bad idea.

In the central 2050 net zero scenario, 645 TWh of electricity is generated per year – more than double the 2017 value of 300 TWh, reflecting the electrification of sectors like transport. The basic strategy for deep decarbonisation has to be, as a first approximation, to electrify everything, while simultaneously decarbonising power generation: so far, so good.

But even with aggressive expansion of renewable electricity, this scenario still calls for 150 TWh to be generated from fossil fuels, in the form of gas power stations. To achieve zero carbon emissions from this fossil fuel powered electricity generation, the carbon dioxide released when the gas is burnt has to be captured at the power stations and pumped through a specially built infrastructure of pipes to disused gas fields in the North Sea, where it is injected underground for indefinite storage. This is certainly technically feasible – to produce 150 TWh of electricity from gas, around 176 million tonnes of carbon dioxide a year will be produced. For comparison, currently about 42 million tonnes of natural gas a year is taken out of the North Sea reservoirs, so reversing the process at four times the scale is undoubtedly doable.
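The “four times the scale” comparison is just the ratio of the two mass flows quoted above:

```python
# Figures as quoted in the text: mass injected vs mass currently extracted.
co2_to_inject_mt = 176.0   # Mt CO2 per year captured from gas-fired generation
gas_extracted_mt = 42.0    # Mt natural gas per year taken out of the North Sea

scale = co2_to_inject_mt / gas_extracted_mt
print(f"Injection mass flow ≈ {scale:.1f}× current extraction")
```

The ratio comes out at about 4.2, i.e. roughly the factor of four in the text.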

In fact, more carbon capture and storage will be needed than the 176 million tonnes from the power sector, because the zero net greenhouse gas plan relies on it in four distinct ways. In addition to allowing us to carry on burning gas to make electricity, the plan envisages capturing carbon dioxide from biomass-fired power stations too. This should lead to a net lowering of the amount of carbon dioxide in the atmosphere, amounting to a so-called “negative emissions technology”. The idea is that these “negative emissions” offset the remaining positive emissions from hard-to-decarbonise sectors like aviation, to achieve overall net zero emissions.

Meanwhile the plan envisages the large scale conversion of natural gas to hydrogen, to replace natural gas in industry and domestic heating. Each molecule of methane yields two molecules of hydrogen – which can be burnt in domestic boilers without carbon emissions – and one molecule of carbon dioxide, which needs to be captured at the hydrogen plant and pumped away to the North Sea reservoirs. Finally, some carbon dioxide producing industrial processes will remain – steel making and cement production – and carbon capture and storage will be needed to render these processes zero carbon. These latter uses are probably inevitable.

But I want to focus on the principal envisaged use of carbon capture and storage – as a way of avoiding the need to move to entirely low carbon electricity, i.e. through renewables like wind and solar, and through nuclear power. We need to take a global perspective – if the UK achieves net zero greenhouse gas status by 2050, but the rest of the world carries on as normal, that helps no-one.

In my opinion, the only way we can be sure that the whole world will decarbonise is if low carbon energy – primarily wind, solar and nuclear – comes in at a lower cost than fossil fuels, without subsidies or other intervention. The cost of these technologies will surely come down: for this to happen, we need both to deploy them in their current form, and to do research and development to improve them. We need both the “learning by doing” that comes from implementation, and the cost reductions that will come from R&D, whether that’s making incremental process improvements to the technologies as they currently stand, or developing radically new and better versions of these technologies.
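The “learning by doing” effect is often summarised by Wright’s law, where costs fall by a fixed fraction for every doubling of cumulative production. A minimal sketch, with purely illustrative numbers (the function name and figures are mine, not from any cited source):

```python
import math

def wright_cost(c0, cumulative, x0, learning_rate):
    """Cost after cumulative production grows from x0 to `cumulative`,
    under Wright's law with the given learning rate (the fractional
    cost drop per doubling of cumulative production)."""
    b = -math.log2(1.0 - learning_rate)   # experience exponent
    return c0 * (cumulative / x0) ** (-b)

# Illustrative only: a technology starting at $100/MWh with a 20% learning
# rate, after cumulative deployment grows 16-fold (four doublings):
print(f"{wright_cost(100.0, 16.0, 1.0, 0.20):.2f} $/MWh")  # four doublings of 0.8 → 40.96
```

Learning rates of roughly this order are commonly quoted for solar and wind, which is why deployment itself, not just R&D, drives costs down.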

But we will never achieve these technological improvements and corresponding cost reductions for carbon capture and storage.

It’s always tempting fate to say “never” for the potential for new technologies – but there’s one exception, and that’s when a putative new technology would need to break one of the laws of thermodynamics. No-one has ever come out ahead betting against these.

Carbon capture and storage will always need additional expenditure over and above the cost of an unabated gas power station. It needs both:

  • up-front capital costs, for the plant to separate the carbon dioxide in the first place and for the infrastructure to pipe it long distances and pump it underground;
  • lowered conversion efficiencies and higher running costs – i.e. more gas needs to be burnt to produce a given unit of electricity.

The latter is an inescapable consequence of the second law of thermodynamics – carbon capture will always need a separation step. Either one takes air and separates out the pure oxygen, so that burning the gas produces a waste stream consisting only of carbon dioxide and water; or one takes the exhaust from burning the gas in air, and pulls the carbon dioxide out of that waste stream. Either way, a mixed gas must be separated into its components – and that always requires an energy input, to pay for the decrease in entropy that separating a mixture entails.
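The size of that unavoidable energy input can be bounded using the ideal entropy of mixing. A rough sketch – the temperature and flue-gas CO2 fraction here are my assumptions, not figures from the text:

```python
import math

R = 8.314     # J/(mol·K), gas constant
T = 313.0     # K: flue gas cooled to ~40 °C before capture (assumed)
x_co2 = 0.09  # mole fraction of CO2 in gas-plant flue gas (assumed, typically ~4-10%)

# Ideal minimum work to extract one mole of pure CO2 from a large
# reservoir of flue gas at mole fraction x: W_min ≈ -R*T*ln(x)
# (the dilute-limit entropy-of-mixing result).
w_joule_per_mol = -R * T * math.log(x_co2)
w_kwh_per_tonne = w_joule_per_mol / 0.044 * 1000.0 / 3.6e6  # 0.044 kg/mol CO2

print(f"Thermodynamic minimum ≈ {w_kwh_per_tonne:.0f} kWh per tonne of CO2")
```

This gives a floor of roughly 40 kWh per tonne of CO2; real capture plants need several times that, but even perfect technology cannot go below it – which is the sense in which the extra cost can never be engineered away.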

The key point, then, is that no matter how much better our technology gets, power produced by a gas power station with carbon capture and storage will always be more expensive than power from unabated gas. The capital cost of the plant will be greater, and so will the revenue cost per kWh. No amount of technological progress can ever change this.

So there can only be a business case for carbon capture and storage through significant government interventions in the market, either through a subsidy, or through a carbon tax. Politically, this is an inherently unstable situation. Even after the capital cost of the carbon capture infrastructure has been written off, at any time the plant operator will be able to generate electricity more cheaply by releasing the carbon dioxide produced when the gas is burnt. Taking an international perspective, this leads to a massive free rider problem. Any country will be able to gain a competitive advantage at any time by turning the carbon capture off – there needs to be a fully enforced international agreement to impose carbon taxes at a high enough level to make the economics work. I’m not confident that such an agreement – which would have to cover every country making a significant contribution to carbon emissions to be effective – can be relied on to hold on the scale of many decades.

I do accept that some carbon capture and storage probably is essential, to capture emissions from cement and steel production. But carbon capture and storage from the power sector is a climate change solution for a world that does not exist any more – a world of multilateral agreements and transnational economic rationality. Any scenario that relies on carbon capture and storage is just a politically very risky way of persuading ourselves that fossil-fuelled business as usual is sustainable, and of postponing the necessary large scale implementation, and improvement through R&D, of genuine low carbon energy technologies – renewables like wind and solar, and nuclear.

The climate crisis now comes down to raw power

Fifteen years ago it was possible to be optimistic about the world’s capacity to avert the worst effects of climate change. The transition to low carbon energy was clearly going to be challenging, and it probably wasn’t going to be fast enough. But it did seem to be going with the grain of the evolution of the world’s energy economy, in this sense: oil prices seemed to be on an upward trajectory, squeezed between the increasingly constrained supplies predicted by “peak oil” theories, and the seemingly endless demand driven by fast developing countries like China and India. If fossil fuels were on a one-way trajectory of rising prices and tightening availability, then renewable energy would inevitably take their place – subsidies might bring forward their deployment, but the ultimate destination of a decarbonised energy system was assured.

The picture looks very different today. Oil prices collapsed in the wake of the global financial crisis, and after a short recovery have now fallen below the most pessimistic predictions of a decade ago. This is illustrated in my plot of the long-term evolution of real oil prices. As Vaclav Smil has frequently stressed, long range forecasting of energy trends is a mug’s game; the plot also shows successive decadal predictions of oil prices by the USA’s Energy Information Administration.


Successive predictions for future oil prices made by the USA’s EIA in 2000 and 2010, compared to the actual outcome up to 2016.

What underlies this fall in oil prices? On the demand side, this partly reflects slower global economic growth than expected. But the biggest factor has been a shock on the supply side – the technological revolution behind fracking and the large-scale exploitation of tight oil, which has pushed the USA ahead of Saudi Arabia as the world’s largest producer of oil. The natural gas supply situation has been transformed, too, through a combination of fracking in the USA and the development of a long-distance market in LNG from giant reservoirs in places like Qatar and Iran. Since 1997, world gas consumption has increased by 25% – but the size of proven reserves has increased by 50%. The uncomfortable truth is that we live in a world awash with cheap hydrocarbons. As things now stand, economics will not drive a transition to low carbon energy.

A transition to low carbon energy will, as things currently stand, cost money. Economist Jean Pisani-Ferry puts this very clearly in a recent article: “let’s be clear: the green transition will not be a free lunch … we’ll be putting a price on something that previously we’ve enjoyed for free”. Of course, this reflects failings of the market economy that economics already understands. If we heat our houses by burning cheap natural gas, rather than installing an expensive ground-source heat pump and running it off electricity from offshore wind, we get the benefit of saving money, but we impose the costs of the climate change we contribute to on someone else entirely (perhaps the Bangladesh delta dweller whose village gets flooded). And if we are moved to pay out for the low-carbon option, the sense of satisfaction our ethical superiority over our gas-guzzling neighbours gives us might be tempered by resentment of their greater disposable income.

The problems of uncosted externalities and free riders are well known to economists. But just because the problems are understood, it doesn’t mean they have easy solutions. The economist’s favoured remedy is a carbon tax, which puts a price on the previously uncosted deleterious effects of carbon emissions on the climate, but leaves the question of how best to mitigate the emissions to the market. It’s an elegant and attractive solution, but it suffers from two big problems.

The first is that, while it’s easy to state that emitting carbon imposes costs on the rest of the world, it’s very difficult to quantify what those costs are. The effects of climate change are uncertain, and are spread far into the future. We can run a model which will give us a best estimate of what those costs might be, but how much weight should we give to tail risk – the possibility that climate change leads to less likely, but truly catastrophic outcomes? What discount rate – if any – should we use, to account for the fact that we value things now more than things in the future?
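The sensitivity to the discount rate can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only for illustration):

```python
# Present value today of a fixed climate damage incurred 80 years from now,
# under different annual discount rates. Purely illustrative figures.
damage = 1.0e12   # $1 trillion of damage, 80 years out (hypothetical)
years = 80

for rate in (0.001, 0.014, 0.03):   # near-zero (Stern-like), moderate, market-like
    pv = damage / (1.0 + rate) ** years
    print(f"discount rate {rate:.1%}: present value ≈ ${pv / 1e9:.0f} bn")
```

The same damage is worth nearly its face value under a near-zero discount rate, but less than a tenth of it at market-like rates – which is why the choice of rate dominates estimates of the social cost of carbon.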

The second is that we don’t have a world authority that can impose a single tax uniformly. Carbon emissions are a global problem, but taxes will be imposed by individual nations, and given the huge and inescapable uncertainty about what the fair level of a carbon tax would be, it’s inevitable that countries will impose carbon taxes at the low end of the range, so as not to disadvantage their own industries and their own consumers. This will lead to big distortions of global trade, as countries attempt to combat “carbon leakage”, where goods made in countries with lower carbon taxes undercut goods which more fairly price the carbon emitted in their production.

The biggest problems, though, will be political. We’ve already seen, in the “gilets jaunes” protests in France, how raising fuel prices can be a spark for disruptive political protest. Populist, authoritarian movements like those led by Trump in the USA are associated with enthusiasm for fossil fuels like coal and oil, and with a downplaying or denial of the reality of the link between climate change and carbon emissions. To state the obvious, there are very rich and powerful entities that benefit enormously from the continued production and consumption of fossil fuels, whether those are nations, like Saudi Arabia, Australia, the USA and Russia, or companies like ExxonMobil, Rosneft and Saudi Aramco (the latter two, as state owned enterprises, blurring the line between the nations and the companies).

These entities, and those (many) individuals who benefit from them, are the enemies of climate action, and oppose, from a very powerful position of incumbency, actions that lessen our dependence on fossil fuels. How do these groups square this position with the consensus that climate change driven by carbon emissions is serious and imminent? Here, again, I think the situation has changed since 10 or 15 years ago. Then, I think many climate sceptics did genuinely believe that anthropogenic climate change was unimportant or non-existent. The science was complicated, it was plausible to find a global warming hiatus in the data, the modelling was uncertain – with the help of a little motivated reasoning and confirmation bias, a sceptical position could be reached in good faith.

I think this is much less true now, with the warming hiatus well and truly over. What I now suspect and fear is that the promoters of and apologists for continued fossil fuel burning know well that we’re heading towards a serious mid-century climate emergency, but they are confident that, from a position of power, their sort will be able to get through the emergency. With enough money, coercive power, and access to ample fossil fuel energy, they can be confident that it will be others that suffer. Bangladesh may disappear under the floodwaters and displace millions, but rebuilding Palm Beach won’t be a problem.

We now seem to be in a world, not of peak oil, but of a continuing abundance of fossil fuels. In these circumstances, perhaps it is wrong to think that economics can solve the problem of climate change. It is now a matter of raw power.

Is there an alternative to this bleak conclusion? For many, the solution is innovation. This is indeed our best hope – but it is not sufficient simply to incant the word. Nor is the recent focus in research policy on “grand challenges” and “missions” by itself enough to provide an implementation route for the major upheaval in the way our societies are organised that a transition to zero-carbon energy entails. For that, developing new technology will certainly be important, and we’ll need to understand how to make the economics of innovation work for us – but we can’t be naive about how new technologies, economics and political power are entwined.