The challenge of deep decarbonisation

This is roughly the talk I gave in the neighbouring village of Grindleford about a month ago, as part of a well-attended community event organised by Grindleford Climate Action.

Thanks so much for inviting me to talk to you today. It’s great to see such an impressive degree of community engagement with what is perhaps the defining issue we face today – climate change. What I want to talk about today is the big picture of what we need to do to tackle the climate change crisis.

The title of this event is “Without Hot Air” – I know this is inspired by the great book “Sustainable Energy without the Hot Air”, by the late David MacKay. David was a physicist at the University of Cambridge; he wrote this book – which is free to download – because of his frustration with the way the climate debate was being conducted. He became Chief Scientific Advisor to the Department of Energy and Climate Change in the last Labour government, but died, tragically young at 49, in 2016.

His book is about how to make the sums add up. “Everyone says getting off fossil fuels is important”, he says, “and we’re all encouraged to ‘make a difference’, but many of the things that allegedly make a difference don’t add up.”

It’s a book about being serious about climate change, putting into numbers the scale of the problem. As he says “if everyone does a little, we’ll achieve only a little.”

But to tackle climate change we’re going to need to do a lot. As individuals, we’re going to need to change the way we live. But we’re going to need to do a lot collectively too, in our communities, but also nationally – and internationally – through government action.

Net zero greenhouse gas emission by 2050?

The Government has enshrined a goal of achieving net zero greenhouse gas emissions by 2050 in legislation. This is a very good idea – it’s a better target than a notional limit on the global temperature rise, because it’s the level of greenhouse gas emissions that we have direct control over.

But there are a couple of problems.

We’ve emitted a lot of greenhouse gases already, and even if we – we being the whole world here – reach the 2050 target, we’ll have emitted a lot more. So the target doesn’t stop climate change, it just limits it – perhaps to 1.5–2° of warming or so.

Even worse, the government just isn’t being serious about doing what would need to be done to reach the target. The trouble is that 2050 sounds a long way off for politicians who think in terms of 5 year election cycles – or, indeed, at the moment, just getting through the next week or two. But it’s not long in terms of rebuilding our economy and society.

Just think how different the world is now from the world of 1990. In terms of the infrastructure of everyday life – the buildings, the railways, the roads – the answer is, not very. I’m not quite driving the same car, but the trains on the Hope Valley Line are the same ones – and they were obsolete then! Most importantly, our energy system is still dominated by hydrocarbons.

I think on current trajectory there is very little chance of achieving net zero greenhouse gas emissions by 2050 – so we’re heading for 3 or 4 degrees of warming, a truly alarming and dangerous prospect.

What are these greenhouse gases, and where do they come from? The two key gases are carbon dioxide and methane, in that order of importance. The biggest (but not the only) source of carbon dioxide is our energy system.

It is difficult to overstate the degree to which our civilization depends on cheap and abundant energy; that energy still, predominantly, comes from burning fossil fuels, producing carbon dioxide.

What’s our personal energy budget?

Let me talk now about some numbers. I know that energy units can be confusing, with talk of terawatt hours, exajoules, and millions of tonnes of oil equivalent. So I’m going to follow the example of David MacKay, and use a single unit, the kilowatt hour, and I’m going to express our energy use in terms of the average amount of energy used in a day by a single person, assuming that the UK’s energy consumption is shared out equally amongst the population.

So I’m going to talk in terms of kilowatt hours (kWh) per day per person. This is the unit our electricity bills come in, so we can express it also in currency. 1 kWh of electricity costs about 12 – 16 pence (depending on what complicated and misleading tariff you have), 1 kWh of gas costs about 4 p. A litre of petrol contains about 10 kWh worth of energy, so energy in the form of petrol costs about 13 p a kWh.
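The conversions here are simple enough to check in a few lines. This is just a sketch in Python of the arithmetic above; the pump price of roughly £1.30 per litre is my assumption, back-calculated from the 13 p per kWh figure.

```python
# Rough everyday energy-unit conversions, using the figures quoted in the text.
PETROL_KWH_PER_LITRE = 10         # approximate energy content of a litre of petrol
PETROL_PENCE_PER_LITRE = 130      # assumed pump price of about £1.30 per litre

electricity_pence_per_kwh = (12, 16)  # range quoted, tariff-dependent
gas_pence_per_kwh = 4

petrol_pence_per_kwh = PETROL_PENCE_PER_LITRE / PETROL_KWH_PER_LITRE
print(f"Energy as petrol costs about {petrol_pence_per_kwh:.0f} p per kWh")  # → 13 p
```

So, kWh for kWh, petrol energy costs roughly the same as electricity and three times as much as gas.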

What’s our share of the UK’s energy use? On average, it’s about 90 kWh per person per day [1]. And where does that come from? The problem is that a bit more than 80% of that comes from burning fossil fuels, directly releasing carbon dioxide into the atmosphere. In fact, 55 kWh is directly burnt oil and gas – gas for heating and hot water, oil for petrol and diesel in cars and trucks. And yet more gas is burnt to generate electricity.

This is a really important point. We sometimes hear the government boasting that we’ve achieved a target of 50% low carbon power, and on windy autumn days we sometimes reach a third of our power coming from wind.

But we must remember that here we’re talking about electricity, and the electricity we use is a relatively minor fraction of the total energy we use.

Roughly speaking, most of the energy we use can be divided into three boxes: fuel for driving around, gas for heating our houses, and electricity for all our appliances and gadgets – and our share of the UK’s commercial and industrial processes.

So out of the 90 kWh per person per day total, 35 kWh is used in petrol and diesel for cars and trucks, 21 kWh for gas to heat houses and hot water, and 30 kWh for the inputs into power stations to produce the 14 kWh of electricity we use.
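It’s worth checking how these three boxes add up, and what the power-station figure implies. A quick sketch, using only the numbers quoted above (the gap to 90 kWh is rounding plus smaller uses):

```python
# The per-person daily energy budget quoted in the text, in kWh/day.
budget = {
    "petrol and diesel for cars and trucks": 35,
    "gas for heating houses and hot water": 21,
    "fuel inputs into power stations": 30,
}
total = sum(budget.values())
print(f"Accounted for: {total} of the ~90 kWh/day total")  # → 86

# 30 kWh of fuel in yields only 14 kWh of electricity out:
# most of the rest is lost as waste heat at the power station.
efficiency = 14 / 30
print(f"Implied average generation efficiency: {efficiency:.0%}")  # → 47%
```

That conversion loss is one reason “electrify everything” helps: a heat pump or electric motor uses its input far more efficiently than a boiler or petrol engine.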

Is that it?

Alas, no. There are two other big things that are left out of the national statistics.

Firstly, there is aviation. This accounts for about 7% of our GHG emissions as a nation. But it attracts a lot of attention, for a few reasons. It’s growing fast, it’s very difficult to see how to decarbonize it, and it’s distributed very unevenly. Most people don’t fly at all, but a few people fly a lot. So for a well-off individual, it’s easy for it to be the dominant contribution to their carbon footprint.

One intercontinental trip a year can add 30 kWh/day to your total energy consumption; a winter trip to Malaga will add 6 kWh/day. It’s the distance you go that matters.
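To see what those daily figures mean per trip, we can simply undo the averaging over the year. The per-trip totals below are back-calculated from the daily numbers in the text, so treat them as rough:

```python
# Averaging one flight's energy over a whole year, per passenger.
DAYS_PER_YEAR = 365

intercontinental_kwh_per_trip = 30 * DAYS_PER_YEAR  # adds 30 kWh/day on average
malaga_kwh_per_trip = 6 * DAYS_PER_YEAR             # adds 6 kWh/day on average

print(intercontinental_kwh_per_trip)  # → 10950 kWh for the trip
print(malaga_kwh_per_trip)            # → 2190 kWh for the trip
```

A single intercontinental trip, then, uses about as much energy as four months of one person’s entire 90 kWh/day share.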

Secondly, there’s the stuff we buy. It costs energy – sometimes a lot of energy – to make stuff, so we ought to count the energy used to fuel our material consumption in the totals.

The figures I quote above do include UK industry – the problem is that we import much more stuff than we export. In effect, we’ve offshored much of our greenhouse gas production.

Materials like steel, cement and aluminium depend on the use of fossil fuel derived energy; so does the food we eat.

The cement works is a big landmark in the Hope Valley, and it’s a major producer of greenhouse gases. Making cement has a double impact – heat from fossil fuels is needed to power the process of making the stuff, but the chemistry of that process also produces carbon dioxide directly. The 1.5 million tonnes of cement a year the plant makes leads to the release of about 1.35 million tonnes of carbon dioxide – that’s about 0.4% of the total emissions for the whole UK!
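The cement numbers are easy to sanity-check. A sketch of the arithmetic, where the emission factor and the implied UK total are back-calculated from the figures quoted, not official statistics:

```python
# Cement emissions arithmetic, using the figures quoted in the text.
cement_tonnes = 1.5e6   # annual output of the Hope Valley plant
co2_tonnes = 1.35e6     # resulting annual CO2 release

emission_factor = co2_tonnes / cement_tonnes
print(f"{emission_factor:.2f} t of CO2 per tonne of cement")  # → 0.90

uk_share = 0.004  # "about 0.4% of the total emissions for the whole UK"
implied_uk_total = co2_tonnes / uk_share
print(f"Implied UK total: {implied_uk_total / 1e6:.0f} Mt CO2")  # → 338 Mt
```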

Staple foods like wheat and rice depend on artificial fertilizer produced from fossil fuel energy by the Haber-Bosch process – on some estimates, without this artificial fertilizer, between a third and half of the world’s population would starve.

Our electronic devices – our laptops and smartphones – are particularly energy intensive to produce. But there’s another, even less well appreciated way in which our digital lifestyle has a big carbon footprint – the servers which power the “cloud”, where all those cat videos live, consume a huge and fast increasing amount of energy. Worldwide, the production and use of digital devices like phones and computers is estimated to have a carbon footprint twice the size of aviation, and this is growing twice as fast.

How much does all this additional indirect consumption of energy – energy embedded in the stuff we import – add up to? Adding it up is quite difficult, as we don’t know for sure how energy intensive the industries all across the world producing this stuff are.

David MacKay in his book makes a bottom up estimate of the embedded energy in stuff, which comes in at about 40 kWh/day per person. This is possibly a little low – some more recent estimates put it more like 50 kWh/day per person. What we can be certain of is that this embedded energy, in the imported stuff we buy and use, is a really significant fraction of the total energy we use. The UK’s dependence on imported goods amounts to the export of a substantial part of its carbon footprint – but we can’t escape responsibility for the greenhouse gas emissions this leads to.

What can we do about reducing this?

So in round numbers, each individual’s share of the UK’s energy consumption is 90 kWh per day for direct energy use – and 80% of this derives from fossil fuels, and causes carbon emissions. Another 50 kWh per person per day is embodied in all the stuff we buy that’s imported from abroad, where it causes yet more carbon emissions. And if we take the occasional flight, that’s going to bump up our averages by a further few tens of kWh per day, depending on how far and how often we fly.
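Putting those round numbers together gives a rough total personal footprint. The flight figure here is illustrative – one intercontinental trip a year – so your own total will differ:

```python
# A rough per-person total, adding the components discussed above (kWh/day).
direct = 90      # share of UK direct energy use (DUKES-derived figure)
embedded = 50    # embodied in imported goods (uncertain estimate)
flights = 30     # illustrative: one intercontinental trip a year

total = direct + embedded + flights
fossil_direct = 0.8 * direct   # ~80% of direct use derives from fossil fuels
print(total, fossil_direct)    # → 170 72.0
```

So the visible 90 kWh/day is little more than half the story once imports and flying are counted.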

What can we do to reduce this? Firstly, we should reduce our consumption of stuff – buy less, mend more, be careful about the use of our electronic devices, switch our consumption to less carbon intensive products – including eating less meat and dairy.

But there’s no escaping that the industrial society we live in makes it difficult to use drastically less energy – we need to get around, we need to heat our houses, we do need agriculture and industry to feed us and provide the infrastructure we rely on.

We can reduce demand, by making our houses more energy efficient, by using less gas-guzzling cars. But to make a big impact, I think we need to follow the recipe David MacKay recommended – electrify everything, and decarbonise our electricity supply.

We’re beginning to see electric cars, for example – we now need their numbers to increase by a factor of a thousand or so. For this to happen, we’re going to need to see big falls in costs – mostly the cost of batteries. And we’ll need to create a whole new charging infrastructure. And if we use more electricity for cars, that electricity will have to be generated – perhaps adding another 10 kWh per person per day to the 14 kWh we use now, if all petrol and diesel cars are taken off the road.

Domestic heating is going to be difficult, because so much of our housing stock is old and retrofitting is difficult. What we need here is a massive building programme of new housing – maybe focusing on social housing, which we need anyway, making sure it is built to the right, highly energy efficient standards.

As we switch from burning fossil fuels to using electricity, even more electricity will need to be generated. Including what we need for electric cars, and replacing most industrial and domestic burning of natural gas [2], and even accounting for improvements in energy efficiency, we could easily be talking about doubling electricity generation or more, to 30-40 kWh per person per day or so [3].
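A sketch of where that doubling comes from. The car figure is from the text; the remainder for heating and industry is my assumed residual, chosen so the totals land in the 30–40 kWh range the Committee on Climate Change scenarios suggest:

```python
# Why electricity demand could double, in kWh/person/day.
current_electricity = 14
electric_cars = 10                 # figure from the text, if all cars electrify
heating_and_industry = (6, 16)     # assumed residual to span the quoted range

low = current_electricity + electric_cars + heating_and_industry[0]
high = current_electricity + electric_cars + heating_and_industry[1]
print(low, high)  # → 30 40
```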

How’s our progress towards generating that much low carbon electricity? The last decade has seen a really impressive expansion of solar energy and wind power – especially offshore wind. Yet wind and solar still account for only 2.5 kWh/person/day.

How much more can we hope for? Optimistic, but achievable, scenarios for offshore wind envisage 5 kWh per person per day by 2030, and tripling solar from 0.5 kWh per person per day to 1.5 kWh.

The biggest source of low-carbon electricity now is nuclear, which supplies 3 kWh per person per day. Most of this capacity will go offline in the next decade as the 1970s-generation Advanced Gas-cooled Reactors reach the end of their operating lives. This loss of low carbon generation capacity will pretty much wipe out all the gains from wind and solar up to now.

I know this is going to be an unpopular view with this audience, but I don’t think we can decarbonise our energy supply on the time-scale we need without nuclear power – and that means building more nuclear power stations, as well as keeping the ones we have got going as long as possible. Although I think nuclear power is an ugly technology with plenty of disadvantages [4], I don’t think we can do without it given the climate emergency we face. The gap between the low carbon electricity we need – perhaps 25-30 kWh per person per day – and what wind and solar are likely to be able to provide is just too big – 6-7 kWh per person per day is a reasonable guess of what could be in place by 2030 [5].
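The size of that gap is worth spelling out. A quick sketch using the figures above (taking the midpoint of the 6–7 kWh/day guess for wind and solar in 2030):

```python
# The 2030 low-carbon supply gap, in kWh/person/day, from the text's figures.
need = (25, 30)         # low-carbon electricity needed per person per day
wind_solar_2030 = 6.5   # midpoint of the 6-7 kWh/day guess for wind + solar

gap = (need[0] - wind_solar_2030, need[1] - wind_solar_2030)
print(gap)  # → (18.5, 23.5)
```

That is, roughly three times the plausible 2030 wind and solar contribution would still have to come from somewhere else.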

Too many people think that offshore wind and solar are an alternative to nuclear. It’s not a question of either one or the other, in my view: we need every source of low-carbon power we can find [6]. A massive buildout of low carbon generation is needed right now. It’s not happening with anything like enough urgency.

Non energy greenhouse gas emissions

I’ve focused so far on our use of energy, which is the biggest source of our greenhouse gas emissions. But there are other sources, and of these the biggest is in the way we use land, which in the UK accounts for 11% of total emissions. We can make a significant difference by changing the way land is used [7].

There’s a great and positive example here in the Peak District, that my fellow speaker tonight, Peak District National Park Authority Chair Andrew McCloy, has already mentioned. The peat uplands of the Peak – on Kinder Scout and Bleaklow, for example – are really significant sinks of carbon when they’re working well – but over the decades they’ve become degraded, to the extent that they’re actually releasing carbon dioxide, most visibly in the wildfires we’ve seen in recent summers.

Moors for the Future is a great project that is already making a material difference. I remember the Kinder Scout of my youth – or even a decade or two ago – as a black wasteland of fast-eroding peat. But I was up there a couple of weeks ago, and the work that’s been done damming up the groughs and encouraging new vegetation growth is already starting to transform the plateau.

We should certainly let more of our uplands revert to tree cover, hastening that by planting projects. It’s tempting to associate this with “carbon offsetting” – trying to atone for the carbon emissions of flying by planting more trees. I’m a bit sceptical about this – planting trees shouldn’t be a substitute for reducing other emissions – it’s necessary in itself to stop things getting worse.

So, are we going to reach net zero greenhouse gas emissions by 2050?

No, not the way we are going. All the scenarios that suggest we can do so rely on “negative emissions technologies” which currently remain at the concept stage. We have to get serious about this.

We have got to change the way we live, and we’ve got to build out a new low carbon infrastructure – including power generation, transport systems, low carbon housing. We have to start right now, but it’s going to be a project for the decades ahead.

What can individuals do and what can communities do, in the face of a challenge on this scale? We need a combination of action on the basis of individual choices, and political action on a very large scale, and communities coming together as they’re doing here tonight is the necessary beginning.

A few sources and footnotes

[1] Numbers here come, with some rounding, from the Government’s Digest of UK Energy Statistics 2018, converted into kWh per day per head of population.

[2] It’s possible that some natural gas could be replaced by hydrogen gas, which burns without producing carbon dioxide. The hydrogen would itself be made from natural gas, in a process that itself produces carbon dioxide, but this carbon dioxide would be captured and stored indefinitely underground in depleted gas reservoirs (carbon capture and storage). For more on this, see note 6.

[3] These estimates come from the Committee on Climate Change’s Net Zero technical report. They assume that some gas for heating will be replaced by hydrogen.

[4] See my earlier post Moving beyond nuclear power’s troubled history for the background, and Rebooting the UK’s nuclear new build programme for how we ought to do things better.

[5] An additional issue for an electricity grid that relies very heavily on wind and solar is “intermittency” – the fact that the wind doesn’t blow and the sun doesn’t shine all the time. There are ways of getting round this to some extent through storage and redundancy. Advances in battery technology mean that storage may be able to handle some short term fluctuations – though not (for example) the difference between winter and summer yields of solar energy. But the closer the grid gets to being 100% renewable, the more expensive it will be to mitigate intermittency.

[6] Continuing to use fossil fuels like gas and coal to generate electricity but capturing and storing the carbon dioxide, is another possibility. I’ve explained elsewhere (Carbon Capture and Storage: technically possible, but politically and economically a bad idea) why I think this is unattractive, but we will probably need to use it for things that are very difficult to decarbonise otherwise (like cement-making, and possibly making hydrogen, as in note 2 above).

[7] See Land use: Reducing emissions and preparing for climate change from the Committee on Climate Change.

What do we mean by scientific productivity – and is it really falling?

This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”

I want to focus on the idea of scientific productivity – how it is defined, and how we can measure it – and whether it is declining – and if it is, what can we do about it?

The output of science increases exponentially, by some measures…

…but what do we get back from that? What is the productivity of the scientific enterprise, defined as some measure of the output of science per unit input?

It depends on what we think the output of science is, of course.

We could be talking of some measure of the new science being produced and its impact within the scientific community.

But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?

There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists, that we have seen the best of economic growth and are living in a new age of stagnation.

Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling which brought us exponential growth in computing power in the 80’s and 90’s started to level off around 2004 and has since slowed to a crawl, despite continuing growth in resources devoted to it.

If we measure the productivity of R&D in the pharmaceutical industry as the number of new drugs produced by billion dollars of expenditure, that productivity has been exponentially falling for decades.

What of the impact of R&D expenditure on the wider economy? We expect more R&D to lead to more innovation, and more innovation to lead to economic growth. But across the developed world – and perhaps worse in the UK than in its competitors (apart from Italy) – total factor productivity (regarded as a measure of innovation in its widest sense) has stalled since the global financial crisis. Stalling productivity growth has led to stalling wage growth – this directly feeds into people’s living standards, almost certainly contributing to the sour political times we live in.

We don’t just expect science and technology to make us richer – we hope that it helps us to lead longer and healthier lives. And here too progress has been stalling.

So I think there is a case that the productivity of science at the most macro level – if we measure it in terms of economic outcomes and some measures of well-being – is faltering.

How can the choices made by funders, institutions, and individual scientists affect this? That, I would argue, needs to be a central theme of research on research.

The first thing to recognise is that collectively, choices are being made, even if there’s no individual mastermind in charge of the whole enterprise.

Here are two examples of choices that the UK has made.

Firstly, we’ve seen a growing primacy of health as the goal of publicly funded research. As we’ve seen, it’s not obvious that this emphasis has yielded the desired results in better health outcomes.

Secondly, we have chosen to concentrate research geographically – in London, Oxford and Cambridge. These are the most productive parts of a country which is enormously regionally unbalanced in terms of its economic performance. At the very least we can say that this concentration of research doesn’t help the left-behind regions catch up economically. I’d go further and say that the lack of diversity in where publicly funded science is done is actually a challenge both to its legitimacy and its effectiveness.

I think the question of what science is for becomes, in difficult times, more and more challenging.

The discourse of “grand challenges” and “missions” becomes more pressing. And who could disagree with the idea that decarbonising our energy systems, allowing everyone to have long and healthy lives, and spreading the economic benefits of science as widely as possible – both between nations and within them – should be, if not the only reason, then a big part of the reason why we – as funding bodies and society more widely – support and fund science?

But we do need to make sure that the priorities we select and the choices we make are the ones that lead most effectively to the delivery of these missions.

In the piece of work James Wilsdon and I did last year – “The Biomedical Bubble” – we began to ask some questions about how effective we are at setting research priorities which translate into the best outcomes in one particular area – health related research. This was undoubtedly a preliminary effort, but for us it exemplified what should be an important strand of the Research on Research agenda.

How do we make choices in a way that helps the scientific enterprise most effectively meet the expectations society puts onto it? I believe the Research on Research agenda needs to embrace these wider questions. We need to make explicit what outcomes we expect from science, we need to make explicit the choices we make, and we need to define the many dimensions of scientific productivity so we can do our best to improve them.

It’s the Industrial that enables the Artisanal

It’s come to this, even here. My village chippy has “teamed up” with a “craft brewery” in the next village to sell “artisanal ales” specially brewed to accompany one’s fish and chips. This prompts me to reflect – is this move from the industrial to the artisanal really a reversion to a previous, better world? I don’t think so – instead, craft beer is itself a product of modernity. It depends on capital equipment that is small scale, but dependent on high technology – on stainless steel, electrical heating and refrigeration, computer powered process control. And its ingredients aren’t locally grown and processed – the different flavours introduced by new hop varieties are the outcome of world trade. What’s going on here is not a repudiation of industrialisation, but its miniaturisation, the outcome of new technologies which erode previous economies of scale.

A craft beer from the Eyam Brewery, on sale at the Toll Bar Fish and Chip Shop, Stoney Middleton, Derbyshire.

Beer was one of the first industrial foodstuffs. In Britain, the domestic scale of early beer making began to be replaced by factory scale breweries in the 18th century, as soon as transport improved enough to allow the distribution of their products beyond their immediate locality. Burton-on-Trent was an early centre, whose growth was catalysed by the opening up of the Trent navigation in 1712. This allowed beer to be transported by water via Hull to London and beyond. By the late 18th century some 2000 barrels a year of Burton beer were being shipped to Baltic ports like Danzig and St Petersburg.

As in other process industries, this expansion was driven by fossil fuels. Coal from the nearby Staffordshire and Derbyshire coalfields provided process heat. The technological innovation of coking, which produced a purer carbon fuel which burnt without sulphur-containing fumes, was developed as early as 1640 in Derby, so coal could be used to dry malt without introducing off-flavours (this use of coke long predated its much more famous use as a replacement for charcoal in iron production).

By the late 19th century, Burton-on-Trent had become a world centre of beer brewing, producing more than 500 million litres a year, for distribution by the railway network throughout the country and export across the world. This was an industry that was fossil fuel powered and scientifically managed. Coal powered steam engines pumped the large volumes of liquid around, steam was used to provide controllable process heat, and most crucially the invention of refrigeration was the essential enabler of year-round brewing, allowing control of temperature in the fermentation process, by now scientifically understood by the cadre of formally trained chemists employed by the breweries. In a pint of Marston’s Pedigree or a bottle of Worthington White Shield, what one is tasting is the outcome of the best of 19th century food industrialisation, the mass production of high quality products at affordable prices.

How much of the “craft beer revolution” is a departure from this industrial past? The difference is one of scale – steam engines are replaced by electric pumps, coal fired furnaces by heating elements, and master brewers by thermostatic control systems. Craft beer is not a return to a preindustrial, artisanal age – instead it’s based on industrial techniques, miniaturised with new technology, and souped up by the products of world trade. This is a specific example of a point more generally made in Rachel Laudan’s excellent book “Cuisine and Empire” – so-called artisanal food comes after industrial food, and is in fact enabled by it.

What more general lessons can we learn from this example? The energy economy is another place where some people are talking about a transition from a system that is industrial and centralised to one that is small scale and decentralised – one might almost say “artisanal”. Should we be aiming for a new decentralised energy system – a world of windmills and solar cells and electric bikes and community energy trusts?

To some extent, I think this is possible and indeed attractive, leading to a greater sense of control and involvement by citizens in the provision of energy. But we should be under no illusions – this artisanal also has to be enabled by the industrial.

The products that such decentralised energy will depend on are the very opposite of artisanal – we’re not talking here about water-wheels crafted by the local mill-wright, but the products of some of the most sophisticated technological systems in the world. A solar cell is made of silicon, which itself is just made of sand. But in getting from sand to solar silicon, one goes through a set of processes that multiply the value of the material by four orders of magnitude – reduction with carbon to 99% pure (metallurgical) silicon, then the Siemens trichlorosilane process to produce poly-silicon, then Czochralski crystal growth to produce single crystals whose purity is expressed by the number of 9’s after the decimal point, with 99.9999% a bare minimum. Solar cells, batteries for bicycles and cars, the generators in wind-turbines, the power electronics to control it all, these are all sophisticated technological products that must be obtained through world trade.

And given that this infrastructure must be the product of industrial economies, will these local energy systems be generating enough surplus energy for export to run this industry? I don’t think so – so in addition to these local systems we will continue to need industrial scale low carbon energy sources – and in my opinion the heavy lifting here has to be done by a combination of offshore wind and nuclear.

The right questions to ask when we’re considering these questions of centralisation vs decentralisation come down to economies of scale – how do they arise, and how does the introduction of new technology change the balance of economies of scale? Failing to get this right leads to catastrophic mistakes like Mao’s “great leap forward” with its village blast furnaces – but there will be times, like craft beer, when new technology makes it possible to decentralise manufacturing and production.

The Stoney chip shop craft beer is, by the way, very good. And so are the fish and chips!

Carbon Capture and Storage: technically possible, but politically and economically a bad idea

It’s excellent news that the UK government has accepted the Climate Change Committee’s recommendation to legislate for a goal of achieving net zero greenhouse emissions by 2050. As always, though, it’s not enough to will the end without attending to the means. My earlier blogpost stressed how hard this goal is going to be to reach in practice. The Climate Change Committee does provide scenarios for achieving net zero, and the bad news is that the central 2050 scenario relies to a huge extent on carbon capture and storage. In other words, it assumes that we will still be burning fossil fuels, but we will be mitigating the effect of this continued dependence on fossil fuels by capturing the carbon dioxide released when gas is burnt and storing it, into the indefinite future, underground. Some use of carbon capture and storage is probably inevitable, but in my view such large-scale reliance on it is, politically and economically, a bad idea.

In the central 2050 net zero scenario, 645 TWh of electricity is generated a year – more than double the 2017 value of 300 TWh, reflecting the electrification of sectors like transport. The basic strategy for deep decarbonisation has to be, as a first approximation, to electrify everything, while simultaneously decarbonising power generation: so far, so good.

But even with aggressive expansion of renewable electricity, this scenario still calls for 150 TWh to be generated from fossil fuels, in the form of gas power stations. To achieve zero carbon emissions from this fossil fuel powered electricity generation, the carbon dioxide released when the gas is burnt has to be captured at the power stations and pumped through a specially built infrastructure of pipes to disused gas fields in the North Sea, where it is injected underground for indefinite storage. This is certainly technically feasible – to produce 150 TWh of electricity from gas, around 176 million tonnes of carbon dioxide a year will be produced. For comparison currently about 42 million tonnes of natural gas a year is taken out of the North Sea reservoirs, so reversing the process at four times the scale is undoubtedly doable.
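The scale comparison in that last sentence is worth making explicit. A sketch of the arithmetic, using only the figures quoted above:

```python
# Scale of the proposed CCS operation, from the figures in the text (Mt/year).
co2_to_inject = 176     # CO2 captured from 150 TWh of gas-fired generation
gas_extracted = 42      # natural gas currently taken out of the North Sea

ratio = co2_to_inject / gas_extracted
print(f"About {ratio:.1f}x the current mass flow, in reverse")  # → 4.2x

# The scenario also more than doubles total generation: 645 TWh vs 300 TWh.
print(f"Generation growth factor: {645 / 300:.2f}")  # → 2.15
```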

In fact, more carbon capture and storage will be needed than the 176 million tonnes from the power sector, because the net zero greenhouse gas plan relies on it in four distinct ways. In addition to allowing us to carry on burning gas to make electricity, the plan envisages capturing carbon dioxide from biomass-fired power stations too. This should lead to a net lowering of the amount of carbon dioxide in the atmosphere – a so-called “negative emissions technology”. The idea is that these “negative emissions” offset the remaining positive emissions from hard-to-decarbonise sectors like aviation, to achieve overall net zero.

Meanwhile the plan envisages the large-scale conversion of natural gas to hydrogen, to replace natural gas in industry and domestic heating. Each molecule of methane yields two molecules of hydrogen – which can be burnt in domestic boilers without carbon emissions – and one molecule of carbon dioxide, which needs to be captured at the hydrogen plant and pumped away to the North Sea reservoirs. Finally, some carbon-dioxide-producing industrial processes will remain – steel making and cement production – and carbon capture and storage will be needed to render these processes zero carbon. These latter uses are probably inevitable.
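How much hydrogen each methane molecule yields depends on the conversion route; two idealised overall reactions (added here for illustration) are:

```latex
\mathrm{CH_4 + O_2 \;\rightarrow\; CO_2 + 2\,H_2} \qquad \text{(partial oxidation)}
\mathrm{CH_4 + 2\,H_2O \;\rightarrow\; CO_2 + 4\,H_2} \qquad \text{(steam reforming + water--gas shift)}
```

Either way, exactly one molecule of carbon dioxide per molecule of methane has to be captured and stored.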

But I want to focus on the principal envisaged use of carbon capture and storage – as a way of avoiding the need to move entirely to low carbon electricity generated by renewables like wind and solar, and by nuclear power. We need to take a global perspective – if the UK achieves net zero greenhouse gas status by 2050, but the rest of the world carries on as normal, that helps no-one.

In my opinion, the only way we can be sure that the whole world will decarbonise is if low carbon energy – primarily wind, solar and nuclear – comes in at a lower cost than fossil fuels, without subsidies or other intervention. The cost of these technologies will surely come down: for this to happen, we need both to deploy them in their current form, and to do research and development to improve them. We need both the “learning by doing” that comes from implementation, and the cost reductions that will come from R&D, whether that’s making incremental process improvements to the technologies as they currently stand, or developing radically new and better versions of these technologies.

But we will never achieve these technological improvements and corresponding cost reductions for carbon capture and storage.

It’s always tempting fate to say “never” for the potential for new technologies – but there’s one exception, and that’s when a putative new technology would need to break one of the laws of thermodynamics. No-one has ever come out ahead betting against these.

Carbon capture and storage will always require additional expenditure over and above the cost of an unabated gas power station. It imposes both:

  • up-front capital costs, for the plant to separate the carbon dioxide in the first place, and the infrastructure to pipe the carbon dioxide long distances and pump it underground;
  • lowered conversion efficiencies and higher running costs – i.e. more gas needs to be burnt to produce a given unit of electricity.

The latter is an inescapable consequence of the second law of thermodynamics – carbon capture always needs a separation step. Either one takes air and separates out the pure oxygen, so that burning the gas produces a pure waste stream of carbon dioxide and water; or one takes the exhaust from burning the gas in air, and pulls the carbon dioxide out of that. Either way, a mixed gas must be separated into its components – and that always takes an energy input, to drive the loss of entropy that follows from separating a mixture.
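This thermodynamic floor can be made quantitative: the minimum (reversible) work to pull one mole of pure CO2 out of a large volume of mixture at mole fraction x is −RT ln x. A minimal sketch, assuming CO2 at ~4% in cooled gas-turbine exhaust (both the temperature and the mole fraction are my assumptions):

```python
from math import log

R = 8.314      # gas constant, J/(mol K)
T = 313.0      # flue gas temperature after cooling, K (assumption)
x_co2 = 0.04   # CO2 mole fraction in gas-turbine exhaust (assumption)

# Reversible (minimum) work to draw one mole of pure CO2 out of a large
# reservoir of flue gas at mole fraction x: W_min = -R*T*ln(x)
w_min_per_mol = -R * T * log(x_co2)          # J per mol CO2

mol_per_tonne = 1e6 / 44.0                   # moles of CO2 in one tonne
w_min_per_tonne = w_min_per_mol * mol_per_tonne / 3.6e6  # kWh per tonne

print(f"minimum separation work: {w_min_per_mol/1000:.1f} kJ/mol")
print(f"                       = {w_min_per_tonne:.0f} kWh per tonne of CO2")
```

Real capture processes need several times this minimum, but no amount of engineering can take the cost below the floor – which is the point of the argument above.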

The key point, then, is that no matter how much better our technology gets, power produced by a gas power station with carbon capture and storage will always be more expensive than power from unabated gas. The capital cost of the plant will be greater, and so will the revenue cost per kWh. No amount of technological progress can ever change this.

So there can only be a business case for carbon capture and storage through significant government interventions in the market, either through a subsidy or through a carbon tax. Politically, this is an inherently unstable situation. Even after the capital cost of the carbon capture infrastructure has been written off, at any time the plant operator will be able to generate electricity more cheaply by releasing the carbon dioxide produced when the gas is burnt. Taking an international perspective, this leads to a massive free-rider problem. Any country will be able to gain a competitive advantage at any time by turning the carbon capture off – so there needs to be a fully enforced international agreement to impose carbon taxes at a high enough level to make the economics work. I’m not confident that such an agreement – which, to be effective, would have to cover every country making a significant contribution to carbon emissions – can be relied upon to hold on the scale of many decades.

I do accept that some carbon capture and storage probably is essential, to capture emissions from cement and steel production. But carbon capture and storage from the power sector is a climate change solution for a world that does not exist any more – a world of multilateral agreements and transnational economic rationality. Any scenario that relies on it is just a politically very risky way of persuading ourselves that fossil-fuelled business as usual is sustainable, postponing the necessary large-scale implementation – and improvement through R&D – of genuine low carbon energy technologies: renewables like wind and solar, and nuclear.

    A Resurgence of the Regions: rebuilding innovation capacity across the whole UK

    The following is the introduction to a working paper I wrote while recovering from surgery a couple of months ago. This brings together much of what I’ve been writing over the last year or two about productivity, science and innovation policy and the need to rebalance the UK’s innovation system to increase R&D capacity outside London and the South East. It discusses how we should direct R&D efforts to support big societal goals, notably the need to decarbonise our energy supply and refocus health related research to make sure our health and social care system is humane and sustainable. The full (53 page) paper can be downloaded here.

    We should rebuild the innovation systems of those parts of the country outside the prosperous South East of England. Public investments in new translational research facilities will attract private sector investment, bring together wider clusters of public and business research and development, institutions for skills development, and networks of expertise, boosting innovation and leading to productivity growth. In each region, investment should be focused on industrial sectors that build on existing strengths, while exploiting opportunities offered by new technology. New capacity should be built in areas like health and social care, and the transition to low carbon energy, where the state can use its power to create new markets to drive the innovation needed to meet its strategic goals.

    This would address two of the UK’s biggest structural problems: its profound disparities in regional economic performance, and a research and development intensity – especially in the private sector and for translational research – that is low compared to competitors. By focusing on ‘catch-up’ economic growth in the less prosperous parts of the country, this plan offers the most realistic route to generating a material change in the total level of economic growth. At the same time, it should make a major contribution to reducing the political and social tensions that have become so obvious in recent years.

    The global financial crisis brought about a once-in-a-lifetime discontinuity in the rate of growth of economic quantities such as GDP per capita, labour productivity and average incomes; their subsequent decade-long stagnation signals that this event was not just a blip, but a transition to a new, deeply unsatisfactory, normal. A continuation of the current policy direction will not suffice; change is needed.

    Our post-crisis stagnation has more than one cause. Some sources of pre-crisis prosperity have declined, and will not – and should not – come back. North Sea oil and gas production peaked around the turn of the century. Financial services provided a motor for the economy in the run-up to the global financial crisis, but this proved unsustainable.

    Beyond the unavoidable headwinds imposed by the end of North Sea oil and the financial services bubble, the wider economy has disappointed too. There has been a general collapse in total factor productivity growth – the economy is less able to create higher value products and services from the same inputs than in previous decades. This is a problem of declining innovation in its broadest sense.

There are some industry-specific issues. The pharmaceutical industry, for example, has been the UK’s leading science-led industry, and was a major driver of productivity growth before 2007; it has since been suffering from a world-wide malaise, in which lucrative new drugs seem harder and harder to find.

    Yet many areas of innovation are flourishing, presenting opportunities to create new, high value products and services. It’s easy to get excited about developments in machine learning, the ‘internet of things’ and ‘Industrie 4.0’, in biotechnology, synthetic biology and nanotechnology, in new technologies for generating and storing energy.

    But the productivity data shows that UK companies are not taking enough advantage of these opportunities. The UK economy is not able to harness innovation at a sufficient scale to generate the economic growth we need.

Up to now, the UK’s innovation policy has been focused on academic science. We rightly congratulate ourselves on the strength of our science base, as measured by the Nobel prizes won by UK-based scientists and the impact of their publications.

    Despite these successes, the UK’s wider research and development base suffers from three faults:
    • It is too small for the size of our economy, as measured by R&D intensity,
    • It is particularly weak in translational research and industrial R&D,
    • It is too geographically concentrated in the already prosperous parts of the country.

    Science policy has been based on a model of correcting market failure, with an overwhelming emphasis on the supply side – ensuring strong basic science and a supply of skilled people. We need to move from this ‘supply side’ science policy to an innovation policy that explicitly creates demand for innovation, in order to meet society’s big strategic goals.

    Historically, the main driver for state investment in innovation has been defence. Today, the largest fraction of government research and development supports healthcare – yet this is not done in a way that most effectively promotes either the health of our citizens or the productivity of our health and social care system.

    Most pressingly, we need innovation to create affordable low carbon energy. Progress towards decarbonising our energy system is not happening fast enough, and innovation is needed to decrease the price of low carbon energy and increase its scale, and increase energy efficiency.

    More attention needs to be paid to the wider determinants of innovation – organisation, management quality, skills, and the diffusion of innovation as much as discovery itself. We need to focus more on the formal and informal networks that drive innovation – and in particular on the geographical aspects of these networks. They work well in Cambridge – why aren’t they working in the North East or in Wales?

    We do have examples of new institutions that have catalysed the rebuilding of innovation systems in economically lagging parts of the country. Translational research institutions such as Coventry’s Warwick Manufacturing Group, and Sheffield’s Advanced Manufacturing Research Centre, bring together university researchers and workers from companies large and small, help develop appropriate skills at all levels, and act as a focus for inward investment.

    These translational research centres offer models for new interventions that will raise productivity levels in many sectors – not just in traditional ‘high technology’ sectors, but also in areas of the foundational economy such as social care. They will drive the innovation needed to create an affordable, humane and effective healthcare system. We must also urgently reverse decades of neglect by the UK of research into new sustainable energy systems, to hasten the overdue transition to a low carbon economy. Developing such centres, at scale, will do much to drive economic growth in all parts of the country.

    Continue to read the full (53 page) paper here (PDF).

    The climate crisis now comes down to raw power

Fifteen years ago it was possible to be optimistic about the world’s capacity to avert the worst effects of climate change. The transition to low carbon energy was clearly going to be challenging, and it probably wasn’t going to be fast enough. But it did seem to be going with the grain of the evolution of the world’s energy economy, in this sense: oil prices seemed to be on an upward trajectory, squeezed between the increasingly constrained supplies predicted by “peak oil” theories and the seemingly endless demand driven by fast-developing countries like China and India. If fossil fuels were set to become ever more expensive and ever less available, then renewable energy would inevitably take their place – subsidies might bring forward its deployment, but the ultimate destination of a decarbonised energy system was assured.

The picture looks very different today. Oil prices collapsed in the wake of the global financial crisis, and after a short recovery have now fallen below the most pessimistic predictions of a decade ago. As Vaclav Smil has frequently stressed, long-range forecasting of energy trends is a mug’s game; this is well illustrated in my plot, which shows the long-term evolution of real oil prices together with successive decadal predictions by the USA’s Energy Information Administration.


    Successive predictions for future oil prices made by the USA’s EIA in 2000 and 2010, compared to the actual outcome up to 2016.

    What underlies this fall in oil prices? On the demand side, this partly reflects slower global economic growth than expected. But the biggest factor has been a shock on the supply side – the technological revolution behind fracking and the large-scale exploitation of tight oil, which has pushed the USA ahead of Saudi Arabia as the world’s largest producer of oil. The natural gas supply situation has been transformed, too, through a combination of fracking in the USA and the development of a long-distance market in LNG from giant reservoirs in places like Qatar and Iran. Since 1997, world gas consumption has increased by 25% – but the size of proven reserves has increased by 50%. The uncomfortable truth is that we live in a world awash with cheap hydrocarbons. As things now stand, economics will not drive a transition to low carbon energy.

A transition to low carbon energy will, as things currently stand, cost money. The economist Jean Pisani-Ferry puts this very clearly in a recent article: “let’s be clear: the green transition will not be a free lunch … we’ll be putting a price on something that previously we’ve enjoyed for free”. Of course, this reflects failings of the market economy that economics already understands. If we heat our houses by burning cheap natural gas, rather than installing an expensive ground-source heat pump and running it off electricity from offshore wind, we get the benefit of saving money, but we impose the costs of the climate change we contribute to on someone else entirely (perhaps the Bangladesh delta dweller whose village gets flooded). And if we are moved to pay out for the low-carbon option, the sense of satisfaction that our ethical superiority over our gas-guzzling neighbours gives us might be tempered by resentment of their greater disposable income.

    The problems of uncosted externalities and free riders are well known to economists. But just because the problems are understood, it doesn’t mean they have easy solutions. The economist’s favoured remedy is a carbon tax, which puts a price on the previously uncosted deleterious effects of carbon emissions on the climate, but leaves the question of how best to mitigate the emissions to the market. It’s an elegant and attractive solution, but it suffers from two big problems.

    The first is that, while it’s easy to state that emitting carbon imposes costs on the rest of the world, it’s very difficult to quantify what those costs are. The effects of climate change are uncertain, and are spread far into the future. We can run a model which will give us a best estimate of what those costs might be, but how much weight should we give to tail risk – the possibility that climate change leads to less likely, but truly catastrophic outcomes? What discount rate – if any – should we use, to account for the fact that we value things now more than things in the future?
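The sensitivity to the discount rate is easy to illustrate with hypothetical numbers (all figures here are purely illustrative, not estimates of actual climate damage):

```python
def present_value(amount, rate, years):
    """Discounted present value of a cost incurred `years` from now."""
    return amount / (1 + rate) ** years

# A hypothetical £1 trillion of climate damage incurred 80 years from now,
# valued today under different discount rates
damage, horizon = 1.0e12, 80
for rate in (0.001, 0.01, 0.03, 0.05):
    pv = present_value(damage, rate, horizon)
    print(f"discount rate {rate:.1%}: present value £{pv/1e9:.0f} bn")
```

The same future damage is worth most of £1 trillion at a near-zero discount rate, but only a few tens of billions at 5% – the choice of rate, an ethical as much as an economic judgment, dominates the answer.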

The second is that we don’t have a world authority that can impose a single tax uniformly. Carbon emissions are a global problem, but taxes will be imposed by individual nations, and given the huge and inescapable uncertainty about what the fair level of a carbon tax would be, it’s inevitable that countries will impose carbon taxes at the low end of the range, so they don’t disadvantage their own industries and their own consumers. This will lead to big distortions of global trade, as countries attempt to combat “carbon leakage”, where goods made in countries with lower carbon taxes undercut goods which more fairly price the carbon emitted in their production.

The biggest problems, though, will be political. We’ve already seen, in the “Gilets Jaunes” protests in France, how raising fuel prices can be a spark for disruptive political protest. Populist, authoritarian movements like those led by Trump in the USA are associated with enthusiasm for fossil fuels like coal and oil and a downplaying or denial of the reality of the link between climate change and carbon emissions. To state the obvious, there are very rich and powerful entities that benefit enormously from the continued production and consumption of fossil fuels, whether those are nations, like Saudi Arabia, Australia, the USA and Russia, or companies like ExxonMobil, Rosneft and Saudi Aramco (the latter two, as state owned enterprises, blurring the line between the nations and the companies).

These entities, and those (many) individuals who benefit from them, are the enemies of climate action, and oppose, from a very powerful position of incumbency, actions that lessen our dependence on fossil fuels. How do these groups square this position with the consensus that climate change driven by carbon emissions is serious and imminent? Here, again, I think the situation has changed over the last 10 or 15 years. Then, I think many climate sceptics did genuinely believe that anthropogenic climate change was unimportant or non-existent. The science was complicated, it was plausible to find a global warming hiatus in the data, the modelling was uncertain – with the help of a little motivated reasoning and confirmation bias, a sceptical position could be reached in good faith.

    I think this is much less true now, with the warming hiatus well and truly over. What I now suspect and fear is that the promoters of and apologists for continued fossil fuel burning know well that we’re heading towards a serious mid-century climate emergency, but they are confident that, from a position of power, their sort will be able to get through the emergency. With enough money, coercive power, and access to ample fossil fuel energy, they can be confident that it will be others that suffer. Bangladesh may disappear under the floodwaters and displace millions, but rebuilding Palm Beach won’t be a problem.

We now seem to be in a world, not of peak oil, but of a continuing abundance of fossil fuels. In these circumstances, perhaps it is wrong to think that economics can solve the problem of climate change. It is now a matter of raw power.

    Is there an alternative to this bleak conclusion? For many the solution is innovation. This is indeed our best hope – but it is not sufficient simply to incant the word. Nor is the recent focus in research policy on “grand challenges” and “missions” by itself enough to provide an implementation route for the major upheaval in the way our societies are organised that a transition to zero-carbon energy entails. For that, developing new technology will certainly be important, and we’ll need to understand how to make the economics of innovation work for us, but we can’t be naive about how new technologies, economics and political power are entwined.

    Rebooting the UK’s nuclear new build programme

    80% of our energy comes from burning fossil fuels, and that needs to change, fast. By the middle of this century we need to be approaching net zero carbon emissions, if the risk of major disruption from climate change is to be lowered – and the middle of this century is not very far away, when measured in terms of the lifetime of our energy infrastructure.

My last post – If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap – tried to quantify the scale of the problem. All our impressive recent progress in implementing wind and solar energy will be wiped out by the loss of 60 TWh/year of low-carbon energy over the next decade, as the UK’s fleet of Advanced Gas-cooled Reactors is retired; even with the most optimistic projections for the growth of wind and solar, without new nuclear build the prospect of decarbonising our electricity supply remains distant. And, above all, we always need to remember that the biggest part of our energy consumption comes from directly burning oil and gas – for transport, industry and domestic heating – and this needs to be replaced by more low carbon electricity. We need more nuclear energy.

    The UK’s current nuclear new build plans are in deep trouble

All but one of our existing nuclear power stations will be shut down by 2030 – only the Pressurised Water Reactor at Sizewell B, rated at 1.2 GW, will remain. So, without any new nuclear power stations opening, around 60 TWh a year of low carbon energy will be lost. What is the current status of our nuclear new build programme? Here’s where we are now:

  • Hinkley Point C – 3.2 GW capacity, consisting of 2 Areva EPR units, is currently under construction, with the first unit due to be completed by the end of 2025.
  • Sizewell C – 3.2 GW capacity, consisting of 2 Areva EPR units, would be a duplicate of Hinkley C. The design is approved, but the project awaits site approval and an investment decision.
  • Bradwell B – 2-3 GW capacity. As part of the deal for Chinese support for Hinkley C, it was agreed that the Chinese state nuclear corporation CGN would install 2 (or possibly 3) Chinese-designed pressurised water reactors, the CGN HPR1000. Generic Design Assessment of the reactor type is currently in progress; site approval and a final investment decision are still needed.
  • Wylfa – 2.6 GW, 2 × 1.3 GW Hitachi ABWR. Generic Design Assessment has been completed, but the project has been suspended by its key investor, Hitachi.
  • Oldbury – 2.6 GW, 2 × 1.3 GW Hitachi ABWR. A duplicate of Wylfa; project suspended.
  • Moorside, Cumbria – 3.4 GW, 3 × 1.1 GW Westinghouse AP1000. GDA completed, but the project has been suspended by its key investor, Toshiba.

This leaves us with three scenarios for the post-2030 period.

    We can, I think, assume that Hinkley C is definitely happening – if that is the limit of our expansion of nuclear power, we’ll end up with about 24 TWh a year of low carbon electricity from nuclear, less than half the current amount.

    With Sizewell C and Bradwell B, which are currently proceeding, though not yet finalised, we’ll have 78 TWh a year – this essentially replaces the lost capacity from our AGR fleet, with a small additional margin.

Only with the currently suspended projects – at Wylfa, Oldbury, and Moorside – would we be substantially increasing nuclear’s contribution to low carbon electricity, roughly doubling the current contribution, to 143 TWh per year.
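These scenario totals can be roughly reproduced from the listed plant capacities; the ~90% capacity factor here is my assumption, and small differences from the figures above reflect that choice:

```python
HOURS = 8760   # hours in a year
CF = 0.9       # assumed capacity factor for new nuclear plant

plants = {             # capacity in GW, from the list above
    "Hinkley C":  3.2,
    "Sizewell C": 3.2,
    "Bradwell B": 3.0,   # taking the upper end of the 2-3 GW range
    "Wylfa":      2.6,
    "Oldbury":    2.6,
    "Moorside":   3.4,
}

def annual_twh(names):
    """Annual generation, in TWh, for the named plants at the assumed CF."""
    return sum(plants[n] for n in names) * HOURS * CF / 1000

print(f"Hinkley C only:          {annual_twh(['Hinkley C']):.0f} TWh/yr")
print(f"+ Sizewell C, Bradwell B: "
      f"{annual_twh(['Hinkley C', 'Sizewell C', 'Bradwell B']):.0f} TWh/yr")
print(f"all six projects:        {annual_twh(plants):.0f} TWh/yr")
```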

    Transforming the economics of nuclear power

    Why is nuclear power so expensive – and how can it be made cheaper? What’s important to understand about nuclear power is that its costs are dominated by the upfront capital cost of building a nuclear power plant, together with the provision that has to be made for safely decommissioning the plant at the end of its life. The actual cost of running it – including the cost of the nuclear fuel – is, by comparison, quite small.

Let’s illustrate this with some rough indicative figures. The capital cost of Hinkley C is about £20 billion, and the cost of decommissioning it at the end of its 60-year expected lifespan is £8 billion. For the investors to receive a guaranteed return of 9%, the plant has to generate a cashflow of £1.8 billion a year to cover the cost of capital. If the plant is able to operate at 90% capacity, this amounts to about £72 a MWh of electricity produced. If one adds on the recurrent costs – for operation and maintenance, and the fuel cycle – of about £20 a MWh, this gets one to the so-called “strike price” of £92 a MWh, which the project has been guaranteed under the terms of its deal with the UK government.

    Two things come out from this calculation – firstly, this cost of electricity is substantially more expensive than the current wholesale price (about £62 per MWh, averaged over the last year). Secondly, nearly 80% of the price covers the cost of borrowing the capital – and 9% seems like quite a high rate at a time of historically low long-term interest rates.

EDF itself can borrow money on the bond market at 5%. At 5%, the cost of financing the capital comes to about £1.1 billion a year, which would be recovered at an electricity price of a bit more than £60 a MWh. Why the difference? In effect, the project’s investors – the French state-owned company EDF, with a 2/3 stake, the rest being held by the Chinese state-owned company CGN – receive about £700 million a year to compensate them for the risks of the project.

    Of course, the UK state itself could have borrowed the money to finance the project. Currently, the UK government can borrow at 1.75% fixed for 30 years. At 2%, the financing costs would come down from £1.8 billion a year to £0.7 billion a year, requiring a break-even electricity price of less than £50 a MWh. Of course, this requires the UK government to bear all the risk for the project, and this comes at a price. It’s difficult to imagine that that price is more than £1 billion a year, though.

    If part of the problem of the high cost of nuclear energy comes from the high cost of capital baked into the sub-optimal way the Hinkley Point deal has been structured, it remains the case that the capital cost of the plant in the first place seems very high. The £20 billion cost of Hinkley Point is indeed high, both in comparison to the cost of previous generations of nuclear power stations, and in comparison with comparable nuclear power stations built recently elsewhere in the world.

    Sizewell B cost £2 billion at 1987 prices for 1.2 GW of capacity – scaling that up to 3.2 GW and putting it in current money suggests that Hinkley C should cost about £12 billion.
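As a sketch of that scaling – assuming simple linear scaling with capacity, and a 1987-to-today price deflator of roughly 2.2, which is my assumption:

```python
sizewell_b_cost_1987 = 2.0e9   # £, at 1987 prices
sizewell_b_gw = 1.2            # Sizewell B capacity, GW
hinkley_gw = 3.2               # Hinkley C capacity, GW
deflator_1987_now = 2.2        # assumed general price inflation since 1987

# Scale the 1987 cost linearly with capacity, then into today's money
scaled = sizewell_b_cost_1987 * (hinkley_gw / sizewell_b_gw) * deflator_1987_now
print(f"Sizewell B cost, scaled to 3.2 GW in today's money: £{scaled/1e9:.0f}bn")
```

On these assumptions the comparable figure is about £12 billion, against the £20 billion Hinkley C is actually costing.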

    Some of the additional cost can undoubtedly be ascribed to the new safety features added to the EPR. The EPR is an evolution of the original pressurised water reactor design; all pressurised water reactors – indeed all light water reactors (which use ordinary, non-deuterated, water as both moderator and coolant) – are susceptible to “loss of coolant accidents”. In one of these, if the circulating water is lost, even though the nuclear reaction can be reliably shut down, the residual heat from the radioactive material in the core can be great enough to melt the reactor core, and to lead to steam reacting with metals to create explosive hydrogen.
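The scale of this residual heat can be estimated with the Wigner-Way approximation for decay heat; the EPR-like core thermal power and the prior operating period used here are my assumptions:

```python
def decay_heat_fraction(t_s, t_op_s=2 * 365 * 24 * 3600):
    """Wigner-Way estimate of decay heat as a fraction of operating power,
    t_s seconds after shutdown, following t_op_s seconds of operation."""
    return 0.0622 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

P_THERMAL = 4.5e9   # W, approximate EPR core thermal power (assumption)

for label, t in [("1 second", 1), ("1 hour", 3600),
                 ("1 day", 86400), ("1 month", 30 * 86400)]:
    frac = decay_heat_fraction(t)
    print(f"{label:>8} after shutdown: {frac:5.2%} of full power "
          f"= {frac * P_THERMAL / 1e6:5.0f} MW")
```

Even a full day after the chain reaction has stopped, the core is still producing heat at the scale of tens of megawatts – which is why a loss of cooling remains dangerous long after shutdown.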

    The experience of loss of coolant accidents at Three Mile Island and (more seriously) Fukushima has prompted new so-called generation III or gen III+ reactors to incorporate a variety of new features to mitigate potential loss-of-coolant accidents, including methods for passive backup cooling systems and more layers of containment. The experience of 9/11 has also prompted designs to consider the effect of a deliberate aircraft crash into the building. All these extra measures cost money.

    But even nuclear power plants of the same design cost significantly more to build in Europe and the USA than they do in China or Korea – more than twice as much, in fact. Part of this is undoubtedly due to higher labour costs (including both construction workers and engineers and other professionals). But there are factors leading to these other countries’ lower costs that can be emulated in the UK – they arise from the fact that both China and Korea have systematically got better at building reactors by building a sequence of them, and capturing the lessons learnt from successive builds.

    In the UK, by contrast, no nuclear power station has been built since 1995, so in terms of experience we’re starting from scratch. And our programme of nuclear new build could hardly have been designed in a way that made it more difficult to capture these benefits of learning, with four quite different designs being built by four different sets of contractors.

We can learn the lessons of previous experiences of nuclear builds. The previous EPR installations at Olkiluoto, in Finland, and Flamanville, in France – both of which have ended up hugely over-budget and late – indicate what mistakes we should avoid, while the Korean programme – to date the only significant nuclear build-out to substantially reduce capital costs over its course – offers some more positive lessons. To summarise –

  • The design needs to be finalised before building work begins – late changes impose long delays and extra costs;
  • Multiple units should be installed on the same site;
  • A significant effort to develop proven and reliable supply chains and a skilled workforce pays big dividends;
  • Poor quality control and inadequate supervision of sub-contractors lead to long delays and huge extra costs;
  • A successful national nuclear programme involves sequential installation of identical designs on different sites, retaining the learning and skills of the construction teams;
  • Modular construction and manufacturing techniques should be used as much as possible.

The last point supports the more radical idea of making the entire reactor in a factory rather than on-site. This has the advantage of ensuring that all the benefits of learning-by-doing are fully captured, and allows much closer control over quality, while making easier the kind of process innovation that can make significant reductions in manufacturing cost.

The downside is that this kind of modular manufacturing is only possible for reactors on a considerably smaller scale than the >1 GW units that conventional programmes install – these “Small Modular Reactors” (SMRs) will be in the range of tens to hundreds of MW. The driving force for increasing the scale of reactor units has been to capture economies of scale in running costs and fuel efficiency. SMRs sacrifice some of these economies of scale, in the hope that economies of learning will drive down capital costs enough to compensate. Given that, for current large-scale designs, the total cost of electricity is dominated by the cost of capital, this argument is at least plausible.
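The hoped-for economies of learning can be sketched with Wright’s law: each doubling of cumulative production cuts unit cost by a fixed fraction. The first-unit cost and the 10% learning rate below are purely illustrative assumptions, not claims about any actual SMR programme:

```python
from math import log2

def unit_cost(first_cost, n_units, learning_rate=0.10):
    """Wright's law: cost of the n-th unit, where each doubling of
    cumulative production reduces unit cost by `learning_rate`."""
    b = -log2(1 - learning_rate)
    return first_cost * n_units ** -b

# Hypothetical SMR programme: £400m first unit, 10% learning rate
for n in (1, 2, 4, 8, 16, 32):
    print(f"unit {n:>2}: £{unit_cost(400e6, n)/1e6:5.0f}m")
```

On these illustrative numbers the 32nd unit comes in around 40% cheaper than the first – the kind of cost trajectory that conventional one-off, on-site construction has never delivered.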

    What the UK should do to reboot its nuclear new build programme

    If the UK is to stand any chance at all of reducing its net carbon emissions close to zero by the middle of the century, it needs both to accelerate offshore wind and solar, and get its nuclear new build programme back on track.

    It was always a very bad idea to try and implement a nuclear new build programme with more than one reactor type. Now that the Hinkley Point C project is underway, our choice of large reactor design has in effect been made – it is the Areva EPR.

    The EPR is undoubtedly a complex and expensive design, but I don’t think there is any evidence that it is fundamentally different in this from other Gen III+ designs. Recent experience of building the rival Westinghouse AP1000 design in the USA doesn’t seem to be any more encouraging. On the other hand, the suggestion of some critics that the EPR is fundamentally “unbuildable” has clearly been falsified by the successful completion of an EPR unit in Taishan, China – this was connected to the grid in December last year. The successful building of both EPRs and AP1000s in China suggests, rather, that the difficulties seen in Europe and the USA arise from systematic problems of the kind discussed in the last section rather than a fundamental flaw in any particular reactor design.

    The UK should therefore do everything it can to accelerate the Sizewell C project, where two more EPRs are scheduled to be built. This needs to happen on a timescale that ensures continuity between the construction of Hinkley C and Sizewell C, to retain the skills and supply chains that are developed and to make sure all the lessons learnt in the Hinkley build are acted on. And it should be financed in a way that’s less insanely expensive than the arrangements for Hinkley Point C, accepting the inevitability that the UK government will need to take a considerable stake in the project.

    In an ideal world, every other large nuclear reactor built in the UK in the current programme should also be an EPR. But a previous government apparently made a commitment to the Chinese state-owned enterprise CGN that, in return for taking a financial stake in the Hinkley project, it should be allowed to build a nuclear power station at Bradwell, in Essex, using the Chinese CGN HPR1000 design. I think it was a bad idea on principle to allow a foreign government to have such close control of critical national infrastructure, but if this decision has to stand, one can find silver linings. We should respect and learn from the real achievements of the Chinese in developing their own civil nuclear programme. If the primary motivation of CGN in wanting to build an HPR1000 is to improve its export potential by demonstrating its compliance with the UK’s independent and rigorous nuclear regulations, then that goal should be supported.

    We should speed up plans for replacement schemes to develop the other three sites – Wylfa, Oldbury and Moorside. The Wylfa project was the furthest advanced, and a replacement scheme based on installing two further EPR units there should be put together to begin shortly after the Sizewell C project, designed explicitly to drive further savings in capital costs by maximising learning by doing.

    The EPR is not a perfect technology, but we can’t afford to wait for a better one – the urgency of climate change means that we have to start building right now. But that doesn’t mean we should accept that no further technological progress is possible. We have to be clear about the timescales, though. We need a technology that is capable of deployment right now – and for all the reasons given above, that should be the EPR – but we need to be pursuing future technologies both at the demonstration stage, and at the earlier stages of research and development. Technologies ready for demonstration now might be deployed in the 2030’s, while anything that’s still in the R&D stage now realistically is not likely to be ready to be deployed until 2040 or so.

    The key candidate for a demonstration technology is a light water small modular reactor. The UK government has been toying with the idea of small modular reactors since 2015, and now a consortium led by Rolls-Royce has developed a design for a modular 400 MW pressurised water reactor, with an ambition to enter the generic design approval process in 2019 and to complete a first of a kind installation by 2030.

    As I discussed above, I think the arguments for small modular reactors are at the very least plausible, but we won’t know for sure how the economics work out until we try to build one. Here the government needs to play the important role of being a lead customer and commission an experimental installation (perhaps at Moorside?).

    The first light water power reactors came into operation in 1960 and current designs are direct descendants of these early precursors; light water reactors have a number of sub-optimal features that are inherent to the basic design, so this is an instructive example of technological lock-in keeping us on a less-than-ideal technological trajectory.

    There are plenty of ideas for fission reactors that operate on different principles – high temperature gas cooled reactors, liquid salt cooled reactors, molten salt fuelled reactors, sodium fast reactors, to give just a few examples. These concepts have many potential advantages over the dominant light water reactor paradigm. Some should be intrinsically safer than light water reactors, relying less on active safety systems and more on an intrinsically fail-safe design. Many promise better nuclear fuel economy, including the possibility of breeding fissile fuel from non-fissile elements such as thorium. Most would operate at higher temperatures, allowing higher conversion efficiencies and the possibility of using the heat directly to drive industrial processes such as the production of hydrogen.

    But these concepts are as yet undeveloped, and it will take many years and much money to convert them into working demonstrators. What should the UK’s role in this R&D effort be? I think we need to accept the fact that our nuclear fission R&D effort has been so far run down that it is not realistic to imagine that the UK can operate independently – instead we should contribute to international collaborations. How best to do that is a big subject beyond the scope of this post.

    There are no easy options left

    Climate change is an emergency, yet I don’t think enough people understand how difficult the necessary response – deep decarbonisation of our energy systems – will be. The UK has achieved some success in lowering the carbon intensity of its economy. Part of this has come from, in effect, offshoring our heavy industry. More real gains have come from switching electricity generation from coal to gas, while renewables – particularly offshore wind and solar – have seen impressive growth.

    But this has been the easy part. The transition from coal to gas is almost complete, and the ambitious planned build-out of offshore wind to 2030 will have occupied a significant fraction of the available shallow water sites. Completing the decarbonisation of our electricity sector without nuclear new build will be very difficult – but even if that is achieved, it doesn’t bring us even halfway to the goal of decarbonising our energy economy. 60% of our current energy consumption comes from directly burning oil – for cars and trucks – and gas – for industry and heating our homes. Much of this will need to be replaced by low-carbon energy, meaning that our electricity sector will have to be substantially expanded.

    Other alternative low carbon energy sources are unpalatable or unproven. Carbon capture and storage has never yet been deployed at scale, and represents a pure overhead on existing power generation technologies, needing both a major new infrastructure to be built and increased running costs. Scenarios that keep global warming below 2° C need so called “negative emissions technologies” – which don’t yet exist, and make no economic sense without a degree of worldwide cooperation which seems difficult to imagine at the moment.

    I understand why people are opposed to nuclear power – civil nuclear power has a troubled history, which reflects its roots in the military technologies of nuclear weapons, as I’ve discussed before. But time is running out, and the necessary transition to a zero carbon energy economy leaves us with no easy options. We must accelerate the deployment of renewable energies like wind and solar, but at the same time move beyond nuclear’s troubled history and reboot our nuclear new build programme.

    Notes on sources
    For an excellent overall summary of the mess that is the UK’s current new build programme, see this piece by energy economist Dieter Helm. For the specific shortcomings of the Hinkley Point C deal, see this National Audit Office report (and at the risk of saying “I told you so”, this is what I wrote 5 years ago: The UK’s nuclear new build: too expensive, too late). For the lessons to be learnt from previous nuclear programmes, see Nuclear Lessons Learnt, from the Royal Academy of Engineering. This MIT report – The Future of Nuclear in a carbon constrained world – has much that is useful to say about the economics of nuclear power now and about the prospects for new reactor types. For the need for negative emissions technologies in scenarios that keep global warming below 2° C, see Gasser et al.

    If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap

    The UK’s programme to build a new generation of nuclear power stations is in deep trouble. Last month, Hitachi announced that it is pulling out of a project to build two new nuclear power stations in the UK; Toshiba had already announced last year that it was pulling out of the Moorside project.

    The reaction to this news has been largely one of indifference. In one sense this is understandable – my own view is that it represents the inevitable unravelling of an approach to nuclear new build that was monumentally misconceived in the first place, maximising costs to the energy consumer while minimising benefits to UK industry. But many commentators have taken the news to indicate that nuclear power is no longer needed at all, and that we can achieve our goal of decarbonising our energy economy entirely on the basis of renewables like wind and solar. I think this argument is wrong. We should accelerate the deployment of wind and solar, but this is not enough for the scale of the task we face. The brutal fact is that if we don’t deploy new nuclear, it won’t be renewables that fill the gap, but more fossil fuels.

    Let’s recall how much energy the UK actually uses, and where it comes from. In 2017, we used just over 2200 TWh. The majority of the energy we use – 1325 TWh – is in the form of directly burnt oil and gas. 730 TWh of energy inputs went in to produce the 350 TWh of electricity we used. Of that 350 TWh, 70 TWh came from nuclear, 61.5 TWh came from wind and solar, and another 6 TWh from hydroelectricity. Right now, our biggest source of low carbon electricity is nuclear energy.

    But most of that nuclear power currently comes from the ageing fleet of Advanced Gas Cooled reactors. By 2030, all of our AGRs will be retired, leaving only Sizewell B’s 1.2 GW of capacity. In 2017, the AGRs generated a bit more than 60 TWh – by coincidence, almost exactly the same amount of electricity as the total from wind and solar.

    The growth in wind and solar power in the UK in recent years has been tremendous – but there are two things we need to stress. Firstly, without nuclear new build, the retirement of the existing AGR fleet – which has to happen over the next decade – would entirely undo this progress. Secondly, in the context of the overall scale of the challenge of decarbonisation, the contribution of both nuclear and renewables to our total energy consumption remains small – currently less than 16%.

    One very common response to this issue is to point out that the cost of renewables has now fallen so far that at the margin, it’s cheaper to bring new renewable capacity online than to build new nuclear. But this argument from marginal cost is only valid if you are only interested in marginal changes. If we’re happy with continuing to get around 80% of our energy from fossil fuels, then the marginal cost argument makes sense. But if we’re serious about making real progress towards decarbonisation – and I think the urgency of the climate change issue and the scale of the downside risks means we should be – then what’s important isn’t the marginal cost of low-carbon energy, but the whole system cost of replacing, not a few percent, but close to 100% of our current fossil fuel use.

    So how much more wind and solar energy capacity can we realistically expect to be able to build? The obvious point here is that the total amount is limited – the UK is a small, densely populated, and not very sunny island – even in the absence of economic constraints, there are limits to how much of it can be covered in solar cells. And although its position on the fringes of the Atlantic makes it a very favourable location for offshore wind, there are not unlimited areas of the relatively shallow water that current offshore wind technology needs.

    The current portfolio of offshore wind projects amounts to a capacity of 33.2 GW, with one further round of 7 GW planned. According to the most recent information I can find, “Industry says it could deliver 30GW installed by 2030”. If we assume the industry does a bit better than this, and delivers the entire current portfolio, that would produce about 120 TWh a year.
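The conversion from installed capacity to annual output can be made explicit. The ~41% capacity factor below is my assumption for modern offshore turbines, chosen because it reproduces the figure in the text; the real number varies by site and turbine generation.

```python
# Back-of-envelope: annual output of the current offshore wind portfolio.
# The capacity factor is my assumption, not a figure from the post.
capacity_gw = 33.2        # current portfolio of offshore wind projects
capacity_factor = 0.41    # assumed average for modern offshore turbines
hours_per_year = 8760

twh_per_year = capacity_gw * capacity_factor * hours_per_year / 1000
print(f"{twh_per_year:.0f} TWh per year")  # close to the post's ~120 TWh
```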

    Solar energy produced 11.5 TWh in 2017. The very fast rate of growth that led us to that point has levelled off, due to changes in the subsidy regime. Nonetheless, there’s clearly room for further expansion, both of rooftop solar and grid scale installations. The most aggressive of the National Grid scenarios envisages a tripling of solar by 2030, to 32 TWh.

    Thus by 2030, in the best case for renewables, wind and solar produce about 150 TWh of electricity, compared to our current total demand for electricity of 350 TWh. We can reasonably expect demand for electricity, all else equal, to slowly decrease as a result of efficiency measures. Estimating this by the long term rate of reduction of energy demand of 2% a year, we might hope to drive demand down to around 270 TWh by 2030. Where does that leave us? With all the new renewables, together with nuclear generation at its current level, we’d be generating 220 TWh out of 270 TWh. Adding on some biomass generation (currently about 35 TWh, much of which comes from burning environmentally dubious imported wood-chips), 6 TWh of hydroelectricity and some imported French nuclear power, and the job of decarbonising our electricity supply is nearly done. What would we do without the 70 TWh of nuclear power? We’d have to keep our gas-fired power stations running.
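The tally in the paragraph above can be laid out as a simple calculation. The only assumption I add is the 13-year span from 2017 to 2030 for compounding the 2% annual demand reduction.

```python
# Tallying the 2030 best-case scenario sketched above (all figures in TWh).
wind_2030 = 120     # offshore wind, if the full portfolio is delivered
solar_2030 = 32     # National Grid's most aggressive solar scenario
nuclear = 70        # nuclear held at today's level (requires new build)

demand_2017 = 350
years = 13          # assumed span from 2017 to 2030
demand_2030 = demand_2017 * (1 - 0.02) ** years  # 2%/yr efficiency gains

low_carbon = wind_2030 + solar_2030 + nuclear
print(f"Demand in 2030: ~{demand_2030:.0f} TWh")
print(f"Wind + solar + nuclear: {low_carbon} TWh")
print(f"Left for biomass, hydro, imports: ~{demand_2030 - low_carbon:.0f} TWh")
```

This reproduces the numbers in the text: demand falls to roughly 270 TWh, of which about 220 TWh is covered by wind, solar and nuclear – and the residual gap is small enough for biomass, hydro and imports to plausibly close.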

    But, but, but… most of the energy we use isn’t in the form of electricity – it’s directly burnt gas and oil. So if we are serious about decarbonising the whole energy system, we need to be reducing that massive 1325 TWh of direct fossil fuel consumption. The most obvious way of doing that is by shifting from directly burning oil to using low-carbon electricity. This means that to get anywhere close to deep decarbonisation we are going to need to increase our consumption of electricity substantially – and then increase our capacity for low-carbon generation to match.

    This is one driving force for the policy imperative to move away from internal combustion engines to electric vehicles. Despite the rapid growth of electric vehicles, we still use less than 0.2 TWh charging our electric cars. This compares with a total of 4.8 TWh of electricity used for transport, mostly for trains (at this point we should stop and note that we really should electrify all our mainline and suburban train-lines). But these energy totals are dwarfed by the 830 TWh of oil we burn in cars and trucks.

    How rapidly can we expect to electrify vehicle transport? This is limited by economics, by the world capacity to produce batteries, by the relatively long lifetime of our vehicle stock, and by the difficulty of electrifying heavy goods vehicles. The most aggressive scenario looked at by the National Grid suggests electric vehicles consuming 20 TWh by 2030, a more than one-hundred-fold increase on today’s figures, representing 44% a year growth compounded. Roughly speaking, 1 TWh of electricity used in an electric vehicle displaces 3.25 TWh of oil – electric motors are much more efficient at energy conversion than internal combustion engines. So even at this aggressive growth rate, electric vehicles will only have displaced 8% of the oil burnt for transport. Full electrification of transport would require more than 250 TWh of new electricity generation, unless we are able to generate substantial new efficiencies.
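The electric vehicle arithmetic above is worth checking explicitly, using only the figures already given in the text:

```python
# Checking the electric-vehicle arithmetic above (all energy in TWh).
oil_for_transport = 830   # oil burnt in cars and trucks
ev_demand_2030 = 20       # National Grid's most aggressive EV scenario
displacement = 3.25       # TWh of oil displaced per TWh of electricity

oil_displaced = ev_demand_2030 * displacement
share = oil_displaced / oil_for_transport
full_electrification = oil_for_transport / displacement

print(f"Oil displaced by 2030: {oil_displaced:.0f} TWh "
      f"({share:.0%} of transport oil)")
print(f"Electricity for full electrification: ~{full_electrification:.0f} TWh")
```

The aggressive 2030 scenario displaces only about 8% of transport oil, and full electrification needs more than 250 TWh of new generation, as the text states.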

    Last, but not least, what of the 495 TWh of gas we burn directly, to heat our homes and hot water, and to drive industrial processes? A serious programme of home energy efficiency could make some inroads into this, we could make more use of ground source heat pumps, and we could displace some with hydrogen, generated from renewable electricity (which would help overcome the intermittency problem) or (in the future, perhaps) process heat from high temperature nuclear power stations. In any case, if we do decarbonise the domestic and industrial sectors currently dominated by natural gas, several hundred more TWh of electricity will be required.
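To see why decarbonising that 495 TWh of gas implies "several hundred more TWh" of electricity, compare two illustrative routes. The efficiencies here (a heat pump coefficient of performance of 3, and ~70% electrolysis efficiency) are my assumptions; the real mix would land somewhere between the two, since industrial process heat cannot all be served by heat pumps.

```python
# Two illustrative routes for replacing the 495 TWh of directly burnt gas.
# The conversion efficiencies are my assumptions, not figures from the post.
gas_twh = 495

# Route 1: heat pumps, assuming a coefficient of performance of 3
heat_pump_electricity = gas_twh / 3

# Route 2: hydrogen from electrolysis, assuming ~70% conversion efficiency
hydrogen_electricity = gas_twh / 0.7

print(f"Heat pumps: ~{heat_pump_electricity:.0f} TWh of electricity")
print(f"Electrolytic hydrogen: ~{hydrogen_electricity:.0f} TWh of electricity")
```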

    So to achieve the deep decarbonisation we need by 2050, electricity generation will need to more than double. Where could that come from? A further doubling of solar energy from our already optimistic 2030 estimate might take that to 60 TWh. Beyond that, for renewables to make deep inroads we need new technologies. Marine technologies – wave and tide – have potential, but in terms of possible capacity deep offshore wind perhaps offers the biggest prize, with the Scottish Government estimating possible capacities up to 100 GW. But this is a new and untried technology, which will certainly be very much more expensive than current offshore wind. The problem of intermittency also substantially increases the effective cost of renewables at high penetrations, because of the need for large scale energy storage and redundancy. I find it difficult to see how the UK could achieve deep decarbonisation without a further expansion of nuclear power.

    Coming back to the near future – keeping decarbonisation on track up to 2030 – we need to bring at least enough new nuclear on stream to replace the lost generation capacity of the AGR fleet, and preferably more, while at the same time accelerating the deployment of renewables. We need to be honest with ourselves about how little of our energy currently comes from low-carbon sources; even with the progress that’s been made deploying renewable electricity, most of our energy still arises from directly burning oil and gas. If we’re serious about decarbonisation, we need the rapid deployment of all low carbon energy sources.

    And yet, our current policy for nuclear power is demonstrably failing. How should we do things differently, more quickly and at lower cost, to reboot the UK’s nuclear new build programme? That will be the subject of another post.

    Notes on sources.
    Current UK energy statistics are from the 2018 edition of the Digest of UK Energy Statistics.
    Status of current and planned offshore wind capacity, from Crown Estates consultation.
    National Grid future energy scenarios.
    Oil displaced by electric vehicles – current estimates based on worldwide data, as reported by Bloomberg New Energy Finance.

    How inevitable was the decline of the UK’s Engineering industry?

    My last post identified manufacturing as being one of three sectors in the UK which combined material scale relative to the overall size of the economy with a long term record of improving total factor productivity. Yet, as is widely known, manufacturing’s share of the economy has been in long term decline, from 27% in 1970 to 10.6% in 2014. Manufacturing’s share of employment has fallen even further, as a consequence of its above-average rate of improvement in labour productivity. This fall in importance of manufacturing has been a common feature of all developed economies, yet the UK has seen the steepest decline.

    This prompts two questions – was this decline inevitable, and does it matter? A recent book by industry veteran Tom Brown – Tragedy and Challenge: an inside view of UK Engineering’s Decline and the Challenge of the Brexit Economy, makes a strong argument that this decline wasn’t inevitable, and that it does matter. It’s a challenge to conventional wisdom, but one that’s rooted in deep experience. Brown is hardly the first to identify as the culprits the banks, fund managers, and private equity houses collectively described as “the City” – but his detailed, textured description of the ways in which these institutions have exerted their malign influence makes a compelling charge sheet against the UK economy’s excessive financialisation.

    Brown’s focus is not on the highest performing parts of manufacturing – chemicals, pharmaceuticals and aerospace – but on what he describes as the backbone of the manufacturing sector – medium technology engineering companies, usually operating business-to-business, selling the components of finished products in highly competitive, international supply chains. The book is a combination of autobiography, analysis and polemic. The focus of the book reflects Brown’s own experience managing engineering firms in the UK and Europe, and it’s his own personal reflections that provide a convincing foundation for his wider conclusions.

    His analysis rehearses the decline of the UK’s engineering sector, pointing to the wider undesirable consequences of this decline, both at the macro level, in terms of the UK’s overall declining productivity growth and its worsening balance of payments position, and at the micro level. He is particularly concerned by the role of the decline of manufacturing in hollowing out the mid-level of the jobs market, and exacerbating the UK’s regional inequality. He talks about the development of a “caste system of the southern Brahmins, who can’t be expected to leave the oxygen of London, and the northern Untouchables who should consider themselves lucky just to have a job”.

    This leads on to his polemic – that the decline of the UK’s engineering firms was not inevitable, and that its consequences have been regrettable, severe, and will be difficult to reverse.

    Brown is not blind to the industry’s own failings. Far from it – the autobiographical sections make clear what he saw was wrong with the UK’s engineering industry at the beginning of his career. The quality of management was terrible and industrial relations were dreadful; he’s clear that, in the 1970’s, the unions hastened the industry’s decline. But you get the strong impression that he believes management and unions at the time deserved each other; a chronic lack of investment in new plant and machinery, and a complete failure to develop the workforce, led to a severe loss of competitiveness.

    The union problem ended with Thatcher, but the decline continued and accelerated. Like many others, Brown draws an unfavourable comparison between the German and British traditions of engineering management. We hear a lot about the Mittelstand, but it’s really helpful to see in practice what the cultural and practical differences are. For example, Brown writes “German managers tend to be concerned about their people, and far slower to lay off in a downturn. Their training of both management and shop-floor employees is vastly better than the UK… in contrast many UK employers have expected skilled people to be available on demand, and if they fired them then they could rehire at will like the gaffer in the old ship yards”.

    For Brown, it’s no longer the unions that are the problem – it’s the City. It’s fair to say that he takes a dim view of the elevated position of the Financial Services sector since the Big Bang – “Overall the City is a major source of problems – to UK engineering, and to society as a whole. Much that has happened there is crazy, and still is. Many of our brightest and best have been sucked in and become personally corrupted.”

    But where his book adds real value is in going beyond the rhetoric to fill out the precise details of exactly how the City serves engineering firms so badly. To Brown, the fund managers and private equity houses that exert control over firms dictate strategies to the firms that are usually pretty much the opposite of what would be required for them to achieve long-term growth. Investment in new plant and equipment is starved due to an emphasis on short-term results, and firms are forced into futile mergers and acquisitions activity, which generate big fees for the advisors but are almost always counterproductive for the long-term sustainability of the firms, because they force them away from developing long-term, focused strategies. These criticisms echo many made by John Kay in his 2012 report, which Brown cites with approval, combined with disappointment that so few of the recommendations have been implemented.

    “I do not suffer fools gladly”, says Brown, a comment which sets the tone for his discussion of the fund management industry. While he excoriates fund managers for their lack of diligence and technical expertise, he condemns the lending banks for outright unethical and predatory behaviour, deliberately driving distressed companies into receivership, all the time collecting fees for themselves and their favoured partners, while stiffing the suppliers and trade creditors. The well-publicised malpractice of RBS’s “Global Restructuring Group” offers just one example.

    One very helpful section of the book discusses the way Private Equity operates. Brown makes the very important point that not enough people understand the difference between Venture Capital and Private Equity. The former, Brown believes, represents technically sophisticated investors creating genuine new value –
    “investing real equity, taking real risks, and creating value, not just transferring it”.

    But what too many politicians, and too much of the press fail to realise is that genuine Venture Capital in the UK is a very small sector – in 2014, only £0.3 billion out of a total £4.3 billion invested by BVCA members fell into this category. Most of the investment is Private Equity, in which the investments are in existing assets.

    “The PE houses’ basic model is to buy companies as cheaply as possible, seek to “enhance” them, and then sell them for as much as possible in only three years’ time, so it is extremely short-termist. They “invest” money in buying the shares of these companies from the previous owners, but they invest as little as possible into the actual companies themselves; this crucial distinction is often completely misunderstood by the government and the media who applaud the PE houses for the billions they are “investing” in British industry… in fact, much more cash is often extracted from these companies in dividends than is ever invested in them”.

    To Brown, much Private Equity is simply a vehicle for large scale tax avoidance, through eliding the distinction between debt and equity in “complex structures that just adhere to the letter of the law”. These complex structures of ownership and control lead to a misalignment of risk and reward – when their investments fail, as they often do, the PE houses get some of their investment back as it is secured debt, while trade suppliers, employees and the taxpayer get stiffed.

    To be more positive, what does Brown regard as the ingredients for success for an engineering firm? His list includes:

  • an international outlook, stressing the importance of being in the most competitive markets to understand your customers and the directions of the wider industry;
  • a long-term vision for growth, stressing innovation, R&D, and investment in latest equipment;
  • conservative finance, keeping a strong balance sheet to avoid being knocked off course by the inevitable ups and downs of the markets, allowing the firm to keep control of its own destiny;
  • a focus on the quality of people – with managements who understand engineering and are not just from a financial background, and excellent training for the shop floor workers.

    The book focuses on manufacturing and engineering, but I suspect many of its lessons have a much wider applicability. People interested in economic growth and industrial strategy necessarily, and rightly, focus on statistics, but this book offers an invaluable additional dimension of ground truth to these discussions.

    What drives productivity growth in the UK economy?

    How do you get economic growth? Economists have a simple answer – you can put in more labour, by having more people working for longer hours, or you can put in more capital, building more factories or buying more machines, or – and here things get a little more sketchy – you can find ways of innovating, of getting more outputs out of the same inputs. In the framework economists have developed for thinking about economic growth, the latter is called “total factor productivity”, and it is loosely equated with technological progress, taking this in its broadest sense. In the long run it is technological progress that drives improved living standards. Although we may not have a great theoretical handle on where total factor productivity comes from, its empirical study should tell us something important about the sources of our productivity growth. Or, in our current position of stagnation, why productivity growth has slowed down so much.

    Of course, the economy is not a uniform thing – some parts of it may be showing very fast technological progress, like the IT industry, while other parts – running restaurants, for example, might show very little real change over the decades. These differences emerge from the sector based statistics that have been collected and analysed for the EU countries by the EU KLEMS Growth and Productivity Accounts database.

    Sector percentage of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015. Data from EU KLEMS Growth and Productivity Accounts database.

    Here’s a very simple visualisation of some key results of that data set for the UK. For each sector, the relative importance of the sector to the economy as a whole is plotted on the x-axis, expressed as a percentage of the gross value added of the whole economy. On the y-axis is plotted the total change in total factor productivity over the whole 17 year period covered by the data. This, then, is the factor by which that sector has produced more output than would be expected on the basis of additional labour and capital. This may tell us something about the relative effectiveness of technological progress in driving productivity growth in each of these sectors.
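The growth accounting behind these figures can be sketched very simply: total factor productivity is the residual output growth left over once labour and capital inputs have been accounted for. The capital share and the growth rates below are illustrative assumptions, not values from the EU KLEMS data.

```python
# Minimal sketch of growth accounting: TFP is the "Solow residual",
# the output growth not explained by labour and capital inputs.
# The 0.35 capital share and the growth rates are illustrative assumptions.

def tfp_growth(output_growth, capital_growth, labour_growth,
               capital_share=0.35):
    """Solow residual: g_A = g_Y - a*g_K - (1-a)*g_L, all as fractions."""
    return (output_growth
            - capital_share * capital_growth
            - (1 - capital_share) * labour_growth)

# A sector growing 3%/yr with 2% capital growth and 1% labour growth:
residual = tfp_growth(0.03, 0.02, 0.01)
print(f"Implied TFP growth: {residual:.2%} per year")
```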

    Broadly, one can read this graph as follows: the further right a sector is, the more important it is as a proportion of the whole economy, while the nearer the top a sector is, the more dynamic its performance has been over the 17 years covered by the data. Before a more detailed discussion, we should bear in mind some caveats. What goes into these numbers are the same ingredients as go into the measurement of GDP as a whole, so all the shortcomings of that statistic are potentially issues here.

    A great starting point for understanding these issues is Diane Coyle’s book GDP: A Brief but Affectionate History. The first set of issues concerns what GDP measures and what it doesn’t. Lots of kinds of activity are important for the economy, but they only tend to count in GDP if money changes hands. New technology can shift these balances – if supermarkets replace humans at the checkouts with machines, the groceries still have to be scanned, but now the customer is doing the work for nothing.

    Then there are some quite technical issues about how the measurements are done. These include properly accounting for improvements in quality where technology is advancing very quickly; failing to fully account for the increased information transferred through a typical internet connection will mean that overall inflation is overestimated, and productivity gains in the ICT sector understated (see e.g. A Comparison of Approaches to Deflating Telecoms Services Output, PDF). For some of the more abstract transactions in the modern economy – particularly in the banking and financial services sector – some big assumptions have to be made about where and how much value is added. For example, the method used to estimate the contribution of financial services – FISIM, for “Financial intermediation services indirectly measured” – has probably materially overstated the contribution of financial services to GDP by not handling risk correctly, as argued in this recent ONS article.
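The deflator point can be seen in a toy calculation, with all numbers invented for illustration: real growth is approximately nominal growth minus measured price change, so a price index that misses quality improvements directly depresses measured real output:

```python
# Toy deflator example (all numbers invented): nominal telecoms output
# grows 10%, but the data transferred per connection has grown too, so a
# quality-adjusted price index falls where the naive one rises.
nominal_growth = 0.10

naive_deflator_change = 0.02      # measured inflation, ignoring quality change
adjusted_deflator_change = -0.05  # price per unit of quality actually fell

# Real growth is approximately nominal growth minus price change.
real_growth_naive = nominal_growth - naive_deflator_change
real_growth_adjusted = nominal_growth - adjusted_deflator_change

print(f"Real growth, naive deflator:   {real_growth_naive:.0%}")    # 8%
print(f"Real growth, quality-adjusted: {real_growth_adjusted:.0%}")  # 15%
```

The gap between the two figures is pure mismeasurement: the same underlying activity looks nearly twice as productive once the deflator reflects quality properly.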

    Finally, there’s the big question of whether increases in GDP correspond to increases in welfare. The general answer to this question is, obviously, not necessarily. Unlike some commentators, I don’t take this to mean that we shouldn’t take any notice of GDP – it is an important indicator of the health of an economy and its potential to supply people’s needs. But it does need looking at critically. A glazing company that spent its nights breaking shop windows and its days mending them would be increasing GDP, but not doing much for welfare – this is a ridiculous example, but there’s a continuum running from what the economist William Baumol called unproductive entrepreneurship, through the more extractive varieties of capitalism documented by Acemoglu and Robinson, to outright organised crime.

    To return to our plot, we might focus first on three dynamic sectors – information and communications, manufacturing, and professional, scientific, technical and admin services. Between them, these sectors account for a bit more than a quarter of the economy, and have shown significant improvements in total factor productivity over the period. In this sense it’s been ICT, manufacturing and knowledge-based services that have driven the UK economy over this period.

    Next we have a massive sector that is important, but not yet dynamic, in the sense of having demonstrated slightly negative total factor productivity growth over the period. This comprises community, personal and social services – notably including education, health and social care. Of course, in service activities like health and social care it’s very easy to mischaracterise as a lowering of productivity a change that actually corresponds to an increase in welfare. On the other hand, I’ve argued elsewhere that we’ve not devoted enough attention to the kinds of technological innovation in health and social care sectors that could deliver genuine productivity increases.

    Real estate is a sector that is both significant in size and has shown substantial apparent increases in total factor productivity. This is a point at which I think one should question the nature of the value added. A real estate business makes money by taking a commission on property transactions; hence an increase in property prices, at constant transaction volume, leads to an apparent increase in productivity. Yet I’m not convinced that a continuous increase in property prices represents the economy generating real value for people.

    Finance and insurance represents a significant part of the economy – 7% – but its overall long term increase in total factor productivity is unimpressive, and probably overstated. The importance of this sector in thinking about the UK economy represents a distortion of our political economy.

    The big outlier at the bottom left of the plot is mining and quarrying, whose total factor productivity has dropped by 50% – what isn’t shown is that its share of the economy has substantially fallen over the period too. The biggest contributor to this sector is North Sea oil, whose production peaked around 2000 and has since been rapidly falling. The drop in total factor productivity does not, of course, mean that technological progress has gone backwards in this sector. Quite the opposite – as the easy oil fields are exhausted, more resource – and better technology – are required to extract what remains. This should remind us of one massive weakness in GDP as a sole measure of economic progress – it doesn’t take account of the balance sheet, of the non-renewable natural resources we use to create that GDP. North Sea oil has largely gone now, and this represents an ongoing headwind to the UK economy that will need more innovation in other sectors to overcome.

    This approach is limited by the way the economy needs to be divided up into sectors. Of course, this sectoral breakdown is very coarse – within each sector there are likely to be outliers with very high total factor productivity growth which dramatically pull up the average of the whole sector. More fundamentally, it’s not obvious that the complex, networked nature of the modern economy is well captured by these rather rigid barriers. Many of the most successful manufacturing enterprises add substantial value to their products through the services that come attached to them, for example.

    We can look into the EU KLEMS data at a slightly finer-grained level; the next plot shows importance and dynamism for the various subsectors of manufacturing. This shows well the wide dispersion within the overall sectors – and of course within each of these subsectors there will be yet more dispersion.

    Sub-sector fraction of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015 for subsectors of manufacturing. Data from EU KLEMS Growth and Productivity Accounts database.

    The results are perhaps unsurprising – areas traditionally considered part of high-value manufacturing – transport equipment and chemicals, which include aerospace, automotive, pharmaceuticals and speciality chemicals – are found in the top-right quadrant: important in terms of their share of the economy, dynamic in terms of high total factor productivity growth. The good total factor productivity performance of textiles is perhaps more surprising, for an area often written off as part of our industrial heritage. It would be interesting to look in more detail at what’s going on here, but I suspect that a big part of it could be the value that can be added by intangibles like branding and design. Total factor productivity is not just about high tech and R&D, important though the latter is.

    Clearly this is a very superficial look at a very complicated area. Even within the limitations of the EU KLEMS data set, I’ve not considered how rates of TFP growth have varied over time – before and after the global financial crisis, for example. Nor have I considered the way shifts between sectors have contributed to overall changes in productivity across the economy – I’ve focused only on rates, not on starting levels. And of course, we’re talking here about history, which isn’t always a good guide to the future, where there will be a whole new set of technological opportunities and competitive challenges. But as we start to get serious about industrial strategy, these are the sorts of questions that we need to be looking into.