On being endorsed by Dominic Cummings

The former chief advisor to the Prime Minister, Dominic Cummings, wrote a blogpost yesterday about the need for leave voters to mobilise to make sure the Conservatives are elected on 12 December. At the end of the post, he writes “Ps. If you’re interested in ideas about how the new government could really change our economy for the better, making it more productive and fairer, you’ll find this paper interesting. It has many ideas about long-term productivity, science, technology, how to help regions outside the south-east and so on, by a professor of physics in Sheffield”. He’s referring to my paper “A Resurgence of the Regions: rebuilding innovation capacity across the whole UK”.

As I said on Twitter, “Pleased (I think) to see my paper “Resurgence of the regions” has been endorsed in Dominic Cummings’s latest blog. Endorsement not necessarily reciprocated, but all parties need to be thinking about how to grow productivity & heal our national divides”.

I provided a longer reaction to a Guardian journalist, which resulted in this story today: Academic praised by Cummings is remain-voting critic of Tory plans. Here are the comments I made to the journalist which formed the basis of the story:

I’m pleased that Dominic Cummings has endorsed my paper “Resurgence of the regions”. I think the analysis of the UK’s current economic weaknesses is important and we should be talking more about it in the election campaign. I single out the terrible record of productivity growth since the financial crisis, the consequences of that in terms of flat-lining wages, the role of the weak economy in the fiscal difficulties the government has in balancing the books, and (as others have done) the profound regional disparities in economic performance across the country. I’d like to think that Cummings shares this analysis – the persistence of these problems, though, is hardly a great endorsement for the last 9.5 years of Conservative-led government.

In response to these problems we’re going to need some radical changes in the way we run our economy. I think science and innovation is going to be important for this, and clearly Cummings thinks that too. I also offer some concrete suggestions for how the government needs to be more involved in driving innovation – especially in the urgent problem we have of decarbonising our energy supply to meet the target of net zero greenhouse gas emissions by 2050. It’s good that the Conservative Party has signed up to a 2050 Net Zero Greenhouse Gas target, but the scale of the measures it proposes are disappointingly timid – as I explain in my paper, reaching this goal is going to take much more investment, and more direct state involvement in driving innovation to increase the scale and drive the cost down of low carbon energy. This needs to be a central part of a wider industrial strategy.

I welcome all three parties’ commitment to raise the overall R&D intensity of the economy (to 2.4% of GDP by 2027 for the Conservatives, 3% of GDP by 2030 for Labour, and 2.4% by 2027 with a longer-term aspiration of 3% for the Lib Dems). The UK’s poor record of R&D investment compared to other developed countries is surely a big contributing factor to our stagnating productivity. But this is also a stretching target – we’re currently at 1.7%. It’s going to need substantial increases in public spending, but even bigger increases in R&D investment from the private sector, and we’re going to need to see much more concrete plans for how government might make this happen. Again, my paper has some suggestions, with a particular focus on building new capacity in those parts of the country where very little R&D gets done – and which, not coincidentally, have the worst economic performance (Wales, Northern Ireland, and the North of England in particular).
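
To put rough numbers on what that implies (a back-of-envelope sketch of my own: the GDP figure, the current public/private split, and holding GDP fixed to 2027 are all simplifying assumptions):

```python
# Back-of-envelope scale of the R&D intensity targets. UK GDP of ~£2.1tn and
# the current 1.7% intensity are rough assumptions; GDP growth is ignored.
GDP = 2.1e12                    # £, assumed
for label, intensity in [("today (1.7%)", 0.017),
                         ("2.4% target", 0.024),
                         ("3% aspiration", 0.030)]:
    print(f"{label}: £{intensity * GDP / 1e9:.0f}bn/year")
gap = (0.024 - 0.017) * GDP
print(f"extra needed for 2.4%: ~£{gap / 1e9:.0f}bn/year, roughly two-thirds "
      f"of it private if the current public/private split is maintained")
```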

As for Cummings’s views on Brexit: I voted remain, not least because I thought that a “leave” vote would result in a period of very damaging political chaos for the UK. I can’t say that subsequent events have made me think I was wrong on that. I do think that it would be possible for the UK to do OK outside the EU, but to succeed post-Brexit we’ll need to stay close to Europe in matters such as scientific cooperation (preferably through associating with EU science programmes like the European Research Council), and in matters related to nuclear technology. We will need to be a country that welcomes talented people from overseas, and provides an attractive destination for overseas investment – particularly important for innovation, where more than half of the UK’s business R&D is done by overseas-owned firms. The need to have a close relationship with our major trading partners will mean that we’ll need to stay in regulatory alignment with the EU (very important, for example, for the chemicals industry) and minimise frictions for industries like the automotive industry, where the UK is closely integrated into European supply chains, and for the high value knowledge-based services which are so important for the UK economy. It doesn’t look like that’s the direction of travel the Conservatives are currently going down.

Whatever happens in the next election, anyone who has any ambition to heal the economic and social divides in this country needs to be thinking about the issues I raise in my paper.

Rock climbing and the economics of innovation

The rock climber Alex Honnold’s free, solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular, his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks.

When the modern sport of climbing began, more than a hundred years ago, people wore boots – nailed boots – on their feet (as they would do for pretty much any outdoor activity). There is a lost technology of the best types of nails and nailing patterns to use, but it’s true that, as harder climbs were done in the 1920s and 30s, the leading climbers of the day tended to use tennis shoes or plimsolls for the hardest climbs. But these were everyday footwear, in no way designed for climbing.

I believe the first shoes designed specifically for rock climbing, of the kind that would be recognised as the ancestors of today’s shoes, came from France. These were designed by the alpinist Pierre Allain for use on the sandstone boulders of the Fontainebleau forest, a favoured training ground for the climbers of Paris. By the time I started climbing, in the 1970’s, the descendants of these shoes – the EB Super Gratton – had an almost complete worldwide monopoly on climbing shoes. They were characterised by a tight fit, a treadless rubber sole and a wide rand, allowing precise footwork and good friction on dry rock.

In 1982 the makers of EBs made a “New Coke”-like marketing blunder, introducing a new model with a moulded sole – probably cheaper to manufacture, but thicker and less precise than the original. This might not have mattered given their existing market position, but a then-unheard-of Spanish shoe company – Boreal – had recently introduced a model of their own, with a sole made of a new kind of high friction rubber.

Rubber is a strange material, and the microscopic origins of friction in rubber are different to those in more conventional materials like metals. When a climber steps on a tiny foothold, the sole starts to slide against the rock, very slowly, usually imperceptibly. As the rubber slides past the asperities, the internal motions within the bulk of the rubber, of molecule against molecule, dissipate energy – and the greater the rate of energy dissipation, the higher the friction. This energy dissipation, though, is a very strongly peaked function of temperature – and as a consequence, a given rubber compound will have a temperature at which the friction is at a maximum.
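
To make that temperature dependence concrete, here’s a toy numerical sketch (my own illustration, not a model of any real sole compound: it combines the textbook WLF time-temperature superposition formula with an invented log-Gaussian loss peak):

```python
import numpy as np

C1, C2, T_REF = 17.4, 51.6, 293.0   # textbook WLF constants; reference T (K)

def wlf_log_shift(T):
    """log10 of the WLF shift factor a_T: relaxation speeds up as T rises."""
    return -C1 * (T - T_REF) / (C2 + (T - T_REF))

def relative_friction(T, slide_freq_hz=1e4, loss_peak_hz=1e6, width=2.0):
    """Toy model: friction tracks viscoelastic loss, peaking when the
    temperature-shifted sliding frequency hits the loss peak (log-Gaussian)."""
    log_reduced = np.log10(slide_freq_hz) + wlf_log_shift(T)
    return np.exp(-((log_reduced - np.log10(loss_peak_hz)) / width) ** 2)

for T in np.linspace(273, 313, 9):          # 0 C to 40 C
    print(f"{T - 273:5.1f} C: relative friction {relative_friction(T):.2f}")
# -> with these invented parameters, friction peaks sharply near ~15 C
```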

Boreal, by accident or design, had found a rubber compound whose friction peaked much closer to room temperature than that of EBs. Boreal’s new climbing boot – the “Firé” – swept the marketplace. The increased friction, and the advantage this gave, was obvious both to the leading climbers of the day and to much more average performers. I was in the latter category, and succumbed to the trend. The improvement in performance the new shoes made possible was immediately tangible, the only downside being that Firés were cripplingly uncomfortable. Soon US and Italian competitors started selling boots with comparably high friction rubber that were actually foot-shaped.

Modern rock boots do make a difference, but this isn’t really the crucial technology that has enabled hard rock climbing. What’s made the biggest difference – both to the wider popularity of the sport and the achievements of its leading proponents – has been the development of technologies that allow one to fall off without dying.

Hold on, you might say here – wasn’t Alex Honnold climbing solo, without ropes, in a situation in which if he fell he would most certainly die? Yes, indeed, but Honnold didn’t get to be a good climber by doing a lot of soloing, he got to be a good soloist by doing a lot of climbing. Most of that climbing – especially the climbing where he was pushing himself – was done roped. To get himself ready for his El Cap solo, he spent hundreds of hours on the route, roped, working out and memorising all the moves.

When climbing started, every climb was effectively a solo, at least for the leader. Before the 2nd World War, climbing ropes were made of natural fibres – hemp or manila. They were strong – strong enough to hold a slip of a second on the rope. But they were brittle, and for the leader, any fall that would put a shock load on the rope was likely to break it. “The leader must not fall” was the stern instruction of books of the time. The knowledge that a fall would lead to death or serious injury was ever-present for a pre-war climber pioneering a new hard route, and it’s not difficult to imagine that this was a brake on progress.

As in other areas of technology, the war changed things. The new artificial fibre nylon was put into mass production for parachute cord for aircrew and airborne troops; its strength, resilience and elasticity made the postwar surpluses of the fibre ideal for making climbing ropes. Together with the invention of metal snap-links they made it possible to imagine a leader surviving a fall – the rope could be clipped to an anchor in the rock to make a “running belay”, limiting the length of the fall. In the USA and the European Alps, the anchors would usually be metal pegs hammered into cracks, while on the smaller crags of the UK a tradition developed of using jammed machine nuts threaded on loops of nylon.

By the 1960’s and 70’s, the likelihood was that a leader would survive a fall, but you wouldn’t want to do it too often. The job of arresting the fall went to the second, who would pass the rope round their back and use the force of their grip and the friction of the rope around their body to hold the fall. You had to be attentive, quick and decisive to do this without getting a bad friction burn, or at worst letting the rope go entirely. The crudest mechanical friction devices were devised in the early 70’s, and have now been developed to the point that a second no longer needs strength or skill to hold the rope of a falling climber. Meanwhile the leader would be tied on to the rope with a simple knot round the waist, making a fall painful – and a prolonged period of dangling, after a fall from overhanging rock, potentially fatal through asphyxiation. Simple but effective harnesses were developed in the 60’s and 70’s, which spread the force of arresting a fall onto the buttocks and thighs, and made the sudden stop at the end of a leader fall bearable, if not entirely comfortable.

In California, it was the particular character of the rock and the climbs, especially in Yosemite, that drove developments in the technology for anchoring the rope to the rock. Yvon Chouinard realised that the mild steel pegs used in the European Alps weren’t suitable for the hard granite of Yosemite, and he developed optimally shaped pegs from hard chrome-molybdenum alloy steel – the bongs, blades and leepers that I just about remember from my youth. But like other technological developments, this one had its downsides – the repeated placement and removal of these pegs from the cracks led to scarring and damage, which in the climate of heightened environmental awareness in the 60’s and 70’s led to some soul-searching by US climbers. A “clean-climbing” movement developed, with Chouinard himself one of its leaders. To replace steel pegs, the British tradition of using jammed machine nuts as anchors was developed further. Purpose-designed chocks and wedges were marketed, like Chouinard’s cunningly designed “hexcentrics”, which would cam under load to hold even in parallel sided cracks.

It was another Californian devotee of Yosemite who made the real breakthrough in clean climbing protection, though. Ray Jardine, an aerospace engineer, devised an ingenious spring-loaded camming device that was easily placed and would hold a fall even if placed in a parallel sided or slightly flared crack. These were patented and commercialised as “Friends”. Many developments of this idea have since been put on the market, and these form the basis of the “rack” of anchoring equipment that climbers carry today.
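
As an aside on why the geometry works (a sketch of my own; the specific angle and friction values are illustrative assumptions, not taken from Jardine’s patent): the lobes follow a logarithmic spiral, which presents a constant angle to the crack wall at every opening width, so the cam locks whenever the tangent of that angle is less than the friction coefficient between metal and rock.

```python
# Constant-angle (logarithmic spiral) cam lobe: r = r0 * exp(theta * tan(psi)).
# The cam angle psi and the friction coefficient are illustrative assumptions.
import math

PSI_DEG = 14.0     # assumed constant cam angle
MU = 0.35          # assumed aluminium-on-rock friction coefficient

def lobe_radius(r0, theta_rad):
    """Radius of the cam lobe at rotation theta: a logarithmic spiral."""
    return r0 * math.exp(theta_rad * math.tan(math.radians(PSI_DEG)))

required_mu = math.tan(math.radians(PSI_DEG))
print(f"holds if mu > {required_mu:.2f}; assumed mu = {MU} -> "
      f"{'holds' if MU > required_mu else 'slips'}")
print(f"radius doubles over {math.log(2) / required_mu:.1f} radians "
      f"of rotation")   # the range of crack widths a single cam covers
```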

It’s this combination of strong and resilient nylon ropes, able to absorb the energy of a long fall, automatic braking gadgets to hold the rope when a fall happens, reliable devices for anchoring the rope to the rock, and harnesses that spread the load of a fall across the climber’s body, that have got us to where we are today, where climbers can practise harder and harder routes, (mostly) safe in the knowledge that a fall won’t be fatal, or even that uncomfortable.

This is not to say that knowledge isn’t important, of course. All this equipment needs skill to use – and knowledge has helped in the sheer physical aspects of getting up steep rock. As well as the new technology, one of the causes of the big advances in rock climbing standards in the 1980’s was undoubtedly a change in attitude amongst leading climbers. Training was taken much more seriously than it had been before: training techniques were imported from athletics and gymnastics, artificial climbing walls were developed, and the discipline of trying out very hard moves close to the ground on boulders – pioneered by the American mathematician and gymnast John Gill – became popular.

I think one kind of knowledge is particularly important in climbing – and maybe in other areas of human endeavour, too. That’s simply the knowledge that something has already been done – the existence proof that a feat is possible. Guidebooks record that a climb has been done and where it goes, though not usually how to do it. To know in advance the physical details of how a climb is done – what climbers call “beta” – is considered to lessen the achievement of a subsequent ascent. But simply to know that the climb is possible (and have some idea of how hard it is going to be) is an important piece of information in itself.

How is knowledge transmitted? We have books – instructional books of technique, and guidebooks to particular climbing areas. And now we have the internet, so one can read and post questions on climbers’ internet forums. I’m not sure how much this has added to more traditional ways of conveying information – discussions on the most popular UK climbing forum seem to mostly consist of endless arguments about Brexit. But I do think there is one change that modern times have brought that makes a huge difference to knowledge transmission, and that is the advent of cheap air travel.

My first overseas climbing trips (in 1981 and 1982) were to the French Alps. These were hugely important to my development as a climber, and undoubtedly some part of that came from interactions with climbers from other countries with different traditions and different techniques. Big climbing centres tended to have well known places where climbers from different countries stayed and mixed (the squalid informal campsite known as Snell’s Field in the case of Chamonix, the legendary Camp 4 for Yosemite). I climbed with a couple of outstanding Australian climbers from the campsite while I was there; we picked up tips on big wall climbing from a Yosemite habitué, and I came home with half a dozen beautiful titanium ice screws, light, thin walled, and sharp. Such things were unobtainable in the West at the time; I’d bartered them from some East European climbers, but they had undoubtedly been knocked off after hours in some Soviet aircraft factory.

But getting to Chamonix had taken me nearly 24 hours on a bus. Nowadays climbers can take several holidays a year with easy and cheap air travel, to the sunshine in Spain or Greece or Thailand, the big mountains of the Himalayas or South America, desert climbing in Morocco, Jordan, or Oman, Nevada, Utah, or Arizona, to the subarctic conditions of Patagonia or Baffin Island, or to the more traditional centres like the Dolomites or Yosemite. This does lead to a rapid spread of attitudes and techniques. It’s a paradox, of course, that climbers, who love the wilderness and the world’s beautiful places, and are more environmentally conscious than most, make, through their flying, such an above average contribution to climate change. Can this go on?

So if John Cochrane has learnt the wrong lesson from rock climbing, what better lessons should we take away from all this?

Some economists love simple stories, especially when they support their ideological priors, but a bit of knowledge of history often reveals that the truth is somewhat more complicated. More importantly, perhaps, we should remember that technological innovation isn’t just about iPhones and artificial intelligence. All around us – in our homes, in everyday life, in our hobbies and pastimes – we can see, if we care to look, innovation of all kinds in products and in the materials that make them, innovations that collectively feed into overall economic growth. Technological innovation doesn’t have to be about giant leaps and moonshots – even mundane things like shoe soles and ropes tell a story of a whole series of incremental changes that together add up to progress.

And to return to Alex Honnold, perhaps the most important lesson a free-market loving economist should draw is that sometimes people will do extraordinary things without the motivation of money.

What do we mean by scientific productivity – and is it really falling?

This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”

I want to focus on the idea of scientific productivity – how it is defined, and how we can measure it – and whether it is declining – and if it is, what can we do about it?

The output of science increases exponentially, by some measures…

…but what do we get back from that? What is the productivity of the scientific enterprise – the output of science, on some measure, per unit input?

It depends on what we think the output of science is, of course.

We could be talking of some measure of the new science being produced and its impact within the scientific community.

But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?

There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists, that we have seen the best of economic growth and are living in a new age of stagnation.

Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling which brought us exponential growth in computing power in the 80’s and 90’s started to level off around 2004 and has since slowed to a crawl, despite continuing growth in resources devoted to it.

It’s the Industrial that enables the Artisanal

It’s come to this, even here. My village chippy has “teamed up” with a “craft brewery” in the next village to sell “artisanal ales” specially brewed to accompany one’s fish and chips. This prompts me to reflect – is this move from the industrial to the artisanal really a reversion to a previous, better world? I don’t think so – instead, craft beer is itself a product of modernity. It depends on capital equipment that is small scale, but dependent on high technology – on stainless steel, electrical heating and refrigeration, computer powered process control. And its ingredients aren’t locally grown and processed – the different flavours introduced by new hop varieties are the outcome of world trade. What’s going on here is not a repudiation of industrialisation, but its miniaturisation, the outcome of new technologies which erode previous economies of scale.

A craft beer from the Eyam Brewery, on sale at the Toll Bar Fish and Chip Shop, Stoney Middleton, Derbyshire.

Beer was one of the first industrial foodstuffs. In Britain, the domestic scale of early beer making began to be replaced by factory scale breweries in the 18th century, as soon as transport improved enough to allow the distribution of their products beyond their immediate locality. Burton-on-Trent was an early centre, whose growth was catalysed by the opening up of the Trent navigation in 1712. This allowed beer to be transported by water via Hull to London and beyond. By the late 18th century some 2000 barrels a year of Burton beer were being shipped to Baltic ports like Danzig and St Petersburg.

As in other process industries, this expansion was driven by fossil fuels. Coal from the nearby Staffordshire and Derbyshire coalfields provided process heat. The technological innovation of coking, which produced a purer carbon fuel which burnt without sulphur-containing fumes, was developed as early as 1640 in Derby, so coal could be used to dry malt without introducing off-flavours (this use of coke long predated its much more famous use as a replacement for charcoal in iron production).

By the late 19th century, Burton-on-Trent had become a world centre of beer brewing, producing more than 500 million litres a year, for distribution by the railway network throughout the country and export across the world. This was an industry that was fossil fuel powered and scientifically managed. Coal powered steam engines pumped the large volumes of liquid around, steam was used to provide controllable process heat, and most crucially the invention of refrigeration was the essential enabler of year-round brewing, allowing control of temperature in the fermentation process, by now scientifically understood by the cadre of formally trained chemists employed by the breweries. In a pint of Marston’s Pedigree or a bottle of Worthington White Shield, what one is tasting is the outcome of the best of 19th century food industrialisation, the mass production of high quality products at affordable prices.

How much of the “craft beer revolution” is a departure from this industrial past? The difference is one of scale – steam engines are replaced by electric pumps, coal fired furnaces by heating elements, and master brewers by thermostatic control systems. Craft beer is not a return to a preindustrial, artisanal age – instead it’s based on industrial techniques, miniaturised with new technology, and souped up by the products of world trade. This is a specific example of a point made more generally in Rachel Laudan’s excellent book “Cuisine and Empire” – so-called artisanal food comes after industrial food, and is in fact enabled by it.

What more general lessons can we learn from this example? The energy economy is another place where some people are talking about a transition from a system that is industrial and centralised to one that is small scale and decentralised – one might almost say “artisanal”. Should we be aiming for a new decentralised energy system – a world of windmills and solar cells and electric bikes and community energy trusts?

To some extent, I think this is possible and indeed attractive, leading to a greater sense of control and involvement by citizens in the provision of energy. But we should be under no illusions – this artisanal also has to be enabled by the industrial.

A Resurgence of the Regions: rebuilding innovation capacity across the whole UK

The following is the introduction to a working paper I wrote while recovering from surgery a couple of months ago. This brings together much of what I’ve been writing over the last year or two about productivity, science and innovation policy and the need to rebalance the UK’s innovation system to increase R&D capacity outside London and the South East. It discusses how we should direct R&D efforts to support big societal goals, notably the need to decarbonise our energy supply and refocus health related research to make sure our health and social care system is humane and sustainable. The full (53 page) paper can be downloaded here.

We should rebuild the innovation systems of those parts of the country outside the prosperous South East of England. Public investments in new translational research facilities will attract private sector investment, bring together wider clusters of public and business research and development, institutions for skills development, and networks of expertise, boosting innovation and leading to productivity growth. In each region, investment should be focused on industrial sectors that build on existing strengths, while exploiting opportunities offered by new technology. New capacity should be built in areas like health and social care, and the transition to low carbon energy, where the state can use its power to create new markets to drive the innovation needed to meet its strategic goals.

This would address two of the UK’s biggest structural problems: its profound disparities in regional economic performance, and a research and development intensity – especially in the private sector and for translational research – that is low compared to competitors. By focusing on ‘catch-up’ economic growth in the less prosperous parts of the country, this plan offers the most realistic route to generating a material change in the total level of economic growth. At the same time, it should make a major contribution to reducing the political and social tensions that have become so obvious in recent years.

The global financial crisis brought about a once-in-a-lifetime discontinuity in the rate of growth of economic quantities such as GDP per capita, labour productivity and average incomes; their subsequent decade-long stagnation signals that this event was not just a blip, but a transition to a new, deeply unsatisfactory, normal. A continuation of the current policy direction will not suffice; change is needed.

Our post-crisis stagnation has more than one cause. Some sources of pre-crisis prosperity have declined, and will not – and should not – come back. North Sea oil and gas production peaked around the turn of the century. Financial services provided a motor for the economy in the run-up to the global financial crisis, but this proved unsustainable.

Beyond the unavoidable headwinds imposed by the end of North Sea oil and the financial services bubble, the wider economy has disappointed too. There has been a general collapse in total factor productivity growth – the economy is less able to create higher value products and services from the same inputs than in previous decades. This is a problem of declining innovation in its broadest sense.

There are some industry-specific issues. The pharmaceutical industry, for example, has been the UK’s leading science-led industry, and was a major driver of productivity growth before 2007; since then it has been suffering from a world-wide malaise, in which lucrative new drugs seem harder and harder to find.

Yet many areas of innovation are flourishing, presenting opportunities to create new, high value products and services. It’s easy to get excited about developments in machine learning, the ‘internet of things’ and ‘Industrie 4.0’, in biotechnology, synthetic biology and nanotechnology, in new technologies for generating and storing energy.

But the productivity data shows that UK companies are not taking enough advantage of these opportunities. The UK economy is not able to harness innovation at a sufficient scale to generate the economic growth we need.

Up to now, the UK’s innovation policy has been focused on academic science. We rightly congratulate ourselves on the strength of our science base, as measured by the Nobel prizes won by UK-based scientists and the impact of their publications.

Despite these successes, the UK’s wider research and development base suffers from three faults:
• It is too small for the size of our economy, as measured by R&D intensity,
• It is particularly weak in translational research and industrial R&D,
• It is too geographically concentrated in the already prosperous parts of the country.

Science policy has been based on a model of correcting market failure, with an overwhelming emphasis on the supply side – ensuring strong basic science and a supply of skilled people. We need to move from this ‘supply side’ science policy to an innovation policy that explicitly creates demand for innovation, in order to meet society’s big strategic goals.

Historically, the main driver for state investment in innovation has been defence. Today, the largest fraction of government research and development supports healthcare – yet this is not done in a way that most effectively promotes either the health of our citizens or the productivity of our health and social care system.

Most pressingly, we need innovation to create affordable low carbon energy. Progress towards decarbonising our energy system is not happening fast enough; innovation is needed to decrease the price of low carbon energy, to increase its scale, and to improve energy efficiency.

More attention needs to be paid to the wider determinants of innovation – organisation, management quality, skills, and the diffusion of innovation as much as discovery itself. We need to focus more on the formal and informal networks that drive innovation – and in particular on the geographical aspects of these networks. They work well in Cambridge – why aren’t they working in the North East or in Wales?

We do have examples of new institutions that have catalysed the rebuilding of innovation systems in economically lagging parts of the country. Translational research institutions such as Coventry’s Warwick Manufacturing Group, and Sheffield’s Advanced Manufacturing Research Centre, bring together university researchers and workers from companies large and small, help develop appropriate skills at all levels, and act as a focus for inward investment.

These translational research centres offer models for new interventions that will raise productivity levels in many sectors – not just in traditional ‘high technology’ sectors, but also in areas of the foundational economy such as social care. They will drive the innovation needed to create an affordable, humane and effective healthcare system. We must also urgently reverse decades of neglect by the UK of research into new sustainable energy systems, to hasten the overdue transition to a low carbon economy. Developing such centres, at scale, will do much to drive economic growth in all parts of the country.

Continue to read the full (53 page) paper here (PDF).

Rebooting the UK’s nuclear new build programme

80% of our energy comes from burning fossil fuels, and that needs to change, fast. By the middle of this century we need to be approaching net zero carbon emissions, if the risk of major disruption from climate change is to be lowered – and the middle of this century is not very far away, when measured in terms of the lifetime of our energy infrastructure.

My last post – If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap – tried to quantify the scale of the problem: all our impressive recent progress in implementing wind and solar energy will be wiped out by the loss of 60 TWh/year of low-carbon energy over the next decade as the UK’s fleet of Advanced Gas Cooled Reactors is retired. Even with the most optimistic projections for the growth of wind and solar, without new nuclear build the prospect of decarbonising our electricity supply remains distant. And, above all, we always need to remember that the biggest part of our energy consumption comes from directly burning oil and gas – for transport, industry and domestic heating – and this needs to be replaced by more low carbon electricity. We need more nuclear energy.

The UK’s current nuclear new build plans are in deep trouble

All but one of our existing nuclear power stations will be shut down by 2030 – only the Pressurised Water Reactor at Sizewell B, rated at 1.2 GW, will remain. So, without any new nuclear power stations opening, around 60 TWh a year of low carbon energy will be lost. What is the current status of our nuclear new build programme? Here’s where we are now:

  • Hinkley Point C – 3.2 GW capacity, consisting of 2 Areva EPR units, is currently under construction, with the first unit due to be completed by the end of 2025.
  • Sizewell C – 3.2 GW capacity, consisting of 2 Areva EPR units, would be a duplicate of Hinkley C. The design is approved, but the project awaits site approval and an investment decision.
  • Bradwell B – 2-3 GW capacity. As part of the deal for Chinese support for Hinkley C, it was agreed that the Chinese state nuclear corporation CGN would install 2 (or possibly 3) Chinese-designed pressurised water reactors, the CGN HPR1000. Generic Design Assessment of the reactor type is currently in progress; site approval and a final investment decision are still needed.
  • Wylfa – 2.6 GW, 2 x 1.3 GW Hitachi ABWR. Generic Design Assessment has been completed, but the project has been suspended by the key investor, Hitachi.
  • Oldbury – 2.6 GW, 2 x 1.3 GW Hitachi ABWR. A duplicate of Wylfa; project suspended.
  • Moorside, Cumbria – 3.4 GW, 3 x 1.1 GW Westinghouse AP1000. GDA completed, but the project has been suspended by its key investor, Toshiba.

So this leaves us with three scenarios for the post-2030 period.

We can, I think, assume that Hinkley C is definitely happening – if that is the limit of our expansion of nuclear power, we’ll end up with about 24 TWh a year of low carbon electricity from nuclear, less than half the current amount.

With Sizewell C and Bradwell B, which are currently proceeding, though not yet finalised, we’ll have 78 TWh a year – this essentially replaces the lost capacity from our AGR fleet, with a small additional margin.

Only with the currently suspended projects – at Wylfa, Oldbury, and Moorside – would we be substantially increasing nuclear’s contribution to low carbon electricity, roughly doubling the current contribution, at 143 TWh per year.
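
As a sanity check on those round numbers, here’s a minimal sketch (the 85% capacity factor and the 2.9 GW figure for Bradwell B are my assumptions, with Sizewell B’s 1.2 GW kept running in the second and third scenarios):

```python
# Rough reconstruction of the three scenarios above. The capacity factor and
# the Bradwell B figure are assumed plausible values, chosen because they
# roughly reproduce the post's round numbers.
HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.85  # assumed

def annual_twh(gw):
    """Annual electricity output (TWh) for gw of capacity at the assumed load."""
    return gw * CAPACITY_FACTOR * HOURS_PER_YEAR / 1000.0

scenarios = {
    "Hinkley C alone":                      3.2,
    "+ Sizewell C, Bradwell B, Sizewell B": 3.2 + 3.2 + 2.9 + 1.2,
    "+ Wylfa, Oldbury, Moorside":           3.2 + 3.2 + 2.9 + 1.2 + 2.6 + 2.6 + 3.4,
}
for name, gw in scenarios.items():
    print(f"{name}: ~{annual_twh(gw):.0f} TWh/year")
# -> ~24, ~78 and ~142 TWh/year respectively
```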

Transforming the economics of nuclear power

Why is nuclear power so expensive – and how can it be made cheaper? What’s important to understand about nuclear power is that its costs are dominated by the upfront capital cost of building a nuclear power plant, together with the provision that has to be made for safely decommissioning the plant at the end of its life. The actual cost of running it – including the cost of the nuclear fuel – is, by comparison, quite small.

Let’s illustrate this with some rough indicative figures. The capital cost of Hinkley C is about £20 billion, and the cost of decommissioning it at the end of its 60 year expected lifespan is £8 billion. For the investors to receive a guaranteed return of 9%, the plant has to generate a cashflow of £1.8 billion a year to cover the cost of capital. If the plant is able to operate at 90% capacity, this amounts to about £72 a MWh of electricity produced. If one adds on the recurrent costs – for operation and maintenance, and the fuel cycle – of about £20 a MWh, this gets one to the so-called “strike price” – which in the terms of the deal with the UK government the project has been guaranteed – of £92 a MWh.

Two things come out of this calculation – firstly, this cost of electricity is substantially more expensive than the current wholesale price (about £62 per MWh, averaged over the last year). Secondly, nearly 80% of the price covers the cost of borrowing the capital – and 9% seems like quite a high rate at a time of historically low long-term interest rates.

EDF itself can borrow money on the bond market at 5%. At 5%, the cost of financing the capital comes to about £1.1 billion a year, which would be achieved at an electricity price of a bit more than £60 a MWh. Why the difference? In effect, the project’s investors – the French state owned company EDF, with a 2/3 stake, the rest being held by the Chinese state owned company CGN – receive about £700 million a year to compensate them for the risks of the project.

Of course, the UK state itself could have borrowed the money to finance the project. Currently, the UK government can borrow at 1.75% fixed for 30 years. At 2%, the financing costs would come down from £1.8 billion a year to £0.7 billion a year, requiring a break-even electricity price of less than £50 a MWh. Of course, this requires the UK government to bear all the risk for the project, and this comes at a price. It’s difficult to imagine that that price is more than £1 billion a year, though.
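
Here’s that arithmetic as a minimal sketch (a reconstruction, treating the £20 billion as repaid over the 60-year life with a standard capital-recovery annuity and leaving out the decommissioning provision, so the outputs only roughly match the rounded figures above):

```python
# Break-even electricity price vs cost of capital, for Hinkley Point C-like
# numbers. Decommissioning is ignored here, which is one reason these outputs
# only approximately match the rounded figures in the text.
CAPITAL = 20e9            # £ build cost
LIFETIME_YEARS = 60       # expected operating life
CAPACITY_GW = 3.2
CAPACITY_FACTOR = 0.90
RUNNING_COSTS = 20.0      # £/MWh, operations + fuel cycle

mwh_per_year = CAPACITY_GW * 1e3 * CAPACITY_FACTOR * 8760  # ~25.2m MWh

def annual_capital_charge(rate):
    """Level annual payment repaying CAPITAL over the plant life (annuity)."""
    return CAPITAL * rate / (1 - (1 + rate) ** -LIFETIME_YEARS)

for rate in (0.09, 0.05, 0.02):  # investor return / EDF bonds / ~gilt rate
    charge = annual_capital_charge(rate)
    price = charge / mwh_per_year + RUNNING_COSTS
    print(f"{rate:.0%}: £{charge / 1e9:.1f}bn/year -> ~£{price:.0f}/MWh")
# -> 9%: £1.8bn/year -> ~£92/MWh; 5%: £1.1bn/year -> ~£62/MWh;
#    2%: £0.6bn/year -> ~£43/MWh
```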

If part of the problem of the high cost of nuclear energy comes from the high cost of capital baked into the sub-optimal way the Hinkley Point deal has been structured, it remains the case that the capital cost of the plant in the first place seems very high. The £20 billion cost of Hinkley Point is indeed high, both in comparison to the cost of previous generations of nuclear power stations, and in comparison with comparable nuclear power stations built recently elsewhere in the world.

Sizewell B cost £2 billion at 1987 prices for 1.2 GW of capacity – scaling that up to 3.2 GW and putting it in current money suggests that Hinkley C should cost about £12 billion.
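
The scaling behind that estimate, sketched (the price-index factor is an assumption; something like 2.3 is plausible for 1987 to today, and capacity scaling is taken as linear):

```python
# Scaling Sizewell B's cost to Hinkley C's size. The inflation factor is an
# assumption (~2.3 roughly covers 1987 to the late 2010s on standard UK price
# indices); scaling is taken as linear in capacity.
SIZEWELL_B_COST_1987 = 2e9   # £, for 1.2 GW
INFLATION_FACTOR = 2.3       # assumed 1987 -> today
estimate = SIZEWELL_B_COST_1987 * INFLATION_FACTOR * (3.2 / 1.2)
print(f"~£{estimate / 1e9:.0f}bn")   # ~£12bn, versus ~£20bn for Hinkley C
```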

Some of the additional cost can undoubtedly be ascribed to the new safety features added to the EPR. The EPR is an evolution of the original pressurised water reactor design; all pressurised water reactors – indeed all light water reactors (which use ordinary, non-deuterated, water as both moderator and coolant) – are susceptible to “loss of coolant accidents”. In one of these, if the circulating water is lost, even though the nuclear reaction can be reliably shut down, the residual heat from the radioactive material in the core can be great enough to melt the reactor core, and to lead to steam reacting with metals to create explosive hydrogen.

The experience of loss of coolant accidents at Three Mile Island and (more seriously) Fukushima has prompted new so-called generation III or gen III+ reactors to incorporate a variety of new features to mitigate potential loss-of-coolant accidents, including methods for passive backup cooling systems and more layers of containment. The experience of 9/11 has also prompted designs to consider the effect of a deliberate aircraft crash into the building. All these extra measures cost money.

But even nuclear power plants of the same design cost significantly more to build in Europe and the USA than they do in China or Korea – more than twice as much, in fact. Part of this is undoubtedly due to higher labour costs (including both construction workers and engineers and other professionals). But there are factors leading to these other countries’ lower costs that can be emulated in the UK – they arise from the fact that both China and Korea have systematically got better at building reactors by building a sequence of them, and capturing the lessons learnt from successive builds.

In the UK, by contrast, no nuclear power station has been built since 1995, so in terms of experience we’re starting from scratch. And our programme of nuclear new build could hardly have been designed in a way that made it more difficult to capture these benefits of learning, with four quite different designs being built by four different sets of contractors.

We can learn the lessons of previous experiences of nuclear builds. The previous EPR installations in Olkiluoto, Finland, and Flamanville, France – both of which have ended up hugely over-budget and late – indicate what mistakes we should avoid, while the Korean programme – which is to-date the only significant nuclear build-out to significantly reduce capital costs over the course of the programme – offers some more positive lessons. To summarise –

  • The design needs to be finalised before building work begins – late changes impose long delays and extra costs;
  • Multiple units should be installed on the same site;
  • A significant effort to develop proven and reliable supply chains and a skilled workforce pays big dividends;
  • Poor quality control and inadequate supervision of sub-contractors leads to long delays and huge extra costs;
  • A successful national nuclear programme involves the sequential installation of identical designs on different sites, retaining the learning and skills of the construction teams;
  • Modular construction and manufacturing techniques should be used as much as possible.

The last point supports the more radical idea of making the entire reactor in a factory rather than on-site. This has the advantage of ensuring that all the benefits of learning-by-doing are fully captured, and allows much closer control over quality, while making easier the kind of process innovation that can make significant reductions in manufacturing cost.

The downside is that this kind of modular manufacturing is only possible for reactors on a considerably smaller scale than the >1 GW capacity units that conventional programmes install – these “Small Modular Reactors” – SMRs – will be in the range of 10’s to 100’s of MW. The driving force for increasing the scale of reactor units has been to capture economies of scale in running costs and fuel efficiencies. SMRs will sacrifice some of these economies of scale, with the promise that economies of learning will drive down capital costs enough to compensate. Given that, for current large scale designs, the total cost of electricity is dominated by the cost of capital, this is an argument that is at least plausible.
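
To see what the economies-of-learning argument amounts to quantitatively, here’s a toy sketch (every number in it is an assumption of mine; nothing like this has yet been demonstrated for reactors, which is exactly why a demonstrator matters):

```python
# Wright's-law learning curve: each doubling of cumulative units built cuts
# unit cost by a fixed "learning rate". The 15% rate and the first-of-a-kind
# cost are illustrative assumptions, not estimates for any real SMR design.
import math

FIRST_UNIT_COST = 2.0e9   # £, hypothetical first-of-a-kind ~400 MW SMR
LEARNING_RATE = 0.15      # assumed fractional cost reduction per doubling

b = -math.log2(1 - LEARNING_RATE)       # learning exponent
for n in (1, 2, 4, 8, 16, 32):
    cost_n = FIRST_UNIT_COST * n ** -b  # cost of the n-th unit
    print(f"unit {n:2d}: £{cost_n / 1e9:.2f}bn (£{cost_n / 400e6:.1f}/W)")
# -> unit costs fall from £2.0bn (£5.0/W) to ~£0.9bn (~£2.2/W) by unit 32
```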

What the UK should do to reboot its nuclear new build programme

If the UK is to stand any chance at all of reducing its net carbon emissions close to zero by the middle of the century, it needs both to accelerate offshore wind and solar, and get its nuclear new build programme back on track.

It was always a very bad idea to try and implement a nuclear new build programme with more than one reactor type. Now that the Hinkley Point C project is underway, our choice of large reactor design has in effect been made – it is the Areva EPR.

The EPR is undoubtedly a complex and expensive design, but I don’t think there is any evidence that it is fundamentally different in this from other Gen III+ designs. Recent experience of building the rival Westinghouse AP1000 design in the USA doesn’t seem to be any more encouraging. On the other hand, the suggestion of some critics that the EPR is fundamentally “unbuildable” has clearly been falsified by the successful completion of an EPR unit in Taishan, China – this was connected to the grid in December last year. The successful building of both EPRs and AP1000s in China suggests, rather, that the difficulties seen in Europe and the USA arise from systematic problems of the kind discussed in the last section rather than a fundamental flaw in any particular reactor design.

The UK should therefore do everything to accelerate the Sizewell C project, where two more EPRs are scheduled to be built. This needs to happen on a timescale that ensures continuity between the construction of Hinkley C and Sizewell C, to retain the skills and supply chains that are developed and to make sure all the lessons learnt in the Hinkley build are acted on. And it should be financed in a way that’s less insanely expensive than the arrangements for Hinkley Point C, accepting the inevitability that the UK government will need to take a considerable stake in the project.

In an ideal world, every other large nuclear reactor built in the UK in the current programme should also be an EPR. But a previous government apparently made a commitment to the Chinese state-owned enterprise CGN that, in return for taking a financial stake in the Hinkley project, it should be allowed to build a nuclear power station at Bradwell, in Essex, using the Chinese CGN HPR1000 design. I think it was a bad idea in principle to allow a foreign government to have such close control of critical national infrastructure, but if this decision has to stand, one can find silver linings. We should respect and learn from the real achievements of the Chinese in developing their own civil nuclear programme. If the primary motivation of CGN in wanting to build an HPR1000 is to improve its export potential by demonstrating its compliance with the UK’s independent and rigorous nuclear regulations, then that goal should be supported.

We should move quickly to put together replacement plans for the other three sites – Wylfa, Oldbury and Moorside. The Wylfa project was the furthest advanced, and a replacement scheme based on installing two further EPR units there should be put together to begin shortly after the Sizewell C project, designed explicitly to drive further savings in capital costs by maximising learning by doing.

The EPR is not a perfect technology, but we can’t afford to wait for a better one – the urgency of climate change means that we have to start building right now. But that doesn’t mean we should accept that no further technological progress is possible. We have to be clear about the timescales, though. We need a technology that is capable of deployment right now – and for all the reasons given above, that should be the EPR – but we need to be pursuing future technologies both at the demonstration stage, and at the earlier stages of research and development. Technologies ready for demonstration now might be deployed in the 2030’s, while anything that’s still in the R&D stage now realistically is not likely to be ready to be deployed until 2040 or so.

The key candidate for a demonstration technology is a light water small modular reactor. The UK government has been toying with the idea of small modular reactors since 2015, and now a consortium led by Rolls-Royce has developed a design for a modular 400 MW pressurised water reactor, with an ambition to enter the generic design approval process in 2019 and to complete a first of a kind installation by 2030.

As I discussed above, I think the arguments for small modular are at the very least plausible, but we won’t know for sure how the economics work out until we try to build one. Here the government needs to play the important role of being a lead customer and commission an experimental installation (perhaps at Moorside?).

The first light water power reactors came into operation in 1960 and current designs are direct descendants of these early precursors; light water reactors have a number of sub-optimal features that are inherent to the basic design, so this is an instructive example of technological lock-in keeping us on a less-than-ideal technological trajectory.

There are plenty of ideas for fission reactors that operate on different principles – high temperature gas cooled reactors, liquid salt cooled reactors, molten salt fuelled reactors, sodium fast reactors, to give just a few examples. These concepts have many potential advantages over the dominant light water reactor paradigm. Some should be intrinsically safer than light water reactors, relying less on active safety systems and more on an intrinsically fail-safe design. Many promise better nuclear fuel economy, including the possibility of breeding fissile fuel from non-fissile elements such as thorium. Most would operate at higher temperatures, allowing higher conversion efficiencies and the possibility of using the heat directly to drive industrial processes such as the production of hydrogen.

But these concepts are as yet undeveloped, and it will take many years and much money to convert them into working demonstrators. What should the UK’s role in this R&D effort be? I think we need to accept the fact that our nuclear fission R&D effort has been so far run down that it is not realistic to imagine that the UK can operate independently – instead we should contribute to international collaborations. How best to do that is a big subject, beyond the scope of this post.

There are no easy options left

Climate change is an emergency, yet I don’t think enough people understand how difficult the necessary response – deep decarbonisation of our energy systems – will be. The UK has achieved some success in lowering the carbon intensity of its economy. Part of this has come from, in effect, offshoring our heavy industry. More real gains have come from switching electricity generation from coal to gas, while renewables – particularly offshore wind and solar – have seen impressive growth.

But this has been the easy part. The transition from coal to gas is almost complete, and the ambitious planned build-out of offshore wind to 2030 will have occupied a significant fraction of the available shallow water sites. Completing the decarbonisation of our electricity sector without nuclear new build will be very difficult – but even if that is achieved, it doesn’t even bring us halfway to the goal of decarbonising our energy economy. 60% of our current energy consumption comes from directly burning oil (for cars and trucks) and gas (for industry and heating our homes); much of this will need to be replaced by low-carbon energy, meaning that our electricity sector will have to be substantially expanded.

Other alternative low carbon energy sources are unpalatable or unproven. Carbon capture and storage has never yet been deployed at scale, and represents a pure overhead on existing power generation technologies, needing both a major new infrastructure to be built and increased running costs. Scenarios that keep global warming below 2° C need so-called “negative emissions technologies” – which don’t yet exist, and make no economic sense without a degree of worldwide cooperation which seems difficult to imagine at the moment.

I understand why people are opposed to nuclear power – civil nuclear power has a troubled history, which reflects its roots in the military technologies of nuclear weapons, as I’ve discussed before. But time is running out, and the necessary transition to a zero carbon energy economy leaves us with no easy options. We must accelerate the deployment of renewable energies like wind and solar, but at the same time move beyond nuclear’s troubled history and reboot our nuclear new build programme.

Notes on sources
For an excellent overall summary of the mess that is the UK’s current new build programme, see this piece by energy economist Dieter Helm. For the specific shortcomings of the Hinkley Point C deal, see this National Audit Office report (and at the risk of saying “I told you so”, this is what I wrote 5 years ago: The UK’s nuclear new build: too expensive, too late). For the lessons to be learnt from previous nuclear programmes, see Nuclear Lessons Learnt, from the Royal Academy of Engineering. This MIT report – The Future of Nuclear in a carbon constrained world – has much that is useful to say about the economics of nuclear power now and about the prospects for new reactor types. For the need for negative emissions technologies in scenarios that keep global warming below 2° C, see Gasser et al.

How inevitable was the decline of the UK’s Engineering industry?

My last post identified manufacturing as one of three sectors in the UK which combined material scale relative to the overall size of the economy with a long term record of improving total factor productivity. Yet, as is widely known, manufacturing’s share of the economy has been in long term decline, from 27% in 1970 to 10.6% in 2014. Manufacturing’s share of employment has fallen even further, as a consequence of its above-average rate of improvement in labour productivity. This fall in the importance of manufacturing has been a common feature of all developed economies, yet the UK has seen the steepest decline.

This prompts two questions – was this decline inevitable, and does it matter? A recent book by industry veteran Tom Brown – Tragedy and Challenge: an inside view of UK Engineering’s Decline and the Challenge of the Brexit Economy – makes a strong argument that this decline wasn’t inevitable, and that it does matter. It’s a challenge to conventional wisdom, but one that’s rooted in deep experience. Brown is hardly the first to identify as the culprits the banks, fund managers, and private equity houses collectively described as “the City” – but his detailed, textured description of the ways in which these institutions have exerted their malign influence makes a compelling charge sheet against the UK economy’s excessive financialisation.

Brown’s focus is not on the highest performing parts of manufacturing – chemicals, pharmaceuticals and aerospace – but on what he describes as the backbone of the manufacturing sector – medium technology engineering companies, usually operating business-to-business, selling the components of finished products in highly competitive, international supply chains. The book is a combination of autobiography, analysis and polemic. The focus of the book reflects Brown’s own experience managing engineering firms in the UK and Europe, and it’s his own personal reflections that provide a convincing foundation for his wider conclusions.

    His analysis rehearses the decline of the UK’s engineering sector, pointing to the wider undesirable consequences of this decline, both at the macro level, in terms of the UK’s overall declining productivity growth and its worsening balance of payments position, and at the micro level. He is particularly concerned by the role of the decline of manufacturing in hollowing out the mid-level of the jobs market, and exacerbating the UK’s regional inequality. He talks about the development of a “caste system of the southern Brahmins, who can’t be expected to leave the oxygen of London, and the northern Untouchables who should consider themselves lucky just to have a job”.

    This leads on to his polemic – that the decline of the UK’s engineering firms was not inevitable, and that its consequences have been severe and regrettable, and will be difficult to reverse.

    Brown is not blind to the industry’s own failings. Far from it – the autobiographical sections make clear what he saw was wrong with the UK’s engineering industry at the beginning of his career. The quality of management was terrible and industrial relations were dreadful; he’s clear that, in the 1970s, the unions hastened the industry’s decline. But you get the strong impression that he believes management and unions at the time deserved each other; a chronic lack of investment in new plant and machinery, and a complete failure to develop the workforce, led to a severe loss of competitiveness.

    The union problem ended with Thatcher, but the decline continued and accelerated. Like many others, Brown draws an unfavourable comparison between the German and British traditions of engineering management. We hear a lot about the Mittelstand, but it’s really helpful to see in practice what the cultural and practical differences are. For example, Brown writes “German managers tend to be concerned about their people, and far slower to lay off in a downturn. Their training of both management and shop-floor employees is vastly better than the UK… in contrast many UK employers have expected skilled people to be available on demand, and if they fired them then they could rehire at will like the gaffer in the old shipyards”.

    For Brown, it’s no longer the unions that are the problem – it’s the City. It’s fair to say that he takes a dim view of the elevated position of the financial services sector since the Big Bang – “Overall the City is a major source of problems – to UK engineering, and to society as a whole. Much that has happened there is crazy, and still is. Many of our brightest and best have been sucked in and become personally corrupted.”

    But where his book adds real value is in going beyond the rhetoric to fill out the precise details of exactly how the City serves engineering firms so badly. To Brown, the fund managers and private equity houses that exert control over firms dictate strategies that are usually pretty much the opposite of what would be required for long-term growth. Investment in new plant and equipment is starved by an emphasis on short-term results, and firms are forced into futile mergers and acquisitions activity, which generates big fees for the advisors but is almost always counterproductive for the firms’ long-term sustainability, because it pulls them away from developing focused, long-term strategies. These criticisms echo many made by John Kay in his 2012 report, which Brown cites with approval, combined with disappointment that so few of its recommendations have been implemented.

    “I do not suffer fools gladly”, says Brown, a comment which sets the tone for his discussion of the fund management industry. While he excoriates fund managers for their lack of diligence and technical expertise, he condemns the lending banks for outright unethical and predatory behaviour, deliberately driving distressed companies into receivership, all the time collecting fees for themselves and their favoured partners, while stiffing the suppliers and trade creditors. The well-publicised malpractice of RBS’s “Global Restructuring Group” offers just one example.

    One very helpful section of the book discusses the way Private Equity operates. Brown makes the very important point that not enough people understand the difference between Venture Capital and Private Equity. The former, Brown believes, represents technically sophisticated investors creating genuine new value –
    “investing real equity, taking real risks, and creating value, not just transferring it”.

    But what too many politicians, and too much of the press, fail to realise is that genuine Venture Capital in the UK is a very small sector – in 2014, only £0.3 billion out of a total £4.3 billion invested by BVCA members fell into this category. Most of the investment is Private Equity, in which the money goes into existing assets.

    “The PE houses’ basic model is to buy companies as cheaply as possible, seek to “enhance” them, and then sell them for as much as possible in only three years’ time, so it is extremely short-termist. They “invest” money in buying the shares of these companies from the previous owners, but they invest as little as possible into the actual companies themselves; this crucial distinction is often completely misunderstood by the government and the media, who applaud the PE houses for the billions they are “investing” in British industry… in fact, much more cash is often extracted from these companies in dividends than is ever invested in them”.

    To Brown, much Private Equity is simply a vehicle for large scale tax avoidance, through eliding the distinction between debt and equity in “complex structures that just adhere to the letter of the law”. These complex structures of ownership and control lead to a misalignment of risk and reward – when their investments fail, as they often do, the PE houses get some of their money back, since it is secured debt, while trade suppliers, employees and the taxpayer get stiffed.
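
    To make the debt-versus-equity point concrete, here is a stylised sketch of the tax arithmetic – all the figures and the 19% tax rate are invented for illustration, and nothing here comes from Brown’s book:

    ```python
    # Illustrative sketch (invented numbers): how replacing equity with
    # shareholder debt shrinks a firm's taxable profit. Interest on debt is
    # deductible before corporation tax; dividends on equity are not.

    EBIT = 10.0              # operating profit, £m (hypothetical)
    CORPORATION_TAX = 0.19   # hypothetical flat rate for illustration

    def tax_paid(ebit: float, interest: float) -> float:
        """Tax due after deducting interest on debt (floored at zero)."""
        taxable = max(ebit - interest, 0.0)
        return taxable * CORPORATION_TAX

    # Equity-financed firm: no interest to deduct.
    print(f"All-equity: tax = £{tax_paid(EBIT, interest=0.0):.2f}m")   # £1.90m

    # Post-buyout: £90m of shareholder loans at 10% eats most of the profit.
    print(f"Leveraged:  tax = £{tax_paid(EBIT, interest=9.0):.2f}m")   # £0.19m
    ```

    The mechanism, not the numbers, is the point: the more of the purchase price that is structured as loans from the new owners rather than equity, the less taxable profit the operating company reports, while the returns flow out as interest instead of dividends.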

    To be more positive, what does Brown regard as the ingredients for success for an engineering firm? His list includes:

  • an international outlook, stressing the importance of being in the most competitive markets to understand your customers and the directions of the wider industry;
  • a long-term vision for growth, stressing innovation, R&D, and investment in latest equipment;
  • conservative finance, keeping strong balance sheet to avoid being knocked off course by the inevitable ups and downs of the markets, allowing the firm to keep control of its own destiny;
  • a focus on the quality of people – with managements who understand engineering and are not just from a financial background, and excellent training for the shop floor workers.
    The book focuses on manufacturing and engineering, but I suspect many of its lessons have a much wider applicability. People interested in economic growth and industrial strategy necessarily, and rightly, focus on statistics, but this book offers an invaluable additional dimension of ground truth to these discussions.