Revisiting the UK’s nuclear AGR programme: 3. Where next with the UK’s nuclear new build programme? On rebuilding lost capabilities, and learning wider lessons

This is the third and concluding part of a series of blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects.

In the second post, “What led to the AGR decision? On nuclear physics – and nuclear weapons” I turned to consider the technical and political issues that led to this decision.

In this post, I bring the story up to date, discussing why post-2010 plans for new nuclear build have largely failed, and look to the future, with new ambitions for small modular reactors – and, ironically, a potential return to high temperature, gas cooled reactors that represent an evolution of the AGR.

Into the 2010’s and beyond – the UK’s failed Nuclear New Build programme

In the early 2010’s, the Coalition Government developed an ambitious plan to replace the UK’s ageing nuclear fleet with new light water reactors to be built on the existing nuclear sites, involving four different designs from four different vendors. The French state nuclear company, EDF, was to build two of its next generation pressurised water reactors – the European Pressurised Reactor (EPR) – at Hinkley, and another two at Sizewell. The Chinese state nuclear corporation, CGN, would install two (or possibly three) of its own PWR designs at Bradwell. At Moorside, in Cumbria, Toshiba/Westinghouse would build three of its AP1000 PWRs. At Wylfa, in North Wales, Hitachi would build two Advanced Boiling Water Reactors, with another two ABWRs to be built at Oldbury. In total this would give 18 GW of new nuclear capacity, producing roughly double the output of the AGR fleet. In 2013, this programme formally got underway, with the announcement of a deal with EDF to deliver the first of these new plants, at Hinkley Point.

This programme has largely failed. A decade on, only one project is under construction – Hinkley Point C, where the best estimate for when the two EPRs will come into service is 2030. The cost for this 3.2 GW capacity is now estimated as being between £31 bn and £34 bn, in 2015 prices, compared to an original estimate of £20 bn. To put this into context, the last nuclear power station built in the UK, the PWR at Sizewell B, cost about £2 bn, in 1987 prices, for a 1.2 GW unit. Scaling this to the 3.2 GW capacity of the Hinkley Point project, and accounting for inflation, this would correspond to about £12 bn in 2015 prices. Where has this near-trebling of nuclear construction cost since Sizewell B come from? There are essentially two broad classes of reasons.
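The comparison above can be reproduced with some back-of-envelope arithmetic. The inflation factor below is an assumption, chosen so that the scaled Sizewell B figure matches the roughly £12 bn quoted in the text:

```python
# Rough reproduction of the Sizewell B vs Hinkley Point C cost comparison.
# The 1987->2015 inflation factor is an assumed, deflator-style figure,
# backed out so the result matches the ~GBP 12 bn in the text.

sizewell_b_cost_1987 = 2.0      # GBP bn, 1987 prices
sizewell_b_capacity = 1.2       # GW
hinkley_c_capacity = 3.2        # GW
inflation_1987_to_2015 = 2.25   # assumed inflation factor

# Scale Sizewell B linearly to Hinkley C's capacity, then inflate to 2015 prices
scaled_2015 = (sizewell_b_cost_1987
               * (hinkley_c_capacity / sizewell_b_capacity)
               * inflation_1987_to_2015)
print(f"Sizewell B scaled to 3.2 GW, 2015 prices: ~GBP {scaled_2015:.0f} bn")

# Compare with the current Hinkley Point C estimates (2015 prices)
for estimate in (31, 34):
    print(f"GBP {estimate} bn is {estimate / scaled_2015:.1f}x the scaled baseline")
```

On these figures Hinkley Point C comes out at roughly 2.6 to 2.8 times the inflation-adjusted, capacity-scaled Sizewell B cost.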

Firstly, more recent designs of pressurised water reactor, such as the EPR, or the Westinghouse AP1000, have a number of new safety features, to mitigate some of the fundamental weaknesses of the pressurised water reactor design, particularly its vulnerability to loss of coolant accidents. These new features include methods for passive cooling in the case of loss of power to the main cooling system, a “core catcher” system which contains molten core material in the event of a meltdown, and more robust containment systems, designed to resist, for example, an aircraft crashing into the reactor building. These new features all add unavoidable extra cost.

In addition to these unavoidable cost increases, some of the increase in construction cost must reflect a substantial real reduction in the UK’s ability to deliver a big complex project like a nuclear power station. One would hope that, if subsequent power stations are built to the same design, with the construction teams kept in place, these costs could be reduced in the light of experience, the development of functional supply chains, and the creation of a skilled workforce.

A sister plant to Hinkley Point, at Sizewell, has received a nuclear site license, but awaits a final investment decision. The capital for Hinkley Point C was provided entirely by its investors, which included the French state-owned energy company EDF and the Chinese state nuclear company CGN, in return for a guarantee of a fixed price for the electricity the plant generated over the first 35 years of operation. Thus the cost of the overrun in budget is borne by the investors, not the UK government or UK consumers. The deal was constructed in a way that was very favourable to the investors, so there was some cushion there, but the experience of Hinkley Point C means that it’s now impossible to attract investors to build further power stations on these terms. The financing for Sizewell C, if it goes ahead, will involve more direct UK state investment, as well as payments to the company building it while the reactor is under construction. These up-front payments will be added to electricity consumers’ bills through the so-called “Regulated Asset Base” mechanism, reducing the cost to the company of borrowing money during the long construction period.

So, sixteen years on from the in-principle commitment to return to nuclear power, no plant has yet been completed, and the best that can be hoped for from the plan to build 18 GW of new capacity is that we will have 6.4 GW of capacity from Hinkley C, and Sizewell C, if the latter goes ahead.

Why has the UK’s nuclear new build programme failed so badly? The original plans were misconceived on many levels. The plan to involve the Chinese state so closely seemed naive at the time, and given the changed geopolitical environment since then, it now seems almost unbelievable that a UK government could countenance it. The idea of having multiple competing vendors and designs makes it much more difficult to drive costs down through “learning by doing”; the most successful build-outs of nuclear power – in France and Korea – have relied on “fleet build” – sequential installations of standardised designs. And the reliance on overseas investors and overseas designs meant that the UK had no control over the supply chain, meaning that little of the high value work involved in the programme would benefit the UK economy.

At the root of this failure were the UK government’s unwise ideological commitments to privatised energy markets, making it resist any subsidies for nuclear power, and refuse to issue new government debt to pay for infrastructure. The legacy of the run-down of the UK’s civil nuclear programme in the 1990’s was a lack of significant UK government expertise in the area, making it an uninformed and naive customer, and a lack of an industry in the UK in a position to benefit from the expenditure.

Could there be another way? Since 2014, the UK government has expressed interest in the idea of small modular reactors (SMRs), and has given some support for design studies, with the UK company Rolls-Royce setting up a unit to commercialise them.

Back to the future – hopes for light water small modular reactors

There’s been a seemingly inexorable trend towards larger and larger pressurised water reactors – and, as we have seen at Hinkley C, that trend of increasing size has been accompanied by a dismal record of cost overruns and construction delays. There are, in principle, economies of scale in operating costs to be gained with very large units. But, as I’ve stressed above, the economics of nuclear power is dominated by the upfront capital cost of building reactors in the first place. If one, instead, built multiple smaller reactors, small enough for much of the construction to take place in factories, where manufacturing processes could be optimised over multiple units, one might hope to drive the costs down through “learning by doing”. This is the logic behind the enthusiasm for small modular reactors.
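The “learning by doing” argument is often formalised as Wright’s law: each doubling of cumulative production cuts unit cost by a fixed fraction. A minimal sketch, with a purely illustrative 10% learning rate (not a figure from any SMR programme):

```python
import math

def unit_cost(n, first_unit_cost, learning_rate=0.10):
    """Wright's-law unit cost: each doubling of cumulative output
    reduces cost by `learning_rate` (10% here, illustrative only)."""
    b = math.log(1 - learning_rate) / math.log(2)  # learning exponent
    return first_unit_cost * n ** b

# Cost of the nth factory-built module, relative to the first
for n in (1, 2, 4, 8, 16):
    print(f"unit {n:2d}: {unit_cost(n, 1.0):.3f} of first-unit cost")
```

Under this assumption the sixteenth unit costs about two-thirds of the first – the kind of trajectory that fleet build of a standardised design is intended to capture, and that one-off megaprojects cannot.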

There’s nothing new about small pressurised water reactors – by the standards of today’s power reactors, Admiral Rickover’s submarine reactors were tiny. Significantly, as I discussed above, the only remaining UK capability in nuclear reactors is to be found in Rolls-Royce, the company that makes reactors for the Royal Navy’s submarines. But the design criteria for a submarine reactor and for a power reactor are very different – while the experience of designing and manufacturing submarine reactors will have some general value in the civil sector, the design of a civil small modular reactor will need to be very different to a submarine reactor.

Rolls-Royce is one of five companies currently bidding for a role in a UK civil SMR programme. Its design has currently passed the second of three stages in the process of getting regulatory approval for the UK market. The Rolls-Royce proposal is for a 470 MWe pressurised water reactor, using conventional PWR fuel of low enrichment (in contrast to the very highly enriched fuel used in submarine reactors). The design is entirely new, though technically rather conservative.

A power output of 470 MWe is not, in fact, that small – it is very much in the range of reactor powers of civil PWRs that were being built in the early 1970’s – compare, for example, the VVER-440 reactors built by the USSR and widely installed and operating in the former USSR and Eastern Europe. The Rolls-Royce design, in contrast to the VVER-440s, does include the safety features to be found in the larger, recent PWR designs, including much more robust containment, a “core catcher”, and passive cooling to cope with a loss of coolant accident, and it will incorporate much more modern materials, control systems, and manufacturing technologies.

There have been suggestions that SMRs could be sited more widely across the country, in towns and cities outside regular nuclear sites. This isn’t the plan for any UK SMRs – they are in any case too large for this to make sense. Instead, the idea is to have multiple installations on existing licensed nuclear sites, such as Wylfa and Oldbury, with further government support towards deployment to be allocated through a competition between five potential vendors.

The other entrants to the SMR competition are two well-established vendors of large light water reactors – Westinghouse and GE-Hitachi, and two more recent entrants into the market, from the USA – Holtec and NuScale. Since none of these companies has actually delivered an SMR, the decision will have to be made on judgements about capability: experience shows us that there can be no certainty about cost until one has been built. But, in making the decision, the UK government will need to decide how strongly to weight the need to rebuild UK industrial capacity and nuclear expertise against pure “value for money” criteria.

The Next Generation? Advanced Modular Reactors

The light water SMR represents an incremental update of a technology developed in the 1950’s, at a scale that was being widely deployed in the 1970’s. Is it possible to break out from the technological lock-in of the light water reactor, to explore more of the very wide design space of possible power reactors? That is the thinking behind the idea of developing an Advanced Modular Reactor – keeping the principle of relatively small scale and factory-based modular construction, but using fundamentally different reactor designs, with different combinations of moderator and coolant, to achieve technical advantages over the light water reactor. In particular, it would be very attractive to have a reactor that ran at a significantly higher temperature than a light water reactor. A high temperature reactor would have a higher conversion efficiency to electrical power, and in addition it might be possible to use the heat directly to drive industrial processes – for example making hydrogen, as an energy vector and as a non-oil based feedstock for the petrochemical industry, including to make synthetic hydrocarbons for zero carbon aviation fuel.

We are also seeing a resurgence of interest in reactors using unmoderated (fast) neutrons. This is partly motivated by the possibility of breeding fissile material, thus increasing the efficiency of fuel use, and partly by the fact that fast neutrons can induce fission in the higher actinides that are particularly problematic as contaminants of used nuclear fuel. There’s an attractive symmetry in the idea of using the UK’s very large stock of civil plutonium to “burn up” nuclear waste.

The UK government commissioned a technical assessment of potential candidates for an advanced modular reactor. This considered fast reactors cooled by liquid metals – both sodium and lead, as well as a gas-cooled fast reactor. Another intriguing possibility that has generated recent interest is the molten salt reactor, where the fissile material is dissolved in fluoride salts. Here the molten salt acts both as fuel and coolant. Reactor designs using a thermal neutron spectrum include an evolution of the boiling water reactor which uses water in the supercritical state. All of these designs have potential advantages, but the judgement of the study was that, of these potential designs, only the sodium fast reactor was potentially close enough to deployment to be worth considering.

However, the study made a clear recommendation in favour of a high temperature, gas cooled thermal neutron reactor. Here, as in the Advanced Gas Cooled Reactors, the moderator is graphite, though the coolant is helium rather than carbon dioxide. The main difference from the AGRs is that, in order to operate at higher temperatures, the fuel is presented in spherical particles around a millimetre in diameter, in which uranium oxide is coated with graphite and encapsulated in a high temperature resistant refractory ceramic such as silicon carbide. There is considerable worldwide experience in making this so-called tristructural isotropic (TRISO) fuel, which is able to withstand operating temperatures in the 700 – 850 °C range. Modifications of these fuel particles – for example using zirconium carbide as the outer layer – could permit operation at even higher temperatures, high enough to split water into hydrogen and oxygen through purely thermochemical processes. But this would need further research.

A Chronicle of Wasted Time

What’s striking about many of the proposals for an advanced modular reactor is that the concepts are not new. For example, work on sodium cooled fast reactors began in the UK in the 1950s, with a full scale prototype being commissioned in 1974. Lead cooled reactors were built in both the USA and the USSR. Molten salt reactors perhaps represent the most radical design departure, but even here, a working prototype was developed at Oak Ridge National Laboratory, USA, in the 1960s.

One of the reasons for the UK AMR Technical Assessment favouring the High Temperature Gas Reactor is that it builds on the experience of the UK in running a fleet of gas cooled, graphite moderator reactors – the AGRs. In fact, the UK, as part of an international collaboration, operated a prototype high temperature gas reactor between 1964 and 1976 – DRAGON. It was in this project that the TRISO fuel concept was developed, which has since been used in operational high temperature gas reactors in the USA, Germany, Japan and China.

At the peak of the 1970’s energy crisis, from 1974 to 1976, construction began on more than a hundred nuclear reactors across the world. Enthusiasm for nuclear power dwindled throughout the 1980’s, suppressed on the one hand by the experience of the nuclear accidents at Three Mile Island and Chernobyl, and on the other by an era of cheap and abundant fossil fuels. In the three years between 1994 and 1996, just three new reactors were begun worldwide. In this climate, there was no appetite for new approaches to nuclear power generation, technology development stagnated, and much tacit knowledge was lost.

Some concluding thoughts

In 1989, the UK’s Prime Minister Margaret Thatcher made an important speech to the United Nations highlighting the importance of climate change. It was her proposal that the work of the Intergovernmental Panel on Climate Change be extended beyond 1992, and that there should be binding protocols on the reduction of greenhouse gases; naturally, given her political perspective, she stressed the importance of continued economic growth, and of private sector industry in driving innovation. She reasserted her support for nuclear power, which she described as “the most environmentally safe form of energy”. As far as the UK was concerned, “we shall be looking more closely at the role of non-fossil fuel sources, including nuclear, in generating energy.”

Since Thatcher’s speech, another thousand billion tonnes of carbon dioxide have been released into the atmosphere from industry and the burning of fossil fuels, leading to an increase in the atmospheric concentration of CO2 from 350 parts per million in 1989 to 427 ppm now. To be fair, one should recognise that the worldwide nuclear power industry has produced 390,000 tonnes of spent nuclear fuel, yielding 29,000 cubic metres of high level waste. This needs to be permanently disposed of in deep geological repositories, the first of which is nearing completion in Finland.

But even as Thatcher was speaking, the expansion of nuclear power was stalling. In the UK it was Thatcher’s own Chancellor of the Exchequer who had in effect killed nuclear power, through the lasting impact of his ideological commitment to privatised energy markets in an environment of cheap fossil fuels.

To be clear, what killed the UK’s nuclear energy programme was not a wrong choice of reactor design; it was a combination of high interest rates and low fossil fuel prices, all in the context of a worldwide retreat from nuclear new build, with a strong anti-nuclear movement driven by the nuclear accidents at Three Mile Island and Chernobyl, by the (correctly) perceived connection between civil nuclear power and nuclear weapons programmes, and by the problem of nuclear waste. The circumstances of the UK were particularly conducive to a continued dependence on fossil fuels; the discovery of North Sea oil and gas gave the UK, now a net energy exporter, a 15 year holiday from having to worry about the geopolitics of energy dependence.

But, for industrial nations, security of access to adequate energy supplies has always been an issue of existential importance, too often driving conflict and war. The Ukrainian war has given us a salutary reminder of the importance of energy supplies to geopolitics. Energy is never just another commodity.

The effective termination of the UK’s civil nuclear programme in the 1990’s undoubtedly saved money in the short term. That money could have been used for investment – future-proofing the UK’s infrastructure, supporting R&D to create new technologies. Political choices meant that it wasn’t – this was a period of falling public and private investment – instead it supported consumption. But there were costs, in terms of lost capacity, in industry and the state. Technological regression is possible, and one could argue that this has happened in civil nuclear power. In the UK, we have felt the loss of that capacity now that policy has changed, very directly in the failure of the last decade’s new nuclear build. Energy decisions should never just be about money.

Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular, the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in “High Technology” from post-war Britain, the decision to choose the Advanced Gas Cooled reactor design for the UK’s second generation reactor programme was forced through by “state technocrats, hugely influential scientists and engineers from the technical branches of the civil service”; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US designed Pressurised Water Reactor, making what Henderson argued, in his highly influential article “Two British Errors: Their Probable Size and Some Possible Lessons”, was one of the UK government’s biggest policy errors? To go some way to answering that, it’s necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy and emitting a handful of extra neutrons, which go on to cause more fissions. The dominant fissile material in today’s civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to “enrich” the uranium, increasing the concentration of U-235 – typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons grade uranium is different in degree, not kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.
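The “degree, not kind” point can be made quantitative with the standard separative work unit (SWU) calculation used in the enrichment industry. A minimal sketch, assuming a typical tails assay of 0.25% (the feed and tails figures are standard values, not from the text):

```python
import math

def V(x):
    """Separative-potential value function for assay x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_and_feed(product_kg, xp, xf=0.0072, xw=0.0025):
    """Separative work (kg-SWU) and natural-uranium feed (kg) needed to
    make `product_kg` of uranium at enrichment xp, from feed assay xf,
    with tails assay xw, using the standard mass and value balances."""
    feed = product_kg * (xp - xw) / (xf - xw)
    waste = feed - product_kg
    swu = product_kg * V(xp) + waste * V(xw) - feed * V(xf)
    return swu, feed

for label, xp in [("4.5% reactor fuel", 0.045), ("90% weapons grade", 0.90)]:
    swu, feed = swu_and_feed(1.0, xp)
    print(f"1 kg of {label}: ~{swu:.1f} kg-SWU, ~{feed:.0f} kg natural U")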

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because of the much lower fission cross-section for fast neutrons, this needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium).

Moderators need to be made of a light element which doesn’t absorb too many neutrons. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) or deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more than deuterium does, so it’s a less ideal moderator, but ordinary water is obviously much cheaper than heavy water.
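Textbook slowing-down theory makes the comparison between moderators concrete: the mean logarithmic energy loss per elastic collision, ξ, depends only on the nucleus’s mass number, and from it one can estimate how many collisions are needed to thermalise a fission neutron. A minimal sketch:

```python
import math

def xi(A):
    """Mean logarithmic energy loss per elastic collision with a
    nucleus of mass number A (xi = 1 exactly for hydrogen, A = 1)."""
    if A == 1:
        return 1.0
    return 1 + ((A - 1) ** 2 / (2 * A)) * math.log((A - 1) / (A + 1))

# Collisions needed to slow a 2 MeV fission neutron to thermal (0.025 eV)
total_lethargy = math.log(2e6 / 0.025)   # ~18.2
for name, A in [("hydrogen", 1), ("deuterium", 2), ("carbon", 12)]:
    print(f"{name:9s}: ~{round(total_lethargy / xi(A))} collisions")
```

Hydrogen needs only about 18 collisions, deuterium about 25, and carbon about 115 – which is why graphite cores are large, and why hydrogen’s greater neutron absorption, rather than its slowing-down power, is its weakness as a moderator.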

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictate, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies is in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors (“light water” in contrast to “heavy water”, in which the normal hydrogen of ordinary water is replaced by hydrogen’s heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient to use to drive a steam turbine to generate electricity. But it’s not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.

These weren’t problems for the original use of pressurised water reactors (PWRs), the most common type of light water reactor. (The other variety, the boiling water reactor, similarly uses light water as both coolant and moderator; the difference is that steam is generated directly in the reactor core rather than in a secondary circuit.) PWRs were designed to power submarines, in a military context where enriched uranium was readily available, and where a compact size is a great advantage. But that compact, water-cooled core underlies the great weakness of light water reactors – their susceptibility to what’s known as a “loss of coolant accident”. The problem is that, if for some reason the flow of cooling water is stopped, even if the chain reaction is quickly shut down (and this isn’t difficult to do), the fuel produces so much heat through its radioactive decay that it can melt the fuel rods, as happened at Three Mile Island. What’s worse, the alloy the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.
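The scale of that decay heat can be estimated with the Way–Wigner approximation, a standard rough formula (illustrative only, nothing like a safety calculation); the one-year operating period below is an assumption:

```python
# Way-Wigner approximation for decay heat after shutdown:
# P/P0 ~ 0.066 * (t**-0.2 - (t + T)**-0.2),
# with t = seconds since shutdown, T = seconds of prior operation.

def decay_heat_fraction(t, T=3.15e7):   # assume ~1 year of operation
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

for label, t in [("10 s", 10), ("1 hour", 3600), ("1 day", 86400)]:
    print(f"{label:7s}: {100 * decay_heat_fraction(t):.2f}% of full power")
```

Even an hour after shutdown the core still produces around 1% of its full power – tens of megawatts for a large reactor – which is why cooling must continue long after the chain reaction has stopped.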

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard-to-make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it’s of sufficiently high purity). But since graphite is a solid, a separate coolant is needed. The UK’s Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that formed the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the lower neutron absorption of the moderator and coolant, compared with light water, means that the core is less compact.
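The thermodynamic advantage of a higher coolant temperature follows from the Carnot limit on converting heat to work, 1 − T_cold/T_hot (temperatures in kelvin). The outlet temperatures below are typical published figures for each reactor type, used here purely for illustration:

```python
# Carnot limit on heat-to-work conversion for typical coolant
# outlet temperatures. Real plants reach well below this limit
# (roughly 33% for PWRs versus about 41% for AGRs).

def carnot_limit(t_hot_c, t_cold_c=30):
    """Carnot efficiency limit, with temperatures given in Celsius."""
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

for name, t_out in [("PWR (~320 C outlet)", 320), ("AGR (~640 C outlet)", 640)]:
    print(f"{name}: Carnot limit {100 * carnot_limit(t_out):.0f}%")
```

Actual plant efficiencies are roughly half to two-thirds of the Carnot figure, but the gap between the two limits carries through to the real-world difference in conversion efficiency.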

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of other nuclear nations, USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history sets a path-dependence to the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since it’s only the minority uranium-235 isotope that is fissile, it’s necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.

As the history of nuclear weapons is usually told, it is the physicists who are usually given the most prominent role. But there’s an argument that the crucial problems to be overcome were as much ones of chemical engineering as physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any process needs to rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed “Tube Alloys”. In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature of it is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, arguably it represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI in Runcorn in the 1930’s, and came to fruition with the establishment of a full-scale gaseous diffusion plant in Capenhurst, Cheshire, in the late 40s and early 50s.

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before a formal decision to make a nuclear weapon was made, in 1947, the infrastructure for the UK’s own nuclear weapons programme had been put in place, reflecting the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK, to make plutonium. This reflected the experience of the Manhattan project, which had highlighted the potential of the plutonium route to a nuclear weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but it was easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239, but also small amounts of another isotope, plutonium-240. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and a low explosive yield. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the level of plutonium-240 that can be tolerated in weapons-grade plutonium, and these impose constraints on the design of the reactors used to produce it.

The two initial UK plutonium production reactors were built in Sellafield – the Windscale Piles. The fuel was natural, unenriched, uranium (necessarily, because the uranium enrichment plant in Capenhurst had not yet been built), so this dictated the use of a graphite moderator. The reactors were air-cooled. The first reactor started operations in 1951, with the first plutonium produced in early 1952, enabling the UK’s first, successful, nuclear weapon test in October 1952.

But even as the UK’s first atom bomb test succeeded, it was clear that the number of weapons the UK’s defence establishment was calling for would demand more plutonium than the Windscale Piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second-generation plutonium-producing reactors also generating power. The design would use graphite moderation, as in the Windscale Piles, and natural uranium as fuel, but rather than being air-cooled, the coolant was high-pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual-purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It’s important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature this requirement dictated was the ability to remove fuel rods while the reactor was running; for weapons-grade plutonium, the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under the control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons-grade plutonium, using longer burn-up times that produced plutonium with higher concentrations of Pu-240. This so-called “civil plutonium” was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of it. Did the civil Magnox reactors produce any weapons-grade plutonium? I don’t know, but I believe there is no technical reason that would have prevented it.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron colliding with a fissile nucleus is smaller than for a slow neutron, so this means that a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both need to be in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is good enough as a coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.

In the 1940s and 50s, the availability of uranium relative to the demand of weapons programmes was severely limited, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK’s nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, which was completed by 1959. Using this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a 250 MW design electrical output.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for breeder reactors even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn’t need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950’s, and the UK’s first nuclear powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn’t use UK nuclear technology; instead it was powered by a reactor of US design, a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy was an abrasive and driven figure, Admiral Rickover. Rickover ran the US Navy’s project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940’s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid sodium cooled, beryllium moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the best (and particularly, the most reliable) design, and has been subsequently used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK’s effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK’s Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company-to-company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government-to-government agreement. The second was that it was a one-off – Rolls-Royce would have a licence to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK’s naval reactor programme – in which Rolls-Royce has developed successive iterations of the naval PWR design – and the rest of the national nuclear enterprise.

Rickover’s rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower’s 1953 “Atoms for Peace” speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station based on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it established Westinghouse’s position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. A submarine reactor needs to be highly compact and self-contained, and should be able to go for long periods without refuelling, all of which dictates the use of highly enriched – essentially weapons-grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960’s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it’s a profound mistake to suppose that in choosing between different technological approaches to nuclear power, it is simply a question of choosing between a menu of different options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which ones have been neglected. It’s this that establishes what comprises the base of technological capability and underpinning knowledge – both codified and tacit – that will be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we’ve seen, the scope of that system depends not just on a nation’s ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just four years were remarkable – but they were achieved at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive clean-up of the nuclear waste left at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three-day week, and the sense of national chaos contributed to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be rising inexorably. For energy importers – and the UK was still importing around half its energy in the early 1970’s – security of energy supply suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that has already been built – uranium enrichment plants, reprocessing facilities, and so on. The stock of knowledge acquired in past R&D programmes is shaped by the problems that emerged during those programmes, so starting work on a different class of reactor would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise developed in past programmes – whether in the understanding of reactor physics needed to run them efficiently, or in the construction and manufacturing techniques needed to build them cheaply and effectively – will be specific to the particular technologies implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the first class of power reactor ever developed – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a different application – powering submarines – to the one it ended up being widely implemented for – generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, in the UK programmes there were worries about the safety of PWRs. These were particularly forcefully expressed by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was government Chief Scientific Advisor between 1971 and 1974. Perhaps, after Three Mile Island and Fukushima, one might wonder whether these worries were not entirely misplaced.

Later in the programme, while there may have been some acknowledgement among its proponents that the early AGR building programme hadn’t gone well, there was a view that the teething problems had been more or less ironed out. I haven’t managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in Parliament that Torness was on track to be delivered on a budget of £1.1 bn (1980 prices), which is not a great deal different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK’s nuclear establishment’s preference for the AGR over the PWR was from a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK’s nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear powered future. But that future was foreclosed by the final run-down of the UK’s nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK’s faltering post 2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but it also needs to understand the historical context – and in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? On the contrary, projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running, producing about 60 TWh of non-intermittent, low carbon power a year; in 2019 its output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme as it appears more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison, perhaps, is to express it as a fraction of GDP – in those terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike caused by the invasion of Ukraine – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.
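The two scalings above can be sketched as a quick calculation. The deflator and nominal GDP figures below are illustrative assumptions, chosen only to be consistent with the figures quoted in the text; they are not authoritative statistics.

```python
# Scaling Henderson's 1977 loss estimate two ways.
# The price multiplier and nominal GDP figures are assumptions,
# chosen to be roughly consistent with the figures in the text.

loss_1977 = 2.1e9              # Henderson's upper estimate, 1977 pounds

# 1. General price inflation, 1977 -> 2021 (assumed multiplier)
price_multiplier = 6.6
loss_2021_money = loss_1977 * price_multiplier     # a bit under 14bn

# 2. Holding the loss constant as a share of nominal GDP
gdp_1977 = 146e9               # assumed UK nominal GDP, 1977
gdp_2021 = 2270e9              # assumed UK nominal GDP, 2021
loss_gdp_share = loss_1977 / gdp_1977 * gdp_2021   # roughly 30bn

print(f"In 2021 money: £{loss_2021_money / 1e9:.0f}bn")
print(f"As a share of GDP: £{loss_gdp_share / 1e9:.0f}bn")
```

The GDP-share scaling gives the larger number because the UK economy has grown in real terms since 1977, not just in nominal prices.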

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power changed dramatically. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss-of-coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. A loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, led to an explosion and fire which dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to this kind of accident. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three-year public inquiry and an eight-year building period, at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, in which the UK built light water reactors instead of AGRs, was based on an estimated light water reactor cost of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
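As a rough check on the arithmetic, Henderson’s counterfactual can be reconstructed as follows. The 1973-to-1987 inflation multiplier is an assumed round figure, chosen to be consistent with the ~£800m quoted above rather than taken from any price index.

```python
# Reconstructing the Sizewell B comparison in the text.

cost_per_kw_1973 = 132.0        # Henderson's LWR estimate, pounds/kW, 1973 prices
capacity_kw = 1.2e6             # Sizewell B: 1.2 GW

cost_1973 = cost_per_kw_1973 * capacity_kw          # ~158m at 1973 prices

inflation_1973_to_1987 = 5.0    # assumed general price multiplier, 1973 -> 1987
expected_1987 = cost_1973 * inflation_1973_to_1987  # ~800m at 1987 prices

actual_1987 = 2.0e9             # reported out-turn cost, 1987 prices
print(f"Expected: £{expected_1987 / 1e6:.0f}m; actual: £{actual_1987 / 1e6:.0f}m")
print(f"Overrun factor: {actual_1987 / expected_1987:.1f}x")
```

On these assumptions, Sizewell B came in at roughly two and a half times the cost Henderson’s counterfactual would have implied.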

Sizewell B was a first-of-a-kind reactor, so one would expect subsequent reactors built to the same design to fall in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
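This difference in cost structure can be made concrete with a toy annualised-cost calculation. All the numbers here are illustrative assumptions, not estimates for any real plant; the point is only the sensitivity of capital-heavy nuclear, versus fuel-heavy fossil generation, to the cost of borrowing.

```python
# A minimal sketch of why interest rates matter far more for nuclear
# than for fossil generation. All plant parameters are illustrative.

def crf(rate, years):
    """Capital recovery factor: the annual payment per pound of capital
    for a loan at the given rate, repaid over the given number of years."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def cost_per_mwh(capex_per_kw, fuel_om_per_mwh, rate, years=40, load_factor=0.85):
    # Energy produced per kW of capacity per year (8760 h = 8.76 MWh/kW-year)
    mwh_per_kw_year = 8.76 * load_factor
    capital = capex_per_kw * crf(rate, years) / mwh_per_kw_year
    return capital + fuel_om_per_mwh

# Illustrative assumptions: nuclear = high capex, low fuel cost;
# gas = low capex, high fuel cost.
for rate in (0.03, 0.10):
    nuclear = cost_per_mwh(capex_per_kw=4000, fuel_om_per_mwh=15, rate=rate)
    gas = cost_per_mwh(capex_per_kw=800, fuel_om_per_mwh=45, rate=rate)
    print(f"rate {rate:.0%}: nuclear £{nuclear:.0f}/MWh, gas £{gas:.0f}/MWh")
```

With these made-up numbers, nuclear is the cheaper source at a 3% cost of capital and the dearer one at 10% – which is the arithmetic behind the observation, later in the post, that high interest rates and low fuel prices in the 1990s left no place in the market for nuclear energy.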

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to curb the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984–85 miners’ strike. Despite the fact that coal-fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp-up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990’s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade, natural gas’s share of electricity generation rose from 1.3% to nearly 30% in 2000. Low fossil fuel prices, together with high interest rates, made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. A worldwide gas market was, however, opening up, thanks to a combination of intercontinental pipelines (especially from Russia) and an expanding market in liquified natural gas carried by tanker from huge fields in, for example, the Middle East. So for a long time policy-makers were relaxed about the growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike that began in 2021, intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision-making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

Implications of Rachel Reeves’s Mais Lecture for Science & Innovation Policy

There will be a general election in the UK this year, and it is not impossible (to say the least) that the Labour opposition will form the next government. What might such a government’s policies imply for science and innovation policy? There are some important clues in a recent, lengthy speech – the 2024 Mais Lecture – given by the Shadow Chancellor of the Exchequer, Rachel Reeves, in which she sets out her economic priors.

In the speech, Reeves sets out what she sees as the underlying problems of the UK economy – slow productivity growth leading to wage stagnation, low investment levels, poor skills (especially intermediate and technical) and “vast regional disparities, with all of England’s biggest cities outside London having productivity levels below the national average”. I think this analysis is now close to being a consensus view – see, for example, this recent publication – The Productivity Agenda – from The Productivity Institute.

Interestingly, Reeves resists the temptation to blame everything on the current government, stressing that this situation reflects long-standing weaknesses that began in the early 1990’s, were not sufficiently challenged by the Labour governments of the late 90’s and 00’s, and were then made much worse in the 2010’s by Austerity, Brexit, and post-pandemic policy instability. Singling out Conservative Chancellor of the Exchequer Nigel Lawson as the author of policies that were both wrong in principle and badly executed, she identifies this period as the root of “an unprecedented surge in inequality between places and people which endures today. The decline or disappearance of whole industries, leaving enduring social and economic costs and hollowing out our industrial strength. And – crucially – diminishing returns for growth and productivity.”

To add to our problems, Reeves stresses that the external environment the UK now faces is much more challenging than in previous decades, with geopolitical instability reviving the basic question of national security, uncertainties from new technologies like AI, and the challenges of climate instability and the net zero energy transition. She is blunt in saying that “globalisation, as we once knew it, is dead”, and that “a growth model reliant on geopolitical stability is a growth model resting on increasingly shallow foundations.”

What comes next? For Reeves, the new questions are “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”, and the answers are to be found in what economist Dani Rodrik calls “productivism”.

In practice, this means an industrial strategy which, recognising the limits of central government’s information and capacity to act, works in partnership. This needs to have both a sector focus – building on the UK’s existing areas of comparative advantage and its strategic needs – and a regional focus, working with local and regional government to support the development of clusters and the realisation of agglomeration benefits.

In terms of the mechanics of the approach, Reeves anticipates that this central mission of government – restoring economic growth – will be driven from the Treasury, through a beefed-up “Enterprise and Growth” unit. To realise these ambitions, she identifies three areas of focus: recreating macroeconomic stability; investment, particularly in partnership with the private sector; and reform – of the planning system, housing, skills, the labour market and regional governance.

Innovation is a central part of Reeves’s vision for increased investment, partly through the familiar call for more capital to flow to university spin-outs. But there is also a call for more focus on the diffusion of new technologies across the whole economy, including what Reeves has long called the “everyday economy”. In my view, this is correct, but will need new institutions, or the adaptation of existing ones (as I argued, with Eoin O’Sullivan: “What’s missing in the UK’s R&D landscape – institutions to build innovation capacity”). There is a very sensible commitment to a ten year funding cycle for R&D institutions, essential not least because some confidence in the longevity of programmes is essential to give the private sector the confidence to co-invest.

This was quite a dense speech, and the commentary around it – including the pre-briefing from Labour – was particularly misleading. I think it would be a mistake to underestimate how much of a break it represents from the conventional economic wisdom of the past three decades, though the details of the policy programme remain to be filled in, and, as many have commented, its implementation in a very tough fiscal environment is going to be challenging. Our current R&D landscape isn’t ideally configured to support these aspirations and the UK’s current challenges (as I argue in my long piece “Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape”); I’d anticipate some reshaping to support the “missions” that are intended to give some structure to the Labour programme. And, as Reeves says unequivocally, of these missions, the goal of restoring productivity and economic growth is foundational.

Optical fibres and the paradox of innovation

Here is one of the foundational papers for the modern world – in effect, reporting the invention of optical fibres. Without optical fibres, there would be no internet, no on-demand video – and no globalisation, in the form we know it, with the highly dispersed supply chains made possible by the cheap and reliable transmission of information between nations and continents. This won a Nobel Prize for Charles Kao, a Hong Kong Chinese scientist then working at STL in Essex, a now defunct corporate laboratory.

Optical fibres are made of glass – so, ultimately, they come from sand – as Ed Conway’s excellent recent book, “Material World” explains. To make optical fibres a practical proposition needed lots of materials science to make glass pure enough to be transparent over huge distances. Much of this was done by Corning in the USA.

Who benefitted from optical fibres? The value of optical fibres to the world economy isn’t fully captured by their monetary value. As with all manufactured goods, productivity gains have driven their price down to almost negligible levels.

At the moment, the whole world is being wired with optical fibres, connecting people, offices and factories to superfast broadband. Yet the world trade in optical fibres is worth just $11 bn, less than 0.05% of total world trade. This is characteristic of that most misunderstood phenomenon in economics, Baumol’s so-called “cost disease”.

New inventions successively transform the economy, while innovation makes their price fall so far that, ultimately, in money terms they are barely detectable in GDP figures. Nonetheless, society benefits from innovations that are taken for granted through ubiquity and low cost. (An earlier blog post of mine illustrates how Baumol’s “cost disease” works through a toy model.)
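In the spirit of that toy model, here is a minimal two-sector sketch of the mechanism. The numbers are my own illustrative assumptions (3% annual productivity growth in the “progressive” sector, none in the “stagnant” one, a common wage that tracks average productivity, and constant real demand for each sector’s output), not estimates from data:

```python
# Toy two-sector illustration of Baumol's "cost disease": the innovating
# sector's price falls relative to wages, so its share of nominal GDP
# shrinks even though its real output is what transforms the economy.
def baumol_shares(years=50, g_progressive=0.03):
    wage = 1.0                        # common wage across both sectors
    prod_prog, prod_stag = 1.0, 1.0   # output per worker in each sector
    for t in range(years + 1):
        # with zero markup, price = unit labour cost = wage / productivity
        price_prog = wage / prod_prog
        price_stag = wage / prod_stag
        # equal real output in each sector; nominal spending = price x output
        share_prog = price_prog / (price_prog + price_stag)
        if t % 25 == 0:
            print(f"year {t:2d}: progressive-sector price {price_prog:.2f}, "
                  f"share of nominal GDP {share_prog:.1%}")
        prod_prog *= 1 + g_progressive   # innovation raises productivity...
        wage *= 1 + g_progressive / 2    # ...and economy-wide wages follow

baumol_shares()
```

The progressive sector’s share of nominal GDP falls from 50% towards under 20% over fifty years – exactly the pattern of optical fibres being everywhere yet a rounding error in trade statistics.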

To have continued economic growth, we need to have repeated cycles of invention & innovation like this. 30 years ago, corporate labs like STL were the driving force behind innovations like these. What happened to them?

Standard Telecommunication Laboratories in Harlow was the corporate lab of STC, Standard Telephones and Cables, a subsidiary of ITT, with a long history of innovation in electronics, telephony, radio communications and TV broadcasting in the UK. After a brief period of independence from 1982, STC was bought by Nortel, the Canadian descendant of the North American Bell System. Nortel needed a massive restructuring after the late-90’s internet bubble, and went bankrupt in 2009. The STL labs were demolished and are now a business park.

The demise of Standard Telecommunication Laboratories was just one example of the slow death of UK corporate laboratories through the 90’s and 00’s, driven by changing norms in corporate governance and growing short-termism. These changes were well described in the 2012 Kay Review of UK Equity Markets and Long-Term Decision Making. This has led, in my opinion, to a huge weakening of the UK’s innovation capacity, whose economic effects are now becoming apparent.

Science and Innovation in the 2023 Autumn Statement

On the 22nd November, the Government published its Autumn Statement. This piece, published in Research Professional under the title Economic clouds cast gloom over the UK’s ambitions for R&D, offers my somewhat gloomy perspective on the implications of the statement for science and innovation.

This government has always placed a strong rhetorical emphasis on the centrality of science and innovation in its plans for the nation, though with three different Prime Ministers, there’ve been some changes in emphasis.

This continues in the Autumn Statement: a whole section is devoted to “Supporting the UK’s scientists and innovators”, building on the March 2023 publication of a “UK Science and Technology Framework”, which recommitted to increasing total public spending on research to £20 billion in FY 2024/25. But before going into detail on the new science-related announcements in the Autumn Statement, let’s step back to look at the wider economic context in which innovation strategy is being made.

There are two giant clouds in the economic backdrop to the Autumn Statement. One is inflation; the other is economic growth – or, to be more precise, the lack of it.

Inflation, in some senses, is good for governments. It allows them to raise taxes without the need for embarrassing announcements, as people’s cost-of-living wage rises take them into higher tax brackets. And by simply failing to raise budgets in line with inflation, public spending cuts can be imposed by default. But if it’s good for governments, it’s bad for politicians, because people notice rising prices, and they don’t like it. And the real effects of stealth public spending cuts do, nonetheless, materialise.

The effect of the inflation we’ve seen since 2021 is a rise in price levels of around 20%; while the inflation rate peak has surely passed, prices will continue to rise. We can already see the effect on the science budget. Back in 2021, the Comprehensive Spending Review announced a significant increase in the overall government research budget, from £15 billion to £20 billion in 24/25. By next year, though, the effect of inflation will have been to erode that increase in real terms, from £5 billion to less than £2 billion in 2021 money. The effect on Core Research is even more dramatic; in effect inflation will have almost totally wiped out the increase promised in 2021.
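The arithmetic behind that erosion is simple enough to sketch – a rough deflation of the cash-terms settlement by the ~20% price-level rise mentioned above (treat the 20% as the approximation it is, not an official deflator):

```python
# Rough real-terms arithmetic on the 2021 research settlement:
# a cash rise from £15bn to £20bn, deflated by ~20% cumulative inflation.
baseline_2021 = 15.0      # £bn, total government research budget, 2021
planned_2425 = 20.0       # £bn, cash-terms budget promised for 2024/25
price_level_rise = 0.20   # ~20% rise in prices between 2021 and 2024/25

real_2425 = planned_2425 / (1 + price_level_rise)   # in 2021 money
real_increase = real_2425 - baseline_2021
print(f"£{planned_2425:.0f}bn in 2024/25 is worth £{real_2425:.2f}bn in 2021 money")
print(f"real-terms increase: £{real_increase:.2f}bn, not the £5bn announced")
```

On these assumptions the £5 billion cash increase shrinks to about £1.7 billion in 2021 money – the “less than £2 billion” of the text.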

Our other problem is persistent slow economic growth, as I discussed here. The underlying cause of this is the dramatic decrease in productivity growth since the financial crisis of 2008. The consequence is the prospect of two full decades without any real growth in wages, and, for the government, the need to simultaneously increase the tax burden and squeeze public services in an attempt to stabilise public debt.

The detailed causes of the productivity slowdown are much debated, but the root of it seems to be the UK’s persistent lack of investment, both public and private (see The Productivity Agenda for a broad discussion). Relatively low levels of R&D are part of this. The most significant policy change in the Autumn Statement does recognise this – it is a tax break allowing companies to set the full cost of new plant and machinery against corporation tax. On the government side, though, the plans are essentially for overall flat capital spending – i.e., taking into account inflation, a real terms cut. Government R&D spending falls in this overall envelope, so is likely to be under pressure.

Instead, the government is putting their hopes on the private sector stepping up to fill the gap, with a continuing emphasis on measures such as R&D tax credits to incentivise private sector R&D, and reforms to the pension system – including the “Long-term Investment for Technology and Science (LIFTS)” initiative – to bring more private money into the research system. The ambition for the UK to be a “Science Superpower” remains, but the government would prefer not to have to pay for it.

One significant set of announcements – on the “Advanced Manufacturing Plan” – marks the next phase in the Conservatives’ off-again, on-again relationship with industrial strategy. Commitments to support advanced manufacturing sectors such as aerospace, automobiles and pharmaceuticals, as well as the “Made Smarter” programme for innovation diffusion, are very welcome. The sums themselves perhaps shouldn’t be taken too seriously; the current government can’t bind its successor, whatever its colour, and anyway this money will have to be found within the overall spending envelope produced by the next Comprehensive Spending Review. But it is very welcome that, after the split-up of the Department for Business, Energy and Industrial Strategy, the successor Department for Business and Trade still maintains an interest in research and innovation in support of mainstream business sectors, rather than assuming that is all now to be left to its sister Department for Science, Innovation and Technology.

For all the efforts to create a tax-cutting headline, the economic backdrop for this Autumn Statement is truly grim. There is no rosy scenario for the research community to benefit from; the question we face instead is how to fulfil the promises we have been making that R&D can indeed lead to productivity growth and economic benefit.

Productivity and artificial intelligence

To scientists, machine learning is a relatively old technology. The last decade has seen considerable progress, both as a result of new techniques – back propagation and deep learning, and the transformer architecture – and massive investment of private sector resources, especially computing power. The result has been the striking and hugely publicised success of large language models.

But this rapid progress poses a paradox – for all the technical advances over the last decade, the impact on productivity growth has been undetectable. The productivity stagnation that has been such a feature of the last decade and a half continues, with all the deleterious effects that produces in flat-lining living standards and challenging public finances. The situation is reminiscent of an earlier, 1987, comment by the economist Robert Solow: “You can see the computer age everywhere but in the productivity statistics.”

There are two possible resolutions of this new Solow paradox – one optimistic, one pessimistic. The pessimist’s view is that, in terms of innovation, the low-hanging fruit has already been taken. In this perspective – most famously stated by Robert Gordon – today’s innovations are actually less economically significant than innovations of previous eras. Compared to electricity, Fordist manufacturing systems, mass personal mobility, antibiotics, and telecoms, to give just a few examples, even artificial intelligence is only of second order significance.

To add further to the pessimism, there is a growing sense that the process of innovation itself is suffering from diminishing returns – in the words of a famous recent paper: “Are ideas getting harder to find?”.

The optimistic view, by contrast, is that the productivity gains will come, but they will take time. History tells us that economies need time to adapt to new general purpose technologies – infrastructures & business models need to be adapted, and the skills to use them need to be spread through the working population. This was the experience with the introduction of electricity to industrial processes – factories had been configured around the need to transmit mechanical power from central steam engines through elaborate systems of belts and pulleys to the individual machines, so it took time to introduce systems where each machine had its own electric motor, and the period of adaptation might even involve a temporary reduction in productivity. Hence, one might expect a new technology to follow a J-shaped curve.

Whether one is an optimist or a pessimist, there are a number of common research questions that the rise of artificial intelligence raises:

  • Are we measuring productivity right? How do we measure value in a world of fast moving technologies?
  • How do firms of different sizes adapt to new technologies like AI?
  • How important – and how rate-limiting – is the development of new business models in reaping the benefits of AI?
  • How do we drive productivity improvements in the public sector?
  • What will be the role of AI in health and social care?
  • How do national economies make system-wide transitions? When economies need to make simultaneous transitions – for example net zero and digitalisation – how do they interact?
  • What institutions are needed to support the faster and wider diffusion of new technologies like AI, & the development of the skills needed to implement them?
  • Given the UK’s economic imbalances, how can regional innovation systems be developed to increase absorptive capacity for new technologies like AI?

A finer-grained analysis of the origins of our productivity slowdown actually deepens the new Solow paradox. It turns out that the productivity slowdown has been most marked in the most tech-intensive sectors. In the UK, the most careful decomposition similarly finds that it’s the sectors normally thought of as most tech intensive that have contributed to the slowdown – transport equipment (i.e., automobiles and aerospace), pharmaceuticals, computer software and telecoms.

It’s worth looking in more detail at the case of pharmaceuticals to see how the promise of AI might play out. The decline in productivity of the pharmaceutical industry follows several decades in which, globally, the productivity of R&D – expressed as the number of new drugs brought to market per $billion of R&D – has been falling exponentially.

There’s no clearer signal of the promise of AI in the life sciences than the effective solution of one of the most important fundamental problems in biology – the protein folding problem – by DeepMind’s program AlphaFold. Many proteins fold into a unique three-dimensional structure, whose precise details determine the protein’s function – for example in catalysing chemical reactions. This three-dimensional structure is determined by the (one-dimensional) sequence of different amino acids along the protein chain. Given the sequence, can one predict the structure? This problem had resisted theoretical solution for decades, but AlphaFold, using deep learning to establish the correlations between sequence and many experimentally determined structures, can now predict unknown structures from sequence data with great accuracy and reliability.

Given this success in an important problem from biology, it’s natural to ask whether AI can be used to speed up the process of developing new drugs – and not surprising that this has prompted a rush of money from venture capitalists. One of the most high profile start-ups in the UK pursuing this is BenevolentAI, floated on the Amsterdam Euronext market in 2021 with a €1.5 billion valuation.

Earlier this year, it was reported that BenevolentAI was laying off 180 staff after one of its drug candidates failed in phase 2 clinical trials. Its share price has plunged, and its market cap now stands at €90 million. I’ve no reason to think that BenevolentAI is anything but a well run company employing many excellent scientists, and I hope it recovers from these setbacks. But what lessons can be learnt from this disappointment? Given that AlphaFold was so successful, why has it been harder than expected to use AI to boost R&D productivity in the pharma industry?

Two factors made the success of AlphaFold possible. Firstly, the problem it was trying to solve was very well defined – given a certain linear sequence of amino acids, what is the three dimensional structure of the folded protein? Secondly, it had a huge corpus of well-curated public domain data to work on, in the form of experimentally determined protein structures, generated through decades of work in academia using x-ray diffraction and other techniques.

What’s been the problem in pharma? AI has been valuable in generating new drug candidates – for example, by identifying molecules that will fit into particular parts of a target protein molecule. But, according to pharma analyst Jack Scannell [1], it isn’t identifying candidate molecules that is the rate-limiting step in drug development. Instead, the problem is the lack of screening techniques and disease models that have good predictive power.

The lesson here, then, is that AI is very good at solving the problems it is well adapted for – well-posed problems, where there exist big and well-curated datasets that span the problem space. Its contribution to overall productivity growth, though, will depend on whether those AI-susceptible parts of the overall problem are in fact the rate-limiting steps.

So how is the situation changed by the massive impact of large language models? This new technology – “generative pre-trained transformers” – consists of text-prediction models that establish statistical relationships between words through a massively multi-parameter regression over a very large corpus of text [3]. This has, in effect, automated the production of plausible, though derivative and not wholly reliable, prose.

Naturally, sectors for which this is the stock-in-trade feel threatened by this development. What’s absolutely clear is that this technology has essentially solved the problem of machine translation; it also raises some fascinating fundamental issues about the deep structure of language.

What areas of economic life will be most affected by large language models? It’s already clear that these tools can significantly speed up writing computer code. Any sector in which it is necessary to generate boiler-plate prose – in marketing, routine legal services, and management consultancy – is likely to be affected. Similarly, the assimilation of large documents will be assisted by the capability of LLMs to provide synopses of complex texts.

What does the future hold? There is a very interesting discussion to be had, at the intersection of technology, biology and eschatology, about the prospects for “artificial general intelligence”, but I’m not going to take that on here, so I will focus on the near term.

We can expect further improvements in large language models. There will undoubtedly be improvements in efficiencies as techniques are refined and the fundamental understanding of how they work is improved. We’ll see more specialised training sets, that might improve the (currently somewhat shaky) reliability of the outputs.

There is one issue that might prove limiting. The rapid improvement we’ve seen in the performance of large language models has been driven by exponential increases in the amount of computer resource used to train the models, with empirical scaling laws emerging to allow extrapolations. The cost of training these models is now measured in $100 millions – with associated energy consumption starting to be a significant contribution to global carbon emissions. So it’s important to understand the extent to which the cost of computer resources will be a limiting factor on the further development of this technology.
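To make the scaling-law point concrete, here is a sketch using a Chinchilla-style loss law, L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants are of the kind reported for the published Chinchilla fits, but the whole exercise should be read as illustrative, not as a tool for real cost planning:

```python
# Illustrative Chinchilla-style scaling law: predicted training loss as a
# function of model size N and training tokens D. Constants are assumed
# values in the spirit of the published fits, not a definitive calibration.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted training loss for n_params parameters and n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both N and D (so roughly quadrupling compute, cost ~ 6*N*D FLOPs)
# buys a shrinking improvement in loss - diminishing returns to compute.
for n, d in [(7e10, 1.4e12), (1.4e11, 2.8e12)]:
    flops = 6 * n * d
    print(f"N={n:.0e}, D={d:.0e}: ~{flops:.1e} FLOPs, "
          f"predicted loss {loss(n, d):.3f}")
```

The irreducible term E means extrapolation flattens out: each factor-of-four increase in compute spend shaves less off the loss, which is why training cost can become the binding constraint.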

As I’ve discussed before, the exponential increases in computer power given to us by Moore’s law, and the corresponding decreases in cost, began to slow in the mid-2000’s. A recent comprehensive study of the cost of computing by Diane Coyle and Lucy Hampton puts this in context [2]. This is summarised in the figure below:

The cost of computing with time. The solid lines represent best fits to a very extensive data set collected by Diane Coyle and Lucy Hampton; the figure is taken from their paper [2]; the annotations are my own.

The highly specialised integrated circuits that are used in huge numbers to train LLMs – such as the H100 graphics processing units designed by Nvidia and manufactured by TSMC that are the mainstay of the AI industry – are in a regime where performance improvements come less from the increasing transistor densities that gave us the golden age of Moore’s law, and more from incremental improvements in task-specific architecture design, together with simply multiplying the number of units.

For more than two millennia, human cultures in both east and west have used capabilities in language as a signal for wider abilities. So it’s not surprising that large language models have seized the imagination. But it’s important not to mistake the map for the territory.

Language and text are hugely important for how we organise and collaborate to collectively achieve common goals, and for the way we preserve, transmit and build on the sum of human knowledge and culture. So we shouldn’t underestimate the power of tools which facilitate that. But equally, many of the constraints we face require direct engagement with the physical world – whether that is through the need to get the better understanding of biology that will allow us to develop new medicines more effectively, or the ability to generate abundant zero carbon energy. This is where those other areas of machine learning – pattern recognition, finding relationships within large data sets – may have a bigger contribution.

Fluency with the written word is an important skill in itself, so the improvements in productivity that will come from the new technology of large language models will arise in places where speed in generating and assimilating prose is the rate-limiting step in the process of producing economic value. For machine learning and artificial intelligence more widely, the rate at which productivity growth will be boosted will depend, not just on developments in the technology itself, but on the rate at which other technologies and other business processes are adapted to take advantage of AI.

I don’t think we can expect large language models, or AI in general, to be a magic bullet to instantly solve our productivity malaise. It’s a powerful new technology, but as for all new technologies, we have to find the places in our economic system where they can add the most value, and the system itself will take time to adapt, to take advantage of the possibilities the new technologies offer.

These notes are based on an informal talk I gave on behalf of the Productivity Institute. It benefitted a lot from discussions with Bart van Ark. The opinions, though, are entirely my own and I wouldn’t necessarily expect him to agree with me.

[1] J.W. Scannell, Eroom’s Law and the decline in the productivity of biopharmaceutical R&D, in Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research.

[2] Diane Coyle & Lucy Hampton, Twenty-first century progress in computing.

[3] For a semi-technical account of how large language models work, I found this piece by Stephen Wolfram very helpful: What is ChatGPT doing … and why does it work?

Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try and make the small, successful, cities, bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research has resulted in an exemplary knowledge-based economy, where that investment in public R&D attracts in private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus around the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy. In the UK, by contrast, big second-tier cities, like Manchester, Birmingham, Leeds and Glasgow, underperform economically and in effect drag the economy down.

Let’s do the thought experiment where we imagine Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since the GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how successful economically they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by a little over 4%, to a bit more than £26,000 – still below the UK average.
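The arithmetic of this thought experiment is easy to check directly. Here is a minimal sketch using the rounded 2018 figures quoted above (the figures are carried over from the text, not fresh data):

```python
# Back-of-envelope check of the Cambridge / Greater Manchester thought
# experiment, using the rounded 2018 figures quoted in the text.
CAMBRIDGE_GVA_PER_HEAD = 49_000   # £ per person
GM_GVA_PER_HEAD = 25_000          # £ per person
CAMBRIDGE_POP = 126_000           # persons
GM_POP = 2_800_000                # persons

# Doubling Cambridge: each person moved from GM to Cambridge adds the
# difference in GVA per head to national output.
uplift = CAMBRIDGE_POP * (CAMBRIDGE_GVA_PER_HEAD - GM_GVA_PER_HEAD)
print(f"national GVA uplift: £{uplift / 1e9:.1f}bn")        # → £3.0bn

# The equivalent productivity rise in Greater Manchester: the same
# uplift spread over GM's 2.8 million people, as a percentage of its
# current GVA per head.
pct_rise = uplift / (GM_POP * GM_GVA_PER_HEAD) * 100
print(f"equivalent GM productivity rise: {pct_rise:.1f}%")  # → 4.3%
```

The point of the comparison survives any reasonable rounding: a small high-productivity city doubling in size and a large low-productivity city improving by a few percent deliver roughly the same national gain.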

That’s why raising the productivity of big cities matters – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – for example, in better public transport, in more R&D and support for innovative businesses, in providing the skills that innovative businesses need, and in addressing poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people those businesses need somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here is taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The data is for local authority areas on a workplace basis, but populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At the other limit, one could ask what would happen if you doubled the population of the whole county of Cambridgeshire, 650,000. As the GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result – it would increase GVA by £3.15 bn, the same as a 4.2% increase in GM’s productivity.

Of course, this poses another question – why the prosperity of Cambridge city doesn’t spill over very far into the rest of the county. Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

The UK’s crisis of economic growth

Everyone now agrees that the UK has a serious problem of economic growth – or lack of it – even if opinions differ about its causes, and what we should do about it. Here I’d like to set out the scale of the problem with plots of the key data.

My first plot shows real GDP since 1955. The break in the curve at the global financial crisis around 2007 is obvious. Before 2007 there were booms and busts – but the whole curve is well fit by a trend line representing 2.4% a year real growth. But after the 2008 recession, there was no return to the trend line. Growth was further interrupted by the covid pandemic, and the recovery from the pandemic has been slow. The UK’s GDP is now about 18% lower than it would have been if the economy had returned to its pre-recession trend line.


UK real GDP. Chained volume measure, base year 2019. ONS: 30 June 2023 release.

Total GDP is of particular interest to HM Treasury, as it is the overall size of the economy that determines the sustainability of the national debt. But you can grow an economy by increasing the size of the population, and, from the point of view of the sustainability of public services and a wider sense of prosperity, GDP per capita is a better measure.

My second plot shows real GDP per capita. GDP per person has risen less fast than total GDP, both before and after the global financial crisis, reflecting the fact that the UK’s population has been growing. Trend growth before the break was 2.1% per annum; once again, contrary to all previous experience in the post-war period, per capita GDP growth has never recovered to the pre-crisis trend line. The gap with the previous trend, 25%, or £10,900 per person, is perhaps the best measure of the UK’s lost prosperity.


UK real GDP per capita. Chained volume measure, base year 2019. ONS: 12 May 2023 release.

The most fundamental measure of the productive capacity of the economy is, perhaps, labour productivity, defined as GDP per hour worked. One can make GDP per capita grow by people working more hours, or by having more people enter the labour market. In the late 2010s, rising labour market participation was a significant factor in the growth of GDP per capita, but since the pandemic this effect has gone into reverse, with more people leaving the labour market, often due to long-term ill-health.

My third plot shows UK labour productivity. This shows the fundamental and obvious break in productivity performance that, in my view, underlies pretty much everything that’s wrong with the UK’s economy – and indeed its politics. As I discussed in more detail in my previous post, “When did the UK’s productivity slowdown begin?”, I increasingly suspect that this break predates the financial crisis – and indeed that the crisis is probably better thought of as an effect, rather than a cause, of a more fundamental downward shift in the UK’s capacity to generate economic growth.


UK labour productivity, whole economy. Chained volume measure, index (2019=100). ONS: 7 July 2023 release.

Talk of GDP growth and labour productivity may seem remote to many voters, but this economic stagnation has direct effects, not just on the affordability of public services, but on people’s wages. My final plot shows average weekly earnings, corrected for inflation. The picture is dismal – there has essentially been no rise in real wages for more than a decade. This, at root, is why the UK’s lack of economic growth is only going to grow in political salience.


UK Average weekly earnings, 2015 £s, corrected for inflation with CPI. ONS: 11 July 2023 release.

I’ve written a lot about the causes of the productivity slowdown and possible policy options to address it, reflecting my own perspectives on the importance of innovation and on redressing the UK’s regional economic imbalances. Here I just make two points.

On diagnosis, I think it’s really important to note the mid-2000s timing of the break in the productivity curve. Undoubtedly subsequent policy mistakes have made things worse, but I believe a fundamental analysis of the UK’s problems must recognise that the roots of the crisis go back a couple of decades.

On remedies, I think it should be obvious that if we carry on doing the same sorts of things in the same way, we can expect the same results. Token, sub-scale interventions will make no difference without a serious rethinking of the UK’s fundamental economic model.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
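The fitting procedure described above can be sketched as follows. This is a minimal illustration using scipy’s `curve_fit` on synthetic data standing in for the ONS series – the growth rates, noise level and true break year of 2005 used here are invented for the demonstration, not the actual fitted values quoted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, g1, g2, level, t_break):
    """Two exponential growth regimes, continuous at t_break.

    g1, g2 are annual growth rates before/after the break; `level`
    is the value of the series at the break point.
    """
    return np.where(
        t < t_break,
        level * np.exp(g1 * (t - t_break)),
        level * np.exp(g2 * (t - t_break)),
    )

# Synthetic quarterly series standing in for the ONS data
# (illustration only: true break at 2005, growth 2.2% then 0.5%).
rng = np.random.default_rng(0)
t = np.arange(1971.0, 2023.0, 0.25)
y = two_exp(t, 0.022, 0.005, 100.0, 2005.0) * (1 + rng.normal(0, 0.005, t.size))

# Four free parameters, as in the text; initial guesses eyeballed
# from the (synthetic) data.
popt, _ = curve_fit(two_exp, t, y, p0=[0.02, 0.01, 100.0, 2004.0])
g1, g2, level, t_break = popt
print(f"pre-break {g1:.3f}/yr, post-break {g2:.3f}/yr, break at {t_break:.1f}")
```

Because the model is constrained to be continuous at the break, the break year enters as an ordinary fourth parameter rather than as a separate change-point search.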


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals to the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the Global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits constraining the year of the break point, and calculated at each point the normalised chi-squares (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.


Normalised chi-squared – i.e. the sum of the squares of the differences between the productivity data and the two-exponential model, for fits where the time of break is constrained.

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.
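The constrained scan works like this: hold the break year fixed, refit the three remaining parameters, and record the normalised chi-squared at each candidate year. A minimal sketch, again on synthetic data with an assumed true break at 2005 rather than the real ONS series:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, g1, g2, level, t_break):
    """Two exponential growth regimes, continuous at t_break."""
    return np.where(t < t_break,
                    level * np.exp(g1 * (t - t_break)),
                    level * np.exp(g2 * (t - t_break)))

# Synthetic quarterly series with a true break at 2005 (illustration only).
rng = np.random.default_rng(1)
t = np.arange(1971.0, 2023.0, 0.25)
y = two_exp(t, 0.022, 0.005, 100.0, 2005.0) * (1 + rng.normal(0, 0.005, t.size))

def normalised_chi2(t_break):
    """Refit g1, g2 and the level with the break year held fixed."""
    def model(tt, g1, g2, level):
        return two_exp(tt, g1, g2, level, t_break)
    popt, _ = curve_fit(model, t, y, p0=[0.02, 0.01, 100.0])
    resid = y - model(t, *popt)
    # Sum of squared residuals divided by the number of data points.
    return np.sum(resid**2) / t.size

years = np.arange(1998, 2012)
chi2 = np.array([normalised_chi2(yr) for yr in years])
best = years[chi2.argmin()]
print(f"best constrained break year: {best}")
```

Plotting `chi2` against `years` gives the kind of smooth profile shown in the figure above, with the minimum picking out the preferred break year.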

Can we quantify this further and attach a probability distribution to the year of break? I don’t think so using this approach – we have no reason to suppose that the deviations between data and model are drawn from a Gaussian, which would be the assumption underlying traditional approaches to ascribing confidence limits to the fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those for further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000’s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.