Revisiting the UK’s nuclear AGR programme: 3. Where next with the UK’s nuclear new build programme? On rebuilding lost capabilities, and learning wider lessons

This is the third and concluding part of a series of blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects.

In the second post, “What led to the AGR decision? On nuclear physics – and nuclear weapons” I turned to consider the technical and political issues that led to this decision.

In this post, I bring the story up to date, discussing why post-2010 plans for new nuclear build have largely failed, and look to the future, with new ambitions for small modular reactors – and, ironically, a potential return to high temperature, gas cooled reactors that represent an evolution of the AGR.

Into the 2010’s and beyond – the UK’s failed Nuclear New Build programme

In the early 2010’s, the Coalition Government developed an ambitious plan to replace the UK’s ageing nuclear fleet with new light water reactors to be built on existing nuclear sites, involving four different designs from four different vendors. The French state energy company, EDF, was to build two of its next generation pressurised water reactors – the European Pressurised Reactor (EPR) – at Hinkley Point, and another two at Sizewell. The Chinese state nuclear corporation, CGN, would install two (or possibly three) of its own PWR designs at Bradwell. At Moorside, in Cumbria, Toshiba/Westinghouse would build three of its AP1000 PWRs. At Wylfa, in North Wales, Hitachi would build two Advanced Boiling Water Reactors, with another two ABWRs to be built at Oldbury. In total this would give 18 GW of new nuclear capacity, producing roughly double the output of the AGR fleet. In 2013, this programme formally got underway, with the announcement of a deal with EDF to deliver the first of these new plants, at Hinkley Point.

This programme has largely failed. A decade on, only one project is under construction – Hinkley Point C, where the best estimate for when the two EPRs will come into service is 2030. The cost of this 3.2 GW of capacity is now estimated at between £31 bn and £34 bn, in 2015 prices, compared to an original estimate of £20 bn. To put this into context, the last nuclear power station built in the UK, the PWR at Sizewell B, cost about £2 bn, in 1987 prices, for a 1.2 GW unit. Scaling this to the 3.2 GW capacity of the Hinkley Point project, and accounting for inflation, would correspond to about £12 bn in 2015 prices. Where has this near-threefold increase in real nuclear construction costs since Sizewell B come from? There are essentially two broad classes of reasons.
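
For readers who want to see the arithmetic spelled out, here is a minimal sketch of that scaling. The round-number costs are those quoted above; the inflation factor is my own assumption of roughly what UK price indices give for 1987 to 2015, not an official deflator.

```python
# Rough sketch of the Sizewell B comparison made above: £2 bn for a 1.2 GW unit
# in 1987 prices, scaled to Hinkley Point C's 3.2 GW and inflated to 2015 prices.
# The inflation factor is an assumed round number, not an official figure.

sizewell_b_cost_1987 = 2.0       # £bn, 1987 prices, ~1.2 GW
sizewell_b_capacity_gw = 1.2
hinkley_c_capacity_gw = 3.2
inflation_1987_to_2015 = 2.3     # assumed UK price-index factor

scaled_cost_2015 = (sizewell_b_cost_1987
                    * (hinkley_c_capacity_gw / sizewell_b_capacity_gw)
                    * inflation_1987_to_2015)
hinkley_c_estimate = (31 + 34) / 2   # £bn, 2015 prices, midpoint of current range

print(f"Sizewell B equivalent at 3.2 GW: ~£{scaled_cost_2015:.0f} bn (2015 prices)")
print(f"Hinkley C estimate / scaled Sizewell B: {hinkley_c_estimate / scaled_cost_2015:.1f}x")
```

On these round numbers the ratio comes out at roughly two and a half to three times – the increase that the rest of this section tries to explain.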

Firstly, more recent designs of pressurised water reactor, such as the EPR, or the Westinghouse AP1000, have a number of new safety features, to mitigate some of the fundamental weaknesses of the pressurised water reactor design, particularly its vulnerability to loss of coolant accidents. These new features include methods for passive cooling in the case of loss of power to the main cooling system, a “core catcher” system which contains molten core material in the event of a meltdown, and more robust containment systems, designed to resist, for example, an aircraft crashing into the reactor building. These new features all add unavoidable extra cost.

In addition to these unavoidable cost increases, some of the increase in construction cost must reflect a substantial real reduction in the UK’s ability to deliver a big, complex project like a nuclear power station. One would hope that, if subsequent power stations were built to the same design, with the construction teams kept in place, these costs could be brought down in the light of experience, the development of functional supply chains, and the creation of a skilled workforce.

A sister plant to Hinkley Point, at Sizewell, has received a nuclear site licence, but awaits a final investment decision. The capital for Hinkley Point C was provided entirely by its investors – the French state-owned energy company EDF and the Chinese state nuclear company CGN – in return for a guarantee of a fixed price for the electricity the plant generates over its first 35 years of operation. Thus the cost of the budget overrun is borne by the investors, not the UK government or UK consumers. The deal was constructed in a way that was very favourable to the investors, so there was some cushion there, but the experience of Hinkley Point C means that it’s now impossible to attract investors to build further power stations on these terms. The financing for Sizewell C, if it goes ahead, will involve more direct UK state investment, as well as payments to the company building it while the reactor is under construction. These up-front payments will be added to electricity consumers’ bills through the so-called “Regulated Asset Base” mechanism, reducing the cost to the company of borrowing money during the long construction period.

So, sixteen years on from the in-principle commitment to return to nuclear power, no plant has yet been completed, and the best that can be hoped for from the plan to build 18 GW of new capacity is that we will have 6.4 GW of capacity from Hinkley C, and Sizewell C, if the latter goes ahead.

Why has the UK’s nuclear new build programme failed so badly? The original plans were misconceived on many levels. The plan to involve the Chinese state so closely seemed naive at the time, and given the changed geopolitical environment since then, it now seems almost unbelievable that a UK government could countenance it. The idea of having multiple competing vendors and designs makes it much more difficult to drive costs down through “learning by doing”; the most successful build-outs of nuclear power – in France and Korea – have relied on “fleet build” – sequential installations of standardised designs. And the reliance on overseas investors and overseas designs meant that the UK had no control over the supply chain, so little of the high value work involved in the programme would benefit the UK economy.

At the root of this failure were the UK government’s unwise ideological commitments to privatised energy markets, making it resist any subsidies for nuclear power, and refuse to issue new government debt to pay for infrastructure. The legacy of the run-down of the UK’s civil nuclear programme in the 1990’s was a lack of significant UK government expertise in the area, making it an uninformed and naive customer, and a lack of an industry in the UK in a position to benefit from the expenditure.

Could there be another way? Since 2014, the UK government has expressed interest in the idea of small modular reactors (SMRs), and has given some support for design studies, with the UK company Rolls-Royce setting up a unit to commercialise them.

Back to the future – hopes for light water small modular reactors

There’s been a seemingly inexorable trend towards larger and larger pressurised water reactors – and, as we have seen at Hinkley C, that trend of increasing size has been accompanied by a dismal record of cost overruns and construction delays. There are, in principle, economies of scale in operating costs to be gained with very large units. But, as I’ve stressed above, the economics of nuclear power is dominated by the upfront capital cost of building reactors in the first place. If one, instead, built multiple smaller reactors, small enough for much of the construction to take place in factories, where manufacturing processes could be optimised over multiple units, one might hope to drive the costs down through “learning by doing”. This is the logic behind the enthusiasm for small modular reactors.
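
The logic of “learning by doing” can be put in rough quantitative terms with Wright’s law, in which unit cost falls by a fixed fraction for every doubling of cumulative production. The learning rates in the sketch below are purely illustrative assumptions, not figures from any SMR programme.

```python
# Illustrative Wright's-law learning curve: the cost of the nth unit falls by a
# fixed fraction (the learning rate) with every doubling of cumulative output.
# Learning rates here are assumptions for illustration only.
import math

def unit_cost(n, first_unit_cost, learning_rate):
    """Cost of the nth unit under Wright's law."""
    b = -math.log2(1.0 - learning_rate)   # experience exponent
    return first_unit_cost * n ** (-b)

for rate in (0.05, 0.10, 0.15):
    costs = [round(unit_cost(n, 1.0, rate), 2) for n in (1, 2, 4, 8, 16, 32)]
    print(f"learning rate {rate:.0%}: relative unit costs {costs}")
```

Even a modest learning rate compounds over a long production run – which is exactly what a factory-built product can have, and a handful of bespoke one-off projects cannot.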

There’s nothing new about small pressurised water reactors – by the standards of today’s power reactors, Admiral Rickover’s submarine reactors were tiny. Significantly, as I discussed above, the only remaining UK capability in nuclear reactors is to be found in Rolls-Royce, the company that makes reactors for the UK Navy’s submarines. But the design criteria for a submarine reactor and for a power reactor are very different – while the experience of designing and manufacturing submarine reactors will have some general value in the civil sector, the design of a civil small modular reactor will need to be very different to that of a submarine reactor.

Rolls-Royce is one of five companies currently bidding for a role in a UK civil SMR programme. Its design has passed the second of the three stages in the process of getting regulatory approval for the UK market. The Rolls-Royce proposal is for a 470 MWe pressurised water reactor, using conventional PWR fuel of low enrichment (in contrast to the very highly enriched fuel used in submarine reactors). The design is entirely new, though technically rather conservative.

A power output of 470 MWe is not, in fact, that small – it is very much in the range of the civil PWRs that were being built in the early 1970’s – compare, for example, the VVER-440 reactors built by the USSR and widely installed and still operating in the former USSR and Eastern Europe. The Rolls-Royce design, in contrast to the VVER-440s, does include the safety features found in the larger, recent PWR designs – much more robust containment, a “core catcher”, and passive cooling to cope with a loss of coolant accident – and it will incorporate much more modern materials, control systems, and manufacturing technologies.

There have been suggestions that SMRs could be sited more widely across the country, in towns and cities away from established nuclear sites. This isn’t the plan for any UK SMRs – they are in any case too large for this to make sense. Instead, the idea is to have multiple installations on existing licensed nuclear sites, such as Wylfa and Oldbury. The Rolls-Royce design is currently undergoing the third and final stage of its generic design approval, and the company is one of five potential vendors participating in a UK government competition for further support towards deployment of a light water small modular reactor in the UK.

The other entrants to the SMR competition are two well-established vendors of large light water reactors – Westinghouse and GE-Hitachi – and two more recent arrivals in the market, both from the USA – Holtec and NuScale. Since none of these companies has actually delivered an SMR, the decision will have to be made on judgements about capability: experience shows us that there can be no certainty about cost until one has been built. But, in making the decision, the UK government will need to decide how strongly to weight the need to rebuild UK industrial capacity and nuclear expertise against pure “value for money” criteria.

The Next Generation? Advanced Modular Reactors

The light water SMR represents an incremental update of a technology developed in the 1950’s, at a scale that was being widely deployed in the 1970’s. Is it possible to break out of the technological lock-in of the light water reactor, and explore more of the very wide design space of possible power reactors? That is the thinking behind the idea of developing an Advanced Modular Reactor – keeping the principle of relatively small scale and factory based modular construction, but using fundamentally different reactor designs, with different combinations of moderator and coolant, to achieve technical advantages over the light water reactor. In particular, it would be very attractive to have a reactor that ran at a significantly higher temperature than a light water reactor. A high temperature reactor would convert heat to electrical power more efficiently, and in addition it might be possible to use the heat directly to drive industrial processes – for example making hydrogen, as an energy vector and as a non-oil based feedstock for the petrochemical industry, including for synthetic hydrocarbons for zero carbon aviation fuel.
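
To see why outlet temperature matters so much, here is a rough comparison of ideal (Carnot) conversion efficiencies. The outlet temperatures are representative round numbers, and real steam or gas cycles achieve only a fraction of the Carnot limit, so this is an illustration of the trend rather than a statement of achievable plant efficiencies.

```python
# Carnot limits for heat-to-electricity conversion at representative reactor
# outlet temperatures. Real plant efficiencies are well below these limits;
# the temperatures are illustrative round numbers, not design data.

def carnot_limit(t_hot_c, t_cold_c=30.0):
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15   # convert to kelvin
    return 1.0 - t_cold / t_hot

for label, t_out in [("PWR, ~320 C coolant outlet", 320.0),
                     ("AGR, ~640 C gas outlet", 640.0),
                     ("High temperature gas reactor, ~800 C", 800.0)]:
    print(f"{label}: Carnot limit ~{carnot_limit(t_out):.0%}")
```

The same ordering shows up in practice: light water reactors convert heat to electricity at around a third efficiency, AGRs at around 40%, and the hope for high temperature reactors is to do better still, as well as supplying process heat directly.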

We are also seeing a resurgence of interest in reactors using unmoderated (fast) neutrons. This is partly motivated by the possibility of breeding fissile material, thus increasing the efficiency of fuel use, and partly by the fact that fast neutrons can induce fission in the higher actinides that are particularly problematic as contaminants of used nuclear fuel. There’s an attractive symmetry in the idea of using the UK’s very large stock of civil plutonium to “burn up” nuclear waste.

The UK government commissioned a technical assessment of potential candidates for an advanced modular reactor. This considered fast reactors cooled by liquid metals – both sodium and lead, as well as a gas-cooled fast reactor. Another intriguing possibility that has generated recent interest is the molten salt reactor, where the fissile material is dissolved in fluoride salts. Here the molten salt acts both as fuel and coolant. Reactor designs using a thermal neutron spectrum include an evolution of the boiling water reactor which uses water in the supercritical state. All of these designs have potential advantages, but the judgement of the study was that, of these potential designs, only the sodium fast reactor was potentially close enough to deployment to be worth considering.

However, the study made a clear recommendation in favour of a high temperature, gas cooled thermal neutron reactor. Here, as in the Advanced Gas Cooled Reactors, the moderator is graphite, but the coolant is helium rather than carbon dioxide. The main difference from the AGRs is that, in order to operate at higher temperatures, the fuel is presented as spherical particles around a millimetre in diameter, in which uranium oxide is coated with graphite and encapsulated in a high temperature resistant refractory ceramic such as silicon carbide. There is considerable worldwide experience in making this so-called tristructural isotropic (TRISO) fuel, which is able to withstand operating temperatures in the 700 – 850 °C range. Modifications of these fuel particles – for example using zirconium carbide as the outer layer – could permit operation at even higher temperatures, high enough to split water into hydrogen and oxygen through purely thermochemical processes. But this would need further research.

A Chronicle of Wasted Time

What’s striking about many of the proposals for an advanced modular reactor is that the concepts are not new. For example, work on sodium cooled fast reactors began in the UK in the 1950s, with a full scale prototype being commissioned in 1974. Lead cooled reactors were built in both the USA and the USSR. Molten salt reactors perhaps represent the most radical design departure, but even here, a working prototype was developed in Oak Ridge National Laboratory, USA, in the 1960s.

One of the reasons for the UK AMR Technical Assessment favouring the High Temperature Gas Reactor is that it builds on the UK’s experience of running a fleet of gas cooled, graphite moderated reactors – the AGRs. In fact, the UK, as part of an international collaboration, operated a prototype high temperature gas reactor – DRAGON – between 1964 and 1976. It was in this project that the TRISO fuel concept was developed; it has since been used in operational high temperature gas reactors in the USA, Germany, Japan and China.

At the peak of the 1970’s energy crisis, from 1974 to 1976, construction began on more than a hundred nuclear reactors across the world. Enthusiasm for nuclear power dwindled throughout the 1980’s, suppressed on the one hand by the experience of the nuclear accidents at Three Mile Island and Chernobyl, and on the other by an era of cheap and abundant fossil fuels. In the three years from 1994 to 1996, just three new reactors were begun worldwide. In this climate, there was no appetite for new approaches to nuclear power generation, technology development stagnated, and much tacit knowledge was lost.

Some concluding thoughts

In 1989, the UK’s Prime Minister Margaret Thatcher made an important speech to the United Nations highlighting the importance of climate change. It was her proposal that the work of the Intergovernmental Panel on Climate Change be extended beyond 1992, and that there should be binding protocols on the reduction of greenhouse gases; naturally, given her political perspective, she stressed the importance of continued economic growth, and of private sector industry in driving innovation. She reasserted her support for nuclear power, which she described as “the most environmentally safe form of energy”. As far as the UK was concerned, “we shall be looking more closely at the role of non-fossil fuel sources, including nuclear, in generating energy.”

Since Thatcher’s speech, another thousand billion tonnes of carbon dioxide have been released into the atmosphere from industry and the burning of fossil fuels, raising the atmospheric concentration of CO2 from 350 parts per million in 1989 to 427 ppm now. To be fair, one should recognise that the worldwide nuclear power industry has produced 390,000 tonnes of spent nuclear fuel, yielding 29,000 cubic metres of high level waste. This needs to be permanently disposed of in deep geological repositories, the first of which is nearing completion in Finland.

But even as Thatcher was speaking, the expansion of nuclear power was stalling. In the UK it was Thatcher’s own Chancellor of the Exchequer who had in effect killed nuclear power, through the lasting impact of his ideological commitment to privatised energy markets in an environment of cheap fossil fuels.

To be clear, what killed the UK’s nuclear energy programme was not a wrong choice of reactor design; it was a combination of high interest rates and low fossil fuel prices, all in the context of a worldwide retreat from nuclear new build, with a strong anti-nuclear movement driven by the accidents at Three Mile Island and Chernobyl, by the (correctly) perceived connection between civil nuclear power and nuclear weapons programmes, and by the problem of nuclear waste. The circumstances of the UK were particularly conducive to a continued dependence on fossil fuels; the discovery of North Sea oil and gas gave the UK, now a net energy exporter, a 15 year holiday from having to worry about the geopolitics of energy dependence.

But, for industrial nations, security of access to adequate energy supplies has always been an issue of existential importance, too often driving conflict and war. The Ukrainian war has given us a salutary reminder of the importance of energy supplies to geopolitics. Energy is never just another commodity.

The effective termination of the UK’s civil nuclear programme in the 1990’s undoubtedly saved money in the short term. That money could have been used for investment – future-proofing the UK’s infrastructure, supporting R&D to create new technologies. Political choices meant that it wasn’t – this was a period of falling public and private investment – and instead it supported consumption. But there were costs, in terms of lost capacity in both industry and the state. Technological regression is possible, and one could argue that this is what has happened in civil nuclear power. In the UK, we have felt the loss of that capacity now that policy has changed, very directly, in the failure of the last decade’s nuclear new build. Energy decisions should never just be about money.

Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, “On the uses of White Elephants”, I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular, the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in “High Technology” from post-war Britain, the decision to choose the Advanced Gas Cooled reactor design for the UK’s second generation reactor programme was forced through by “state technocrats, hugely influential scientists and engineers from the technical branches of the civil service”; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US designed Pressurised Water Reactor, making what Henderson argued, in his highly influential article “Two British Errors: Their Probable Size and Some Possible Lessons”, was one of the UK government’s biggest policy errors? To go some way to answering that, it’s necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy, and emitting a handful of extra neutrons that go on to cause more fission. The dominant fissile material in today’s civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to “enrich” the uranium, increasing the concentration of U-235 – typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons grade uranium is different in degree, not kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because the fission cross-section for fast neutrons is lower, such a reactor needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium.)

In a normal reactor, then, the purpose of the moderator is to slow down the neutrons. Moderators need to be made of a light element which doesn’t absorb neutrons too much. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) or deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more than deuterium does, so it’s less ideal as a moderator, but it is obviously much cheaper.
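
The differences between these candidate moderators can be made concrete with a standard piece of reactor physics: the average logarithmic energy loss per elastic collision, and hence the number of collisions needed to slow a fission neutron down to thermal energies. This is a textbook calculation that ignores absorption and chemical binding, so treat it as a sketch of why light nuclei moderate so effectively.

```python
# Mean logarithmic energy decrement (xi) per elastic collision for a nucleus of
# mass number A, and the number of collisions needed to slow a ~2 MeV fission
# neutron to thermal energy (~0.025 eV). Textbook formulas; absorption and
# chemical binding are ignored.
import math

def xi(A):
    """Mean logarithmic energy loss per collision for mass number A."""
    if A == 1:
        return 1.0
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

E_fission, E_thermal = 2.0e6, 0.025   # eV
for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    n_collisions = math.log(E_fission / E_thermal) / xi(A)
    print(f"{name}: xi = {xi(A):.2f}, ~{n_collisions:.0f} collisions to thermalise")
```

Hydrogen slows neutrons down in the fewest collisions, but, as noted above, it also absorbs more of them – which is why graphite and heavy water reactors can run on natural uranium while light water reactors need enriched fuel.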

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictate, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies are in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors (“light water” in contrast to “heavy water”, in which the normal hydrogen of ordinary water is replaced by hydrogen’s heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient for driving a steam turbine to generate electricity. But it’s not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.

These weren’t problems for the original use of pressurised water reactors (PWRs, the most common type of light water reactor; the other variety, the Boiling Water Reactor, similarly uses light water as both coolant and moderator, the difference being that steam is generated directly in the reactor core rather than in a secondary circuit). These were designed to power submarines, in a military context where enriched uranium was readily available, and where a compact size is a great advantage. But this compactness underlies the great weakness of light water reactors – their susceptibility to what’s known as a “loss of coolant accident”. The problem is that, if for some reason the flow of cooling water stops, then even if the chain reaction is quickly shut down (and this isn’t difficult to do), the fuel produces so much heat through radioactive decay that the fuel rods can melt, as happened at Three Mile Island. What’s worse, the alloy that the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.
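
The scale of that decay heat is worth making explicit. A rough engineering estimate is the Wigner-Way approximation; the reactor size and operating time below are assumptions for illustration, and the formula itself is only good to tens of percent.

```python
# Decay heat after shutdown, using the Wigner-Way approximation:
#   P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2)
# with t = time since shutdown and T = prior operating time, in seconds.
# A rough estimate only; reactor power and operating time are assumed values.

def decay_heat_fraction(t_seconds, operating_seconds=3.15e7):   # ~1 year at power
    return 0.0622 * (t_seconds ** -0.2 - (t_seconds + operating_seconds) ** -0.2)

thermal_power_mw = 3000.0   # assumed thermal output of a large PWR at full power
for label, t in [("10 seconds", 10.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 week", 604800.0)]:
    frac = decay_heat_fraction(t)
    print(f"{label} after shutdown: {frac:.2%} of full power, ~{frac * thermal_power_mw:.0f} MW")
```

Even a day after shutdown the core of a large reactor is still producing megawatts of heat, which is why the cooling system has to keep working long after the chain reaction has stopped.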

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard to make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it’s of sufficiently high purity). But being a solid, it needs a separate coolant. The UK’s Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that became the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the lower neutron absorption of the moderator and coolant, compared with light water, means that the core is less compact.

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of other nuclear nations, USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history sets a path-dependence to the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since it’s only the minority uranium-235 isotope that is fissile, it’s necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.

As the history of nuclear weapons is usually told, it is the physicists who are given the most prominent role. But there’s an argument that the crucial problems to be overcome were as much ones of chemical engineering as of physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any separation process needs to rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then to use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed “Tube Alloys”. In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature of it is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, arguably it represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI in Runcorn in the 1930’s, and came to fruition with the establishment of a full-scale gaseous diffusion plant in Capenhurst, Cheshire, in the late 40s and early 50s.

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before a formal decision to make a nuclear weapon was made, in 1947, the infrastructure for the UK’s own nuclear weapons programme had been put in place, reflecting the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK, to make plutonium. This reflected the experience of the Manhattan project, which had highlighted the potential of the plutonium route to a nuclear weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but it was easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239, but also some of another isotope, plutonium-240, formed when plutonium-239 itself captures a further neutron. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of a nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and low explosive yields. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the levels of plutonium-240 that can be tolerated in weapons grade plutonium, and these impose constraints on the design of reactors used to produce it.

The two initial UK plutonium production reactors were built in Sellafield – the Windscale Piles. The fuel was natural, unenriched, uranium (necessarily, because the uranium enrichment plant in Capenhurst had not yet been built), so this dictated the use of a graphite moderator. The reactors were air-cooled. The first reactor started operations in 1951, with the first plutonium produced in early 1952, enabling the UK’s first, successful, nuclear weapon test in October 1952.

But even as the UK’s first atom bomb test was successful, it was clear that the number of weapons the UK’s defense establishment was calling for would demand more plutonium than the Windscale piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second generation plutonium producing reactors also producing power. The design would use graphite moderation, as in the Windscale piles, and natural uranium as a fuel, but rather than being air-cooled, the coolant was high pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It’s important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature that this requirement dictated was the need to remove the fuel rods while the reactor was running; for weapons grade plutonium, the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production, this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons grade plutonium, using longer burn up times that produced plutonium with high concentrations of Pu-240. This so-called “civil plutonium” was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of this. Did the civil Magnox reactors produce any weapons grade plutonium? I don’t know, but I believe that there is no technical reason that would have prevented that.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron colliding with a fissile nucleus is smaller than for a slow neutron, so this means that a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both need to be in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is good enough as a coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.
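
A little neutron bookkeeping shows why the fast spectrum matters. Of the neutrons produced per neutron absorbed in the fuel (usually written η), one is needed to keep the chain reaction going, some are inevitably lost to leakage and capture in structural materials, and only the remainder is available to convert fertile material into new fuel. The η values below are representative textbook numbers and the loss fraction is an assumption, so this is a sketch of the argument rather than a design calculation.

```python
# Simple neutron bookkeeping for breeding: of eta neutrons produced per neutron
# absorbed in the fuel, one sustains the chain reaction, an assumed fraction is
# lost, and the rest can convert fertile U-238 into fissile Pu-239.
# Eta values are representative textbook numbers, not design data.

losses = 0.3   # assumed neutrons lost per absorption (leakage, structural capture)
for fuel, spectrum, eta in [("U-235", "thermal", 2.07),
                            ("Pu-239", "thermal", 2.11),
                            ("Pu-239", "fast", 2.45)]:
    conversion_ratio = eta - 1.0 - losses   # new fissile nuclei per nucleus consumed
    verdict = "breeding possible" if conversion_ratio > 1.0 else "no net breeding"
    print(f"{fuel}, {spectrum} spectrum: eta = {eta:.2f}, "
          f"conversion ratio ~{conversion_ratio:.2f} -> {verdict}")
```

Only in the fast spectrum, with plutonium fuel, does the conversion ratio comfortably exceed one – which is why breeder designs have almost always been fast reactors.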

In the 1940s and 50s, uranium was scarce relative to the demands of weapons programmes, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK’s nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, and completed by 1959. Using this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a design electrical output of 250 MW.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for a breeder reactor even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn’t need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950’s, and the UK’s first nuclear powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn’t use UK nuclear technology; instead it was powered by a reactor of US design, a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy was an abrasive and driven figure, Admiral Rickover. Rickover ran the US Navy’s project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940’s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid sodium cooled, beryllium moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the best (and particularly, the most reliable) design, and has been subsequently used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK’s effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK’s Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company to company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government to government agreement. The second was that it was a one-off – Rolls-Royce would have a licence to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK’s naval reactor programme, in which Rolls-Royce has developed further iterations of the naval PWR design, and the rest of the UK’s national nuclear enterprise.

Rickover’s rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower’s 1953 “Atoms for Peace” speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station based on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it established Westinghouse’s position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. For a submarine, reactors need to be highly compact, self-contained, and able to go for long periods without being refuelled, all of which dictates the use of highly enriched – essentially weapons grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960’s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it’s a profound mistake to suppose that choosing between different technological approaches to nuclear power is simply a question of picking from a menu of options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which have been neglected. It’s this that establishes the base of technological capability and underpinning knowledge – both codified and tacit – that can be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we’ve seen, the scope of that system depends not just on a nation’s ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just four years were remarkable – but they were achieved at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive, clean-up of the nuclear waste left over at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect of this was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three day week; the sense of national chaos contributed to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be inexorably rising. For energy importers – and the UK was still importing around half its energy in the early 1970’s – security of energy supplies suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that’s been built already – uranium enrichment plants, reprocessing facilities, and so on. The nature of the stock of knowledge acquired in past R&D programmes will be determined by the problems that emerged during those programmes, so starting work on a different class of reactors would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise that have been developed in past programmes – whether in the understanding of reactor physics needed to run reactors efficiently, or in the construction and manufacturing techniques needed to build them cheaply and effectively – will be specific to the particular technologies that have been implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the first class of power reactor ever developed – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a different application – powering submarines – to the one it ended up being widely implemented for – generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, within the UK programme there were worries about the safety of PWRs. These were particularly forcefully expressed by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was government Chief Scientific Advisor between 1971 and 1974. After Three Mile Island and Fukushima, one might reflect that these worries were not entirely misplaced.

Later in the programme, even the AGR’s proponents might have agreed that the early AGR building programme hadn’t gone well, but there was a view that the teething problems had been more or less ironed out. I haven’t managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in Parliament that Torness was on track to be delivered within a budget of £1.1 bn (1980 prices), which is not a great deal different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK nuclear establishment’s preference for the AGR over the PWR came from a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK’s nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear powered future. But that future was foreclosed by the final run-down of the UK’s nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK’s faltering post 2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.

Research and Innovation in a Labour government

Above all, growth. The new government knows that none of its ambitions will be achievable without a recovery from the last decade and a half’s economic stagnation. Everything will be judged by the contribution it can make to that goal, and research and innovation will be no exception.

The immediate shadow that lies over UK public sector research and innovation is the university funding crisis. The UK’s public R&D system is dependent on universities to an extent that’s unusual by international standards, and university research depends on a substantial cross-subsidy, largely from overseas student fees, which amounted to £4.9 bn in 2020. The crisis in HE is on Sue Gray’s list of unexploded bombs for the new government to deal with.

But it’s vital for HE to be perceived, not just as a problem to be fixed, but as central to the need to get the economy growing again. This is the first of the new Government’s missions, as described in the Manifesto: “Kickstart economic growth to secure the highest sustained growth in the G7 – with good jobs and productivity growth in every part of the country making everyone, not just a few, better off.”

To understand how the government intends to go about this, we need to go back to the Mais Lecture, given this March by the new Chancellor of the Exchequer. As I discussed in an earlier post, the questions Reeves poses in her Mais Lecture are the following: “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”.

Reeves calls her approach to answering these questions “securonomics”; this owes much to what the US economist Dani Rodrik calls “productivism”. At the centre of this will be an industrial strategy, with both a sector focus and a regional focus.

The sector focus is familiar, supporting areas of UK comparative advantage: “our approach will back what makes Britain great: our excellent research institutions, professional services, advanced manufacturing, and creative industries”.

The regional aspect aims to develop clusters, seeking to unlock the potential agglomeration benefits in our underperforming big cities, and connects to a wider agenda of further English regional devolution, building on the Mayoral Combined Authority model.

There is “a new statutory requirement for Local Growth Plans that cover towns and cities across the country. Local leaders will work with major employers, universities, colleges, and industry bodies to produce long-term plans that identify growth sectors and put in place the programmes and infrastructure they need to thrive. These will align with our national industrial strategy.”

Universities need to be at the heart of this. The pressure will be on them, not just to produce more spin-outs and work with industry, but also to support the diffusion of innovation across their regional economies. There are no promises of extra money for science – instead, as in other areas, the implicit suggestion seems to be that policy stability itself will yield better value:

“Labour will scrap short funding cycles for key R&D institutions in favour of ten-year budgets that allow meaningful partnerships with industry to keep the UK at the forefront of global innovation. We will work with universities to support spinouts; and work with industry to ensure start-ups have the access to finance they need to grow. We will also simplify the procurement process to support innovation and reduce micromanagement with a mission-driven approach.”

Beyond the government’s growth imperative, its priorities are defined by its other four missions: clean energy, tackling crime, widening opportunities for people, and rebuilding the healthcare system. Research and Innovation, and the HE sector more widely, need to play a central role in at least three of these missions.

A commitment to cheap, zero carbon electricity by 2030 is a very stretching target, despite some advantages: “our long coast-line, high winds, shallow waters, universities, and skilled offshore workforce combined with our extensive technological and engineering capabilities.” Here the “strategy” part of industrial strategy is going to be vital – getting the right balance between the technologies that the UK develops itself and those it imports. The call is to double onshore wind, triple solar, and quadruple offshore wind. There is a commitment to new nuclear build, including small modular reactors, and recognition of the importance of upgrading the grid and improving home insulation. R&D will need to be focused to support renewables, new nuclear and grid upgrades.

In health, commitments to address health inequalities imply higher priority on prevention, with high hopes placed on data and AI: “the revolution taking place in data and life sciences has the potential to transform our nation’s healthcare. The Covid-19 pandemic showed how a strong mission-driven industrial strategy, involving government partnering with industry and academia, could turn the tide on a pandemic. This is the approach we will take in government.” This statement gains more significance following the appointment of Sir Patrick Vallance as Science Minister, as I’ll discuss below.

There’s long been a tension between the high hopes that a succession of UK governments have placed on a strong life sciences sector, and a perception that the NHS is an organisation that’s not particularly innovative. So it’s unsurprising to read that “as part of Labour’s life sciences plan, we will develop an NHS innovation and adoption strategy in England. This will include a plan for procurement, giving a clearer route to get products into the NHS coupled with reformed incentive structures to drive innovation and faster regulatory approval for new technology and medicines.” I am sure this is correct in principle, and many such opportunities exist, but it will be difficult to take this forward until the immediate funding crisis faced by most parts of the NHS is overcome.

The new government’s fourth mission is to “break down barriers to opportunity”. A big part of this is to reform post-16 education (in England, one should add, as education is a devolved responsibility in Wales, Scotland and Northern Ireland). Universities will need to get used to there being more focus on the neglected FE sector, from which specialised “Technical Excellence Colleges” will be created, and should ready themselves for a more collaborative relationship with their neighbouring FE colleges: “to better integrate further and higher education, and ensure high-quality teaching, Labour’s post-16 skills strategy will set out the role for different providers, and how students can move between institutions, as well as strengthening regulation.”

There’s one important priority that wasn’t in the original list of five missions, but can’t now be ignored: the threatening geopolitical situation inevitably means a renewed focus on defence. The new government is explicit about the role of the defence industrial base in this:

“Strengthening Britain’s security requires a long-term partnership with our domestic defence industry. Labour will bring forward a defence industrial strategy aligning our security and economic priorities. We will ensure a strong defence sector and resilient supply chains, including steel, across the whole of the UK. We will establish long-term partnerships between business and government, promote innovation, and improve resilience.”

As the MoD budget grows, defence R&D will grow in importance. It’s perhaps not widely enough appreciated how much, following the end of the Cold War, the major focus of the UK’s research effort switched from defence to health and life sciences, so this will represent a partial turn-around of a decades-long trend.

How is the new government actually going to achieve these ambitious goals? Much stock is being placed on “mission led government”, in which Whitehall departments effortlessly collaborate to deliver goals which cross the boundaries between departments. On its first day, the new government made one unexpected announcement, which I think offers a clue as to how serious it is about this. That was the appointment of Sir Patrick Vallance as Science Minister.

Vallance, of course, has an outstanding background to be a Science Minister, as a highly successful researcher who then led R&D at one of the UK’s few world-class innovation-led multinationals, GlaxoSmithKline. But, in the context of the new government’s ambitions, I think his most significant achievement, as Government Chief Scientific Advisor during the Covid pandemic, was to set up the Vaccine Task Force. If that’s going to be a model for how “mission led government” might work, we might see some exciting and rapid developments.

Research and innovation has a huge part to play in addressing the pressing challenges that face the new government, which necessarily cross Whitehall fiefdoms. The ambition in setting up the Department of Science, Innovation and Technology was to have a department coordinating science and innovation across the whole of government; it’s difficult to imagine anyone better qualified to realise this ambition than Vallance.

Quotations from the 2024 Labour Manifesto.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 60’s, to choose, as the second generation of nuclear power plants, the UK’s home developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but one also needs to understand the historical context – and in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Why haven’t we got any White Elephants? On the contrary, projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running; the reactors produce about 60 TWh a year of non-intermittent, low carbon power; in 2019 their output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at how the programme appears more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, it would correspond to a bit less than £14bn. A fairer comparison perhaps would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator is the net cost to the UK of energy price support following the gas price spike caused by the Ukraine invasion – £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power dramatically changed. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. There, a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, led to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to these kinds of accidents. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, in which the UK had built light water reactors instead of AGRs, was based on an estimate for the cost of light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
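
To make the arithmetic behind that counterfactual explicit – the inflation multiplier here is my own rough assumption, not a figure from Henderson’s paper:

```python
# Back-of-envelope check of the counterfactual cost of Sizewell B.
# The 1973-to-1987 inflation multiplier is an assumed round number.
capacity_kw = 1.2e6                # Sizewell B, 1.2 GW
cost_per_kw_1973 = 132             # Henderson's light water reactor estimate, £/kW at 1973 prices
cost_1973 = capacity_kw * cost_per_kw_1973               # ~£158m at 1973 prices

inflation_1973_to_1987 = 5.0       # assumed rough rise in the general price level over the period
cost_1987 = cost_1973 * inflation_1973_to_1987
print(f"~£{cost_1987 / 1e6:.0f}m at 1987 prices")        # ~£790m, i.e. roughly the £800m quoted
```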

Sizewell B was a first of a kind reactor, so one would expect subsequent reactors built to the same design to reduce in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990’s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade natural gas’s share of electricity generation capacity rose from 1.3% to nearly 30% by 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquefied natural gas carried by tanker from huge fields in, for example, the Middle East. So for a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike that began in 2021, intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in When Missions Fail: Lessons in “High Technology” from post-war Britain (https://www.bsg.ox.ac.uk/sites/default/files/2023-12/BSG-WP–2023-056-When-Missions-Fail.pdf). His key lesson is that the decision-making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

All things begin & end on Albion’s Rocky Druid shore

I’m 63 now, so the idea that I should still be taking part in “adventure sports” is perhaps a little ridiculous. Nonetheless, rock climbing has been so much a part of my life for so long that I still try and get out, generally for easy short climbs on the gritstone cliffs near my home in Derbyshire. There are things that I’ve done in my younger days that I have put behind me without much regret – I won’t be climbing frozen waterfalls in New England again, or winter climbing in the Lakes or Scotland. I do miss snowy mountains a bit, though I know I will never be a serious alpinist. But there’s one variety of climbing that I think is very special, that I look back on with real pleasure, and that I think maybe I should try to involve myself in once again, even if at a much lower level than before. That is rock climbing on Britain’s sea-cliffs, a branch of the pastime with its own unique atmosphere and set of demands.

I started rock climbing seriously when I was 14 or so; at that time it was my family’s habit to spend every summer in St Davids, Pembrokeshire, near where my mother had grown up. The coastline of Pembrokeshire is spectacular – a succession of coves, headlands, and cliffs, pounded by the open Atlantic waves. At the time, the idea of climbing the cliffs of Pembrokeshire was in its infancy. Rock climbing on the granite cliffs of Cornwall was well-established, and the counter-cultural climbing scene of North Wales had created hard and serious routes on the sea-cliffs of Gogarth, on Anglesey. But what little climbing there was on the cliffs of Pembrokeshire was recorded in a slim guidebook by Colin Mortlock, published in 1974, not by the Climbers Club or any of the establishment sources of climbing information, but by a local publishing house more associated with postcards and wildlife guides than rock climbing.

The first ever guidebook to climbing in Pembrokeshire, by Colin Mortlock. Just 150 pages long (the current guidebook runs to 5 volumes), it often failed in the basic function of telling one where the routes go (and, in one or two cases, even where the cliffs actually are), but was a source of great inspiration. The cover photograph is of Colin Mortlock himself climbing “Red Wall” at Porthclais.

My imagination was seized by the cover of this book, showing Mortlock himself powering up a sheer, apparently overhanging, wall above a boiling sea. The route was called “Red Wall”, and was graded “severe” – that was the kind of climbing I wanted to do. In 1977 I persuaded my school friend and climbing partner Mark Miller to come and stay with my family in Pembrokeshire so we could give this sea-cliff climbing business a try.

Mark and I were, by that time, reasonably confident climbers up to grades of severe, with some level of basic competence at rope work and protection, and in possession of the basic gear – ropes, harnesses, the nuts and slings that were state of the art at the time. We studied the guidebook and looked at the picture. It looked steep – but surely, if it were that overhanging, the holds must be good. We’d done routes like that on the gritstone cliffs of Derbyshire, we thought – tough routes for the grade, but within our grasp.

But we’d misjudged it. The cover picture turned out to be wildly tilted; the route is in fact an off-vertical slab, maybe 70 degrees or so, blessed with perfect sharp, incut finger holds. We romped up it. Severe? It would barely be V. Diff in the Peak District! But it remains one of my favourite routes – I’ve probably done it twenty times since then. Few routes capture so completely the joy of sea-cliff climbing at its friendliest, with easy access to the base of the route, clear blue water sloshing gently below one’s feet, lichen and rock samphire on beautiful pink rock, footholds and handholds in all the right places.

Mark and I got better and more experienced at climbing. By the time we left school I was a confident leader of climbs VS in grade, tentatively trying things that were a bit harder. Mark had by force of will converted himself into an extreme leader, with a specialism in bold, protection-less slabs. In the summer before I went to University, in 1980, we persuaded a relatively new friend, Peter Carter, to come with us to Cornwall and Devon. Or, more accurately, we persuaded Peter to take us there – recently discharged from the Royal Marines, he had the unique asset of owning, and knowing how to drive, a small van.

Our trip started at the very tip of Cornwall – on the granite cliffs of West Penwith. We did some fine climbs on the traditional cliffs of solid granite, like Bosigran and Chair Ladder. But it was on the return trip that our sea-cliff horizons were truly expanded. A bleak headland near the north coast village of St Agnes is known to climbers as Carn Gowla, with three hundred foot cliffs falling vertically into the deep sea.

The route we chose was an HVS called Mercury. The first problem was getting to the base of the route – the only way down was to abseil. We tied two 150 ft 9 mm ropes together, anchored them to a good thread in the slope above the groove, and set off down. At the bottom, a ledge about twenty feet above the waves, there’s a huge sense of commitment – the easiest way out is the route Mercury, all 270 ft of it. In the end, the technical difficulties weren’t beyond us, though the exposure, commitment, and the dubious, vegetated rock were very far from the friendly crags of the Peak District.

Another highlight of that trip was my first encounter with the spectacular scenery on the stretch of coast north from Bude to Hartland. Known as the Culm Coast, it’s composed of thinly bedded sandstones and shales that have been dramatically folded, and then sliced abruptly by the sea. Not only is it the most dramatic coastal scenery in England, it also provides a variety of great climbs, ranging from short and solid sea-washed slabs to 400 foot climbs, almost of mountain scale, on rock whose solidity is not above suspicion. I’ve returned to it again and again.

There’s something uniquely memorable, I think, about sea cliff climbs, and even decades on I vividly remember the climbs and the people I did them with. On the Culm Coast there’s a 400 ft climb called Wrecker’s Slab. The first time I did it was with my college friend Jonathan Sharp, I think just a few months before he tragically died in the Alps. It wasn’t hard, but its scale and looseness gave it quite a reputation, well-deserved.

In Pembrokeshire, amongst the cliffs north of St Davids, Trwyn Llwyd is a fabulous buttress of solid gabbro. I did Barad with Sean Smith; its crux felt like a VS gritstone jamming crack – 200 feet directly above the sea. Craig Coetan is a much easier crag, above a little inlet which attracts curious seals. In my teenage years I explored these gentle slabs with my father.

Back on the Culm Coast, the hardest route I did was with my old and much missed friend, the late Mark Miller. Blackchurch is a crag with a sinister atmosphere that entirely lives up to its name; Archtempter is one of the classics of the main cliff – a soaring groove line now graded E3. Mark did the first pitch, thin and loose, and I led the widening crack above through an overhang. At the top, we so far forgot ourselves as to shake hands.

Blackchurch, North Devon. The obvious groove is the line of “Archtempter”; the (just visible) climbers are Mark Miller at the halfway stance, and above him the author, just about to enter the overhanging section. It’s not a great photo, but it does convey something of the demonic atmosphere of this crag.

Looking for new routes provides another, exploratory dimension to sea-cliff climbing; I had many memorable trips with Brian Davison, who believed that the purpose of guide books was to tell you where not to climb. In the Lleyn Peninsula, we did one of the earliest routes up Craig Dorys; we called it “Error of Judgement”. As the guidebook says: “It certainly was, an appallingly loose line”.

In North Pembrokeshire, Penbwchdy is a long headland with a run of big, vegetated cliffs. I’d been there with Jonathan Sharp but failed to get up anything – we’d scrambled down a grassy slope and done a 150 ft abseil to sea level, only to find that the way forward was to cross a deep but narrow inlet on the remains of a wrecked ship. Not relishing the idea of balancing across on an old propeller shaft, over which waves were breaking, we went back the way we came.

The great pioneer of sea-cliff climbing, Pat Littlejohn, had done a route at the far end of Penbwchdy, on a section of cliff he called New World Wall, accessed by a long low-tide sea level traverse after the shipwreck crossing that Jonathan and I had balked at. The route, Terranova, was done in 1974, and I suspect it hadn’t had a lot of repeats, given the awkward approach. But Brian and I later found another way down to New World Wall, with some careful route finding and a final scramble. Brian led a new route up this, which he called “New Dawn Fades”, at E4, a good onsight lead up a steep groove.

The best new route I ever did was on the sandstone cliffs south of St Davids, a couple of miles east of Porthclais. A pamphlet describing new routes reported a new crag on the headland near Caerfai, with an HVS called “Amorican”, now a classic and often repeated route. I kicked myself – I’d walked past that crag innumerable times but never noticed its potential. But to the right of the crack of Amorican is a sweeping concave slab of sandstone, unclimbed in 1984. Climbing with Mary Rack, I found a circuitous line; a thin sloping crack demanded 20 ft of intricate and precise footwork, with only tiny holds for the hands. I called it “Uncertain Smile”.

Sea cliff climbing undoubtedly has more danger than the landward variety – loose rock, tidal conditions, big waves. One experience in Cornwall was the closest I have (knowingly) come to dying. My climbing partner was José Luis Bermudez; we were staying at the Climbers Club hut at Bosigran, where I remember us being hubristically superior, as experienced climbers and successful young academics, to the party of university students we were sharing the hut with.

The next day we went to Fox Promontory, a slightly obscure granite headland on the south side of the West Penwith peninsula. We scrambled down above the March seas to a sloping platform, maybe 20 feet above the level of the sea. But freak waves do exist; I remember seeing a wall of water coming towards me, then a huge weight knocking me down and dragging me downwards across the rough granite. José had been standing on a higher level than me; I felt him grab me as I came to a stop a few feet above the sea. We hastened to climb out, me soaking wet, nearly hypothermic by the time we got to the top of the route, with the whole of the front of my body grazed and bloody, feeling like I had been dragged across a cheese-grater.

At some point in my 30s I realised I no longer had the bottle to do big, serious sea-cliff routes. One memorable day out with Brian Davison probably confirmed this; he had his eye on an unclimbed sea-stack close to Fishguard – Needle Rock. But to get to it we had to get to the bottom of a 200 foot cliff, also unclimbed. We abseiled as far as a 150 ft rope would take us. We had to descend the last 50 ft using the ropes we were going to climb with, so when we got to the gap between the cliff and the needle we had to pull them down after us. Now we had to get up the sea-stack and down again before the route back to the main cliff was cut off by the tide, and then find a new route on-sight to get back up the mainland cliff.

In the end it was fine – Brian led a good route up the sea-stack, which he named “Needless to say”. And there was a relatively straightforward route up the main cliff to be found, at about VS in grade. Brian is a superbly strong and resourceful climber; there is no-one I would trust more to get out of a sticky situation, and there really was nothing to worry about, but I could feel myself losing my cool and succumbing to anxiety and fear.

I think those routes were pretty much the last serious, extreme routes I’ve done on sea-cliffs. But sea-cliff climbing doesn’t always have to be like that. There is still joy to be had in gentle routes above quiet seas. And there is no better example of that than the route I started this piece with, Red Wall at Porthclais, still one of my favourite routes anywhere.

The gentler side of sea-cliff climbing. The author on his umpteenth ascent of Red Wall, Porthclais, near St David’s; this picture gives a much more accurate sense of the character of the route than the cover picture of the Mortlock guide!

How much can artificial intelligence and machine learning accelerate polymer science?

I’ve been at the annual High Polymer Research Group meeting at Pott Shrigley this week; this year it had the very timely theme “Polymers in the age of data”. Some great talks have really brought home to me both the promise of machine learning and laboratory automation in polymer science, as well as some of the practical barriers. Given the general interest in accelerated materials discovery using artificial intelligence, it’s interesting to focus on this specific class of materials to get a sense of the promise – and the pitfalls – of these techniques.

Debra Audis, from the USA’s National Institute of Standards and Technology, started the meeting off with a great talk on how to use machine learning to make predictions of polymer properties given information about molecular structure. She described three difficulties for machine learning – availability of enough reliable data, the problem of extrapolation outside the parameter space of the training set, and the problem of explainability.

A striking feature of Debra’s talk for me was its exploration of the interaction between old-fashioned theory and new-fangled machine learning (ML). This goes in two directions – on the one hand, Debra demonstrated that incorporating knowledge from theory can greatly speed up the training of an ML model, as well as improving its ability to extrapolate beyond the training set. On the other hand, given a trained ML model – essentially a black box of weights for your neural network – Debra emphasised the value of symbolic regression to convert the black box into a closed-form expression built from simple functional forms, of the kind a theorist would hope to derive from physical principles, providing something a scientist might recognise as an explanation of the regularities that the machine learning model encapsulates.
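
To make this concrete, here is a minimal sketch – entirely my own toy example, with made-up data, not Debra’s methods or code – of how building a known theoretical form into a model (here, the familiar scaling of melt viscosity with roughly the 3.4 power of molecular weight) helps it extrapolate beyond its training set:

```python
# Toy illustration of theory-informed ML (all data and constants are made up).
# A model that knows the theoretical power-law form (linear in log-log space)
# extrapolates far better than a naive polynomial fit to the raw data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
M_train = np.logspace(4, 5, 20)                                   # molecular weights in the training set
eta_train = 1e-10 * M_train**3.4 * rng.lognormal(0, 0.05, 20)     # "measured" viscosities, ~M^3.4 plus noise

# Theory-informed model: fit log(eta) against log(M)
theory = LinearRegression().fit(np.log(M_train)[:, None], np.log(eta_train))

# Naive model: fit eta directly against M and M^2
naive = LinearRegression().fit(np.column_stack([M_train, M_train**2]), eta_train)

M_new = np.array([5e5])                                           # well outside the training range
print("true:", 1e-10 * M_new**3.4)
print("theory-informed:", np.exp(theory.predict(np.log(M_new)[:, None])))
print("naive:", naive.predict(np.column_stack([M_new, M_new**2])))
```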

But any machine learning model needs data – lots of data – so where does that data come from? One answer is to look at the records of experiments done in the past – the huge corpus of experimental data contained within the scientific literature. Jacqui Cole from Cambridge has developed software to extract numerical data and chemical reaction schemes, and to analyse images, from the scientific literature. For specific classes of (non-polymeric) materials she’s been able to create data sets with thousands of entries, using automated natural language processing to extract some of the contextual information that makes the data useful. Jacqui conceded that polymeric materials are particularly challenging for this approach; they have complex properties that are difficult to pin down to a single number, and what to the outsider may seem to be a single material (polyethylene, for example) may actually be a category that encompasses molecules with a wide variety of subtle variations arising from different synthesis methods and reaction conditions. And Debra and Jacqui shared some sighs of exasperation at the horribly inconsistent naming conventions used by polymer science researchers.

My suspicion on this (informed a little by the outcomes of a large scale collaboration with a multinational materials company that I’ve been part of over the last five years) is that the limitations of existing data sets mean that the full potential of machine learning will only be unlocked by the production of new, large scale datasets designed specifically for the problem in hand. For most functional materials the parameter space to be explored is vast and multidimensional, so considerable thought needs to be given to how best to sample this parameter space to provide the training data that a good machine learning model needs. In some circumstances theory can help here – Kim Jelfs from Imperial described an approach where the outputs from very sophisticated, compute intensive theoretical models were used to train a ML model that could then interpolate properties at much lower compute cost. But we will always need to connect to the physical world and make some stuff.
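
The surrogate-model idea can be sketched in a few lines. In the toy below, a cheap analytic function stands in for the expensive simulation, and a Gaussian process learns to interpolate from a handful of “runs” – a generic illustration, not the actual workflow described in the talk:

```python
# Toy surrogate model: train a Gaussian process on a few "expensive" evaluations,
# then query it cheaply anywhere else, with an uncertainty estimate thrown in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # stand-in for a compute-intensive physics calculation
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 2, 8)[:, None]          # only eight affordable simulation runs
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(X_train, y_train)

X_query = np.linspace(0, 2, 200)[:, None]
y_mean, y_std = surrogate.predict(X_query, return_std=True)   # cheap interpolation + uncertainty
```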

This means we will need automated chemical synthesis – the ability to synthesise many different materials with systematic variation of the reactants and reaction conditions, and then rapidly determine the properties of this library of materials. How do you automate a synthetic chemistry lab? Currently, a synthesis laboratory consists of a human measuring out materials, setting up the right reaction conditions, then analysing and purifying the products, finally determining their properties. There’s a fundamental choice here – you can automate the glassware, or automate the researcher. In the UK, Lee Cronin at Glasgow (not at the meeting) has been a pioneer of the former approach, while Andy Cooper at Liverpool has championed the latter. Andy’s approach involves using commercial industrial robots to carry out the tasks a human researcher would do, while using minimally adapted synthesis and analytical equipment. His argument in favour of this approach is essentially an economic one – the world market for general purpose industrial robots is huge, leading to substantial falls in price, while custom built automated chemistry labs represent a smaller market, so one should expect slower progress and higher prices.

Some aspects of automating the equipment are already commercially available. Automatic liquid handling systems are widely available, allowing one, for example, to pipette reactants into multiwell plates, so if one’s synthesis isn’t sensitive to air one can use this approach to do combinatorial chemistry. Adam Gormley from Rutgers described using this approach, with an oxygen-tolerant adaptation of reversible addition−fragmentation chain-transfer polymerisation (RAFT), to produce libraries of copolymers with varying molecular weight and composition. Another approach uses flow chemistry, in which reactions take place not in a fixed piece of glassware, but as the solvents containing the reactants travel down pipes, as described by Tanja Junkers from Monash, and Nick Warren from Leeds. This approach allows in-line reaction monitoring, so it’s possible to build in a feedback loop, adjusting the ingredients and reaction conditions on the fly in response to what is being produced.

It seems to me, as a non-chemist, that there is still a lot of specific work to be done to adapt the automation approach to any particular synthetic method, so we are still some way from a universal synthesis machine. Andy Cooper’s talk title perhaps alluded to this: “The mobile robotic polymer chemist: nice, but does it do RAFT?” This may be a chemist’s joke.

But whatever approach one uses to produce a library of molecules with different characteristics, and to analyse their properties, there remains the question of how to sample what is likely to be a huge parameter space in order to provide the most effective training set for machine learning. We were reminded by the odd heckle from a very distinguished industrial scientist in the audience that there is a very classical body of theory underpinning this kind of experimental strategy – the Design of Experiments methodology. In these approaches, one selects the set of parameter combinations that most effectively spans parameter space.
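
For anyone who hasn’t met it, a classical space-filling design can be generated in a few lines – here a Latin hypercube over three hypothetical reaction parameters, with names and ranges chosen purely for illustration:

```python
# A space-filling Design of Experiments: a Latin hypercube over three reaction
# parameters. The parameter names and ranges are purely illustrative.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_points = sampler.random(n=20)                       # 20 points spread evenly over the unit cube

# scale to physical ranges: temperature (deg C), monomer fraction, reaction time (h)
lower, upper = [40, 0.1, 0.5], [90, 0.9, 24]
design = qmc.scale(unit_points, lower, upper)
print(design[:3])                                        # first three experiments to run
```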

But an automated laboratory offers the possibility of adapting the sampling strategy in response to the results as one gets them. Kim Jelfs set out the possible approaches very clearly. You can take the brute force approach, and just calculate everything – but this is usually prohibitively expensive in compute. You can use an evolutionary algorithm, using mutation and crossover steps to find a way through parameter space that optimises the output. Bayesian optimisation is popular, and generative models can be useful for taking a few more random leaps. Whatever the details, there needs to be a balance between optimisation and exploration – between taking a good formulation and making it better, and searching widely across parameter space for a possibly unexpected set of conditions that provides a step-change in the properties one is looking for.
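
To make that trade-off concrete, here is a minimal Bayesian optimisation loop – a generic sketch (a Gaussian process with an upper-confidence-bound rule), with a made-up one-parameter “experiment” standing in for a real synthesis and measurement:

```python
# Minimal Bayesian optimisation sketch: pick the next experiment by balancing
# exploitation (high predicted property) against exploration (high uncertainty).
# The "experiment" below is an invented function, not real chemistry.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x):
    return -(x - 0.7)**2 + 0.05 * np.sin(20 * x)       # hidden property landscape

candidates = np.linspace(0, 1, 200)[:, None]           # possible formulations
X = np.array([[0.1], [0.5], [0.9]])                    # a few initial experiments
y = run_experiment(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 2.0 * std                             # the 2.0 sets the exploration/exploitation balance
    x_next = candidates[[np.argmax(ucb)]]              # most promising formulation to try next
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next).ravel())

print("best formulation found:", X[np.argmax(y)][0], "property:", y.max())
```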

It’s this combination of automated chemical synthesis and analysis, with algorithms for directing a search through parameter space, that some people call a “self-driving lab”. I think the progress we’re seeing now suggests that this isn’t an unrealistic aspiration. My somewhat tentative conclusions from all this:

  • We’re still a long way from an automated lab that can flexibly handle many different types of chemistry, so for a while it’s going to be a question of designing specific set-ups for particular synthetic problems (though of course there will be a lot of transferable learning).
  • There is still a lot of craft in designing algorithms to search parameter space effectively.
  • Theory still has its uses, both in accelerating the training of machine learning models, and in providing satisfactory explanations of their output.
  • It’s going to take significant effort, computing resource and money to develop these methods further, so it’s going to be important to select use cases where the value of an optimised molecule makes the investment worthwhile. Amongst the applications discussed in the meeting were drug excipients, membranes for gas separation, fuel cells and batteries, and optoelectronic polymers.
  • Finally, the physical world matters – there’s value in the existing scientific literature, but it’s not going to be enough just to process words and text; for artificial intelligence to fulfil its promise for accelerating materials discovery you need to make stuff and test its properties.

Implications of Rachel Reeves’s Mais Lecture for Science & Innovation Policy

There will be a general election in the UK this year, and it is not impossible (to say the least) that the Labour opposition will form the next government. What might such a government’s policies imply for science and innovation policy? There are some important clues in a recent, lengthy speech – the 2024 Mais Lecture – given by the Shadow Chancellor of the Exchequer, Rachel Reeves, in which she sets out her economic priors.

In the speech, Reeves sets out what she sees as the underlying problems of the UK economy – slow productivity growth leading to wage stagnation, low investment levels, poor skills (especially intermediate and technical) and “vast regional disparities, with all of England’s biggest cities outside London having productivity levels below the national average”. I think this analysis is now approaching being a consensus view – see, for example, this recent publication – The Productivity Agenda – from The Productivity Institute.

Interestingly, Reeves resists the temptation to blame everything on the current government, stressing that this situation reflects long-standing weaknesses that began in the early 1990’s, were not sufficiently challenged by the Labour governments of the late 90’s and 00’s, and were then made much worse in the 2010’s by Austerity, Brexit, and post-pandemic policy instability. Singling out Conservative Chancellor of the Exchequer Nigel Lawson as the author of policies that were both wrong in principle and badly executed, she identifies this period as the root of “an unprecedented surge in inequality between places and people which endures today. The decline or disappearance of whole industries, leaving enduring social and economic costs and hollowing out our industrial strength. And – crucially – diminishing returns for growth and productivity.”

To add to our problems, Reeves stresses that the external environment the UK now faces is much more challenging than in previous decades, with geopolitical instability reviving the basic question of national security, uncertainties from new technologies like AI, and the challenges of climate instability and the net zero energy transition. She is blunt in saying that “globalisation, as we once knew it, is dead” and that “a growth model reliant on geopolitical stability is a growth model resting on increasingly shallow foundations.”

What comes next? For Reeves, the new questions are “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”, and the answers are to be found in what the economist Dani Rodrik calls “productivism”.

In practice, this means an industrial strategy which, recognising the limits of central government’s information and capacity to act, works in partnership. This needs to have both a sector focus – building on the UK’s existing areas of comparative advantage and its strategic needs – and a regional focus, working with local and regional government to support the development of clusters and the realisation of agglomeration benefits.

In terms of the mechanics of the approach, Reeves anticipates that this central mission of government – restoring economic growth – will be driven from the Treasury, through a beefed-up “Enterprise and Growth” unit. To realise these ambitions, she identifies three areas of focus: recreating macroeconomic stability; investment, particularly in partnership with the private sector; and reform – of the planning system, housing, skills, the labour market and regional governance.

Innovation is a central part of Reeves’s vision for increased investment, partly through the familiar call for more capital to flow to university spin-outs. But there is also a call for more focus on the diffusion of new technologies across the whole economy, including what Reeves has long called the “everyday economy”. In my view, this is correct, but it will need new institutions, or the adaptation of existing ones (as I argued, with Eoin O’Sullivan, in “What’s missing in the UK’s R&D landscape – institutions to build innovation capacity”). There is a very sensible commitment to ten-year funding cycles for R&D institutions, not least because some confidence in the longevity of programmes is needed to give the private sector the confidence to co-invest.

This was quite a dense speech, and the commentary around it – including the pre-briefing from Labour – was particularly misleading. I think it would be a mistake to underestimate how much of a break it represents from the conventional economic wisdom of the past three decades, though the details of the policy programme remain to be filled in, and, as many have commented, its implementation in a very tough fiscal environment is going to be challenging. Our current R&D landscape isn’t ideally configured to support these aspirations and the UK’s current challenges (as I argue in my long piece “Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape”); I’d anticipate some reshaping to support the “missions” that are intended to give some structure to the Labour programme. And, as Reeves says unequivocally, of these missions, the goal of restoring productivity and economic growth is foundational.

Optical fibres and the paradox of innovation

Here is one of the foundational papers for the modern world – in effect, reporting the invention of optical fibres. Without optical fibres, there would be no internet, no on-demand video – and no globalisation, in the form we know it, with the highly dispersed supply chains made possible by the cheap and reliable transmission of information between nations and continents. The work won a Nobel Prize for Charles Kao, a Hong Kong Chinese scientist then working at STL in Essex, a now defunct corporate laboratory.

Optical fibres are made of glass – so, ultimately, they come from sand – as Ed Conway’s excellent recent book, “Material World”, explains. Making optical fibres a practical proposition needed a great deal of materials science, to produce glass pure enough to be transparent over huge distances. Much of this was done by Corning in the USA.

Who benefitted from optical fibres? The value of optical fibres to the world economy isn’t fully captured by their monetary value. As with all manufactured goods, productivity gains have driven their price down to almost negligible levels.

At the moment, the whole world is being wired with optical fibres, connecting people, offices and factories to superfast broadband. Yet the world trade in optical fibres is worth just $11 bn, less than 0.05% of total world trade. This is characteristic of that most misunderstood phenomenon in economics, Baumol’s so-called “cost disease”.

New inventions successively transform the economy, while innovation makes their price fall so far that, ultimately, in money terms they are barely detectable in GDP figures. Nonetheless, society benefits from innovations, taken for granted through ubiquity and low cost. (An earlier blog post of mine illustrates how Baumol’s “cost disease” works through a toy model.)
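
The logic can be captured in a very small two-sector model – my own sketch here, not necessarily the same as the toy model in that earlier post:

```python
# A tiny two-sector illustration of Baumol's "cost disease". One sector enjoys rapid
# productivity growth, the other doesn't; all the numbers are purely illustrative.
wage = 1.0
productivity_tech, productivity_services = 1.0, 1.0
quantity_tech, quantity_services = 1.0, 1.0            # physical output of each sector

for year in range(30):
    wage *= 1.03                                        # wages rise across the whole economy
    productivity_tech *= 1.10                           # fast productivity growth in the innovative sector
    quantity_tech *= 1.05                               # demand grows too, but more slowly than productivity

# unit costs (and hence prices) track wages divided by productivity
price_tech = wage / productivity_tech
price_services = wage / productivity_services

share = (price_tech * quantity_tech) / (price_tech * quantity_tech + price_services * quantity_services)
print(f"physical output of the innovative good: x{quantity_tech:.1f}")
print(f"its share of money GDP: {share:.0%}")           # large real impact, small and shrinking share of GDP
```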

To have continued economic growth, we need to have repeated cycles of invention & innovation like this. 30 years ago, corporate labs like STL were the driving force behind innovations like these. What happened to them?

Standard Telecommunication Laboratories in Harlow was the corporate lab of STC, Standard Telephones and Cables, a subsidiary of ITT with a long history of innovation in electronics, telephony, radio communications and TV broadcasting in the UK. After a brief period of independence from 1982, STC was bought by Nortel, the Canadian descendant of the North American Bell System. Nortel needed a massive restructuring after the late-90’s internet bubble, and went bankrupt in 2009. The STL labs were demolished, and the site is now a business park.

The demise of Standard Telecommunication Laboratories was just one example of the slow death of UK corporate laboratories through the 90’s and 00’s, driven by changing norms in corporate governance and growing short-termism. These changes were well described in the 2012 Kay review of UK Equity Markets and Long-Term Decision Making. This has led, in my opinion, to a huge weakening of the UK’s innovation capacity, whose economic effects are now becoming apparent.

Deep decarbonisation is still a huge challenge

In 2019 I wrote a blogpost called The challenge of deep decarbonisation, stressing the scale of the economic and technological change implied by a transition to net zero by 2050. I think the piece bears re-reading, but I wanted to update the numbers to see how much progress we have made in four years (that piece used the statistics for 2018; the most recent figures are for 2022). Of course, in the intervening four years we have had a pandemic and a global energy price spike.

The headline figure is that the fossil fuel share of our primary consumption has fallen, but not by much. In 2018, 79.8% of our energy came from oil, gas and coal. In 2022, this share was 77.8%.

There is good news – if we look solely at electrical power generation, generation from hydro, wind and solar was up 32% between 2018 and 2022, from 75 TWh to 99 TWh. Now 30.5% of our electricity production comes from renewables (excluding biomass, which I will come to later).

The less good news is that electrical power generation from nuclear is down 27%, from 65 TWh to 48 TWh, and this now represents just 14.7% of our electricity production. The increase in wind & solar is a real achievement – but it is largely offset by the decline in nuclear power production. This is the entirely predictable result of the AGR fleet reaching the end of its life, and the slow-motion debacle of the new nuclear build program.

The UK had 5.9 GW of nominal nuclear generation capacity in 2022. Of this, all but Sizewell B (1.2 GW) will close by 2030. In the early 2010’s, 17 GW of new nuclear capacity was planned – with the potential to produce more than 140 TWh per year. But, of these ambitious plans, the only project that is currently proceeding is Hinkley Point, late and over budget. The best we can hope for is that in 2030 we’ll have Hinkley’s 3.2 GW, which together with Sizewell B’s continuing operation could produce at best 38 TWh a year.
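
The arithmetic behind that “at best” figure is just the combined capacity running flat out for a full year:

```python
# Upper bound on nuclear output in 2030: Hinkley Point C (3.2 GW) plus Sizewell B (1.2 GW)
# running continuously. Real capacity factors would bring this down somewhat.
capacity_gw = 3.2 + 1.2
hours_per_year = 8760
max_output_twh = capacity_gw * hours_per_year / 1000
print(f"~{max_output_twh:.0f} TWh/year at a 100% capacity factor")   # ~39 TWh/year
```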

In 2022, another 36 TWh of electrical power – 11% – came from thermal renewables – largely burning imported wood chips. This supports a claim that more than half (56%) of our electricity is currently low carbon. It’s not clear, though, that imported biomass is truly sustainable or scalable.

It’s easy to focus on electrical power generation. But – and this can’t be stressed too much – most of the energy we use is in the form of directly burnt gas (to heat our homes) and oil (to propel our cars and lorries).

The total primary energy we used in 2022 was 2055 TWh, and of this 1600 TWh was oil, gas and coal. 280 TWh (mostly gas) was converted into electricity (producing 133 TWh of electricity), and 60 TWh’s worth of fossil fuel (mostly oil) was diverted into non-energy uses – mostly feedstocks for the petrochemical industry – leaving 1260 TWh to be directly burnt.
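
The bookkeeping behind those numbers, for anyone who wants to check it (figures rounded as above):

```python
# 2022 fossil fuel bookkeeping, in TWh (rounded figures from the text above)
total_fossil = 1600
into_power_stations = 280          # mostly gas burnt to generate electricity...
electricity_out = 133              # ...yielding this much electrical output
non_energy_uses = 60               # petrochemical feedstocks and similar

directly_burnt = total_fossil - into_power_stations - non_energy_uses
print(directly_burnt)                                              # 1260 TWh burnt directly
print(f"implied conversion efficiency: {electricity_out / into_power_stations:.0%}")   # ~48%
```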

To achieve our net-zero target, we need to stop burning gas and oil, and instead use electricity. This implies a considerable increase in the amount of electricity we generate – and this increase all needs to come from low-carbon sources. There is good news, though: we can convert electricity into useful work more efficiently than we can by burning fuels, whose conversion in heat engines is limited by the second law of thermodynamics. So the increase in electrical generation capacity can, in principle, be a lot less than this 1260 TWh per year.

Projecting energy demand into the future is uncertain. On the one hand, we can rely on continuing improvements in energy efficiency from incremental technological advances; on the other, new demands on electrical power are likely to emerge (the huge energy hunger of the data centres needed to implement artificial intelligence being one example). To illustrate the scale of the problem, let’s consider the orders of magnitude involved in converting the current major uses of directly burnt fossil fuels to electrical power.

In 2022, 554 TWh of oil were used, in the form of petrol and diesel, to propel our cars and lorries. We do use some electricity directly for transport – currently just 8.4 TWh. A little of this is for trains (and, of course, we should long ago have electrified all intercity and suburban lines), but the biggest growth is for battery electrical vehicles. Internal combustion engines are heat engines, whose efficiency is limited by Carnot, whereas electric motors can in principle convert all inputted electrical energy into useful work. Very roughly, to replace the energy demands of current cars and lorries with electric vehicles would need another 165 TWh/year of electrical power.
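
The rough arithmetic behind that 165 TWh figure – the efficiency numbers here are my own illustrative assumptions, not official statistics:

```python
# Rough estimate of the extra electricity needed to electrify road transport.
# Both efficiency figures are illustrative assumptions.
oil_for_road_transport_twh = 554
ice_efficiency = 0.25              # fraction of fuel energy an internal combustion engine turns into motion
ev_efficiency = 0.85               # assumed battery-to-wheels efficiency of an electric vehicle

useful_work_twh = oil_for_road_transport_twh * ice_efficiency
extra_electricity_twh = useful_work_twh / ev_efficiency
print(f"~{extra_electricity_twh:.0f} TWh/year")      # ~163 TWh/year, close to the ~165 TWh quoted above
```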

The other major application of directly burnt fossil fuels is for heating houses and offices. This used 334 TWh/year in 2022, mostly in the form of natural gas. It’s increasingly clear that the most effective way of decarbonising this sector is through the installation of heat pumps. A heat pump is essentially a refrigerator run backwards, cooling the outside air or ground, and heating up the interior. Here the second law of thermodynamics is on our side; one ends up with more heat out than energy put in, because rather than directly converting electricity into heat, one is using it to move heat from one place to another.

Using a reasonable guess for the attainable, seasonally adjusted “coefficient of performance” for heat pumps, one might be able to achieve the same heating effect as we currently get from gas boilers with another 100 TWh of low carbon electricity. This figure could be substantially reduced if we had a serious programme of insulating old houses and commercial buildings, and were serious about imposing modern energy efficiency standards for new ones.
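
Again, the rough arithmetic – the boiler efficiency and seasonal coefficient of performance here are my own illustrative guesses:

```python
# Rough estimate of the electricity needed to replace gas heating with heat pumps.
# Boiler efficiency and the seasonal COP are illustrative assumptions.
gas_for_heating_twh = 334
boiler_efficiency = 0.85           # useful heat delivered per unit of gas burnt today
seasonal_cop = 2.8                 # assumed seasonally averaged coefficient of performance

useful_heat_twh = gas_for_heating_twh * boiler_efficiency
electricity_needed_twh = useful_heat_twh / seasonal_cop
print(f"~{electricity_needed_twh:.0f} TWh/year")     # ~100 TWh/year, as quoted above
```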

So, as an order of magnitude, we probably need to roughly double our electricity generation from its current value of 320 TWh/year to more than 600 TWh/year. This will take big increases in generation from wind and solar, currently running at around 100 TWh/year. In addition to intermittent renewables, we need a significant fraction of firm power, which can always be relied on, whatever the state of wind and sunshine. Nuclear would be my favoured source for this, so that would need a big increase from the roughly 40 TWh/year we’ll have in place by 2030. The alternative would be to continue to generate electricity from gas, but to capture and store the carbon dioxide produced. For why I think this is less desirable for power generation (though possibly necessary for some industrial processes), see my earlier piece: Carbon Capture and Storage: technically possible, but politically and economically a bad idea.

Industrial uses of energy, which currently amount to 266 TWh, are a mix of gas, electricity and some oil. Some of these applications (e.g. making cement and fertiliser) are going to be rather hard to electrify, so, in addition to requiring carbon capture and storage, this may provide a demand for hydrogen, produced from renewable electricity, or conceivably process heat from high temperature nuclear reactors.

It’s also important to remember that a true reckoning of our national contribution to climate change would include taking account of the carbon dioxide produced in the goods and commodities we import, and our share of air travel. This is very significant, though hard to quantify – in my 2019 piece, I estimated that this could add as much as 60% to our personal carbon budget.

To conclude, we know what we have to do:

  • Electrify everything we can (heat pumps for houses, electric cars), and reduce demand where possible (especially by insulating houses and offices);
  • Use green hydrogen for energy intensive industry & hard to electrify sectors;
  • Hugely increase zero carbon electrical generation, through a mix of wind, solar and nuclear.

In each case, we’re going to need innovation, focused on reducing cost and increasing scale.

There’s a long way to go!

All figures are taken from the UK Government’s Digest of UK Energy Statistics, with some simplification and rounding.

The shifting sands of UK Government technology prioritisation

In the last decade, the UK has had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle. This 3500 word piece looks at this history of instability in UK innovation policy, and suggests some principles of consistency and clarity which might give us some more stability in the decade to come. A PDF version can be downloaded here.

Introduction

The problem of policy churn has been identified in a number of policy areas as a barrier to productivity growth in the UK, and science and innovation policy is no exception to this. The UK can’t do everything – it represents less than 3% of the world’s R&D resources, so it needs to specialise. But recent governments have not found it easy to decide where the UK should put its focus, and then stick to those decisions.

In 2012, the then Science Minister, David Willetts, launched an initiative which identified eight priority technologies – the “Eight Great Technologies”. Willetts reflected on the fate of this initiative in a very interesting paper published last year. In short, while there has been consensus on the need for the UK to focus (with the exception of one short period), the areas of focus have been subject to frequent change.

Substantial changes in direction for technology policy have occurred even though the Conservatives have led every government since 2010, with particular instability since 2015, during the period of Conservative majority government. Since 2012, the average life-span of an innovation policy has been about 2.5 years. Underneath the headline changes, it is true that there have been some continuities. But given the long time-scales needed to establish research programmes and to carry them through to their outcomes, this instability makes it difficult to implement any kind of coherent strategy.

Shifting Priorities: from “Eight Great Technologies”, through “Seven Technology Families”, to “Five Critical Technologies”

Table 1 summarises the various priority technologies identified in government policy since 2012, grouped in a way which best brings out the continuities.

The “Eight Great Technologies” were introduced in 2012 in a speech to the Royal Society by the then Chancellor of the Exchequer, George Osborne; a paper by David Willetts expanded on the rationale for the choices. The 2014 Science and Innovation Strategy endorsed the “Eight Great Technologies”, with the addition of quantum technology, which, following an extensive lobbying exercise, had been added to the list in the 2013 Autumn Statement.

2015 brought a majority Conservative government, but continuity in the offices of Prime Minister and Chancellor of the Exchequer didn’t translate into continuity in innovation policy. The new Secretary of State in the Department of Business, Innovation and Skills was Sajid Javid, who brought to the post a Thatcherite distrust of anything that smacked of industrial strategy. The main victim of this world-view was the innovation agency Innovate UK, which was subjected to significant cut-backs, causing lasting damage.

This interlude didn’t last very long – after the Brexit referendum, David Cameron’s resignation and the premiership of Theresa May, there was an increased appetite for intervention in the economy, coupled with a growing consciousness and acknowledgement of the UK’s productivity problem. Greg Clark (a former Science Minister) took over at a renamed and expanded Department of Business, Energy and Industrial Strategy.

A White Paper outlining a “modern industrial strategy” was published in 2017. Although it nodded to the “Eight Great Technologies”, the focus shifted to four “Grand Challenges”. Money had already been set aside in the 2016 Autumn Statement for an “Industrial Strategy Challenge Fund”, which would support R&D in support of the priorities that emerged from the Industrial Strategy.

2019 saw another change of Prime Minister – and another election, which brought another Conservative government, with a much greater majority, and a rather interventionist manifesto. That manifesto promised substantial increases in science funding, including a new agency modelled on the USA’s ARPA, and a commitment to “focus our efforts on areas where the UK can generate a commanding lead in the industries of the future – life sciences, clean energy, space, design, computing, robotics and artificial intelligence.”

But the “modern industrial strategy” didn’t survive long into the new administration. The new Secretary of State was Kwasi Kwarteng, from the wing of the party with an ideological aversion to industrial strategy. In 2021, the industrial strategy was superseded by a Treasury document, the Plan for Growth, which, while placing strong emphasis on the importance of innovation, took a much more sector and technology agnostic approach to its support. The Plan for Growth was supported by a new Innovation Strategy, published later in 2021. This did identify a new set of priority technologies – “Seven Technology Families”.

2022 was the year of three Prime Ministers. Liz Truss’s hard-line free market position was certainly unfriendly to the concept of industrial strategy, but in her 44 day tenure as Prime Minister there was not enough time to make any significant changes in direction to innovation policy.

Rishi Sunak’s Premiership brought another significant development, in the form of a machinery of government change reflecting the new Prime Minister’s enthusiasm for technology. A new department – the Department for Science, Innovation and Technology – meant that there was now a cabinet level Secretary of State focused on science. Another significant evolution in the profile of science and technology in government was the increasing prominence of national security as a driver of science policy.

This had begun in the 2021 Integrated Review, which was an attempt to set a single vision for the UK’s place in the world, covering security, defence, development and foreign policy. This elevated “Sustaining strategic advantage through science and technology” as one of four overarching principles. The disruptions to international supply chains during the covid pandemic, and the 2022 invasion of Ukraine by Russia and the subsequent large scale European land war, raised the issue of national security even higher up the political agenda.

A new department, and a modified set of priorities, produced a new 2023 strategy – the Science & Technology Framework – taking a systems approach to UK science & technology. This included a new set of technology priorities – the “Five critical technologies”.

Thus in a single decade, we’ve had four significantly different sets of technology priorities, and a short, but disruptive, period, where such prioritisation was opposed on principle.

Continuities and discontinuities

There are some continuities in substance in these technology priorities. Quantum technology appeared around 2013 as an addendum to the “Eight Great Technologies”, and survives into the current “Five Critical Technologies”. Issues of national security are a big driver here, as they are for much larger scale programmes in the USA and China.

In a couple of other areas, name changes conceal substantial continuity. What was called synthetic biology in 2012 is now encompassed in the field of engineering biology. Artificial Intelligence has come to high public prominence today, but it is a natural evolution of what used to be called big data, driven by technical advances in machine learning, more computer power, and bigger data sets.

Priorities in 2017 were defined as Grand Challenges, not Technologies. The language of challenges is taken up in the 2021 Innovation Strategy, which proposes a suite of Innovation Missions, distinct from the priority technology families, to address major societal challenges, in areas such as climate change, public health, and intractable diseases. The 2023 Science and Technology Framework, however, describes investments in three of the Five Critical Technologies, engineering biology, artificial intelligence, and quantum technologies, as “technology missions”, which seems to use the term in a somewhat different sense. There is room for more clarity about what is meant by a grand challenge, a mission, or a technology, which I will return to below.

Another distinction that is not always clear is between technologies and industry sectors. Both the Coalition and the May governments had industrial strategies that explicitly singled out particular sectors for support, including through support for innovation. These are listed in table 2. But it is arguable that at least two of the Eight Great Technologies – agritech, and space & satellites – would be better thought of as industry sectors rather than technologies.

Table 2 – industrial strategy sectors, as defined by the Coalition, and the May government.

The sector approach did underpin major applied public/private R&D programmes (such as the Aerospace Technology Institute, and the Advanced Propulsion Centre), and new R&D institutions, such as the Offshore Renewable Energy Catapult, designed to support specific industry sectors. Meanwhile, under the banner of Life Sciences, there is continued explicit support for the pharmaceutical and biotech industry, though here there is a lack of clarity about whether the primary goal is to promote the health of citizens through innovation support to the health and social care system, or to support pharma and biotech as high value, exporting, industrial sectors.

But two of the 2023 “five critical technologies” – semiconductors and future telecoms – are substantially new. Again, these look more like industrial sectors than technologies, and while no one can doubt their strategic importance in the global economy it isn’t obvious that the UK has a particularly strong comparative advantage in them, either in the size of the existing business base or the scale of the UK market (see my earlier discussion of the background to a UK Semiconductor Strategy).

The story of the last ten years, then, is a lack of consistency, not just in the priorities themselves, but in the conceptual basis for making the prioritisation – whether challenges or missions, industry sectors, or technologies.

From strategy to implementation

How does one move from strategy to implementation: given a set of priority sectors, what needs to happen to turn these into research programmes, and then to translate that research into commercial outcomes? An obvious point that nonetheless needs stressing is that this process has long lead times, and this isn’t compatible with innovation strategies that have an average lifetime of 2.5 years.

To quote the recent Willetts review of the business case process for scientific programmes: “One senior official estimated the time from an original idea, arising in Research Councils, to execution of a programme at over two and a half years with 13 specific approvals required.” It would obviously be desirable to cut some of the bureaucracy that causes such delays, but it is striking that the time taken to design and initiate a research programme is of the same order as the average lifetime of an innovation strategy.

One data point here is the fate of the Industrial Strategy Challenge Fund. This was announced in the 2016 Autumn Statement, anticipating the 2017 Industrial Strategy White Paper, and was set up to support translational research programmes in support of that Industrial Strategy. As we have seen, this strategy was de-emphasised in 2019, and formally scrapped in 2021. Yet the research programmes set up to support it are still going, with money still in the budget to be spent in FY 24/25.

Of course, much worthwhile research will be being done in these programmes, so the money isn’t wasted; the problem is that such orphan programmes may not have any follow-up, as new programmes on different topics are designed to support the latest strategy to emerge from central government.

Sometimes the timescales are such that there isn’t even a chance to operationalise one strategy before another one arrives. The major public funder of R&D, UKRI, produced a five year strategy in March 2022, which was underpinned by the seven technology families. To operationalise this strategy, UKRI’s constituent research councils produced a set of delivery plans. These were published in September 2022, giving them a run of six months before the arrival of the 2023 Science and Technology Framework, with its new set of critical technologies.

A natural response of funding agencies to this instability would be to decide themselves what best to do, and then do their best to retro-fit their ongoing programmes to new government strategies as they emerge. But this would defeat the point of making a strategy in the first place.

The next ten years

How can we do better over the next decade? We need to focus on consistency and clarity.

Consistency means having one strategy that we stick to. If we have this, investors can have confidence in the UK, research institutions can make informed decisions about their own investments, and individual researchers can plan their careers with more confidence.

Of course, the strategy should evolve, as unexpected developments in science and technology appear, and as the external environment changes. And it should build on what has gone before – for example, there is much of value in the systems approach of the 2023 Science and Technology Framework.

There should be clarity on the basis for prioritisation. I think it is important to be much clearer about what we mean by Grand Challenges, Missions, Industry Sectors, and Technologies, and how they differ from each other. With sharper definitions, we might find it easier to establish clear criteria for prioritisation.

For me, Grand Challenges establish the conditions we are operating under. Some grand challenges might include:

  • How to move our energy economy to a zero-carbon basis by 2050;
  • How to create an affordable and humane health and social care system for an ageing population;
  • How to restore productivity growth to the UK economy and reduce the UK’s regional disparities in economic performance;
  • How to keep the UK safe and secure in an increasingly unstable and hostile world.

One would hope that there was a wide consensus about the scale of these problems, though not everyone will agree, nor will it always be obvious, what the best way of tackling them is.

Some might refer to these overarching issues as missions, using the term popularised by Mariana Mazzucato, but I would prefer to reserve the term mission for something more specific, with a sense of timescale and a definite target. The 1960’s Moonshot programme is often taken as an exemplar, though I think the more significant mission from that period was to create the ability for the USA to land a half tonne payload anywhere on the earth’s surface, with an accuracy of a few hundred metres or better.

The crucial feature of a mission, then, is that it is a targeted programme to achieve a strategic goal of the state, one that requires both the integration and refinement of existing technologies and the development of new ones. Defining and prioritising missions requires working across the whole of government, to identify the problems that the state needs solved, and that are tractable enough, given reasonable technology foresight, to be worth attempting.

The key questions for judging missions, then, are: how much does the government want this to happen, how feasible is it given foreseeable technology, how well equipped is the UK to deliver it given its industrial and research capabilities, and how affordable is it?

For supporting an industry sector, though, the questions are different. Sector support is part of an active industrial strategy, and given the tendency of industry sectors to cluster in space, this has a strong regional dimension. The goals of industrial strategy are largely economic – to raise the economic productivity of a region or the nation – so the criteria for selecting sectors should be based on their importance to the economy in terms of the fraction of GVA that they supply, and their potential to improve productivity.

In the past, industrial strategy has often been driven by the need to create jobs, but our current problem is productivity, rather than unemployment, so I think the key criterion for selecting sectors should be their potential to create more value through the application of innovation and the development of skills in their workforces.

In addition to the economic dimension, there may also be a security aspect to the choice, if there is a reason to suppose that maintaining capability in a particular sector is vital to national security. The 2021 nationalisation of the steel forging company, Sheffield Forgemasters, to secure the capability to manufacture critical components for the Royal Navy’s submarine fleet, would have been unthinkable a decade ago.

Industrial strategy may involve support for innovation, for example through collaborative programmes of pre-competitive research. But it needs to be broader than just research and development; it may involve developing institutions and programmes for innovation diffusion, the harnessing of public procurement, the development of specialist skills provision, and at a regional level, the provision of infrastructure.

Finally, on what basis should we choose a technology to focus on? By a technology priority, we refer to an emerging capability, arising from new science, that could be adopted by existing industry sectors, or could create new, disruptive sectors. Here an understanding of the international research landscape, and of the UK’s place within it, is a crucial starting point. Even the newest technology depends, for its implementation, on existing industrial capability, so the shape of the existing UK industrial base does need to be taken into account. And one shouldn’t underplay the importance of the vision of talented and driven individuals.

This isn’t to say that priorities for the whole of the science and innovation landscape need to be defined in terms of challenges, missions, and industry sectors.
A general framework for skills, finance, regulation, international collaboration, and infrastructure – as set out in the recent Science & Technology Framework – needs to underlie more specific prioritisation. Maintaining the health of the basic disciplines is important to provide resilience in the face of the unanticipated, and it is important to be open to new developments and maintain agility in responding to them.

The starting point for a science and innovation strategy should be to realise that, very often, science and innovation shouldn’t be the starting point. Science policy is not the same as industrial strategy, even though it’s often used as a (much cheaper) substitute for it. For challenges and missions, defining the goals must come first; only then can one decide what advances in science and technology are needed to bring those goals within reach. Likewise, in a successful industrial strategy, close engagement with the existing capabilities of industry and the demands of the market is needed to define the areas of science and innovation that will support the development of a particular industry sector.

As I stressed in my earlier, comprehensive, survey of the UK Research and Development landscape, underlying any lasting strategy needs to be a settled, long-term view of what kind of country the UK aspires to be, what kind of economy it should have, and how it sees its place in the world.