Research and Innovation in a Labour government

Above all, growth. The new government knows that none of its ambitions will be achievable without a recovery from the last decade and a half’s economic stagnation. Everything will be judged by the contribution it can make to that goal, and research and innovation will be no exception.

The immediate shadow that lies over UK public sector research and innovation is the university funding crisis. The UK’s public R&D system is dependent on universities to an extent that’s unusual by international standards, and university research depends on a substantial cross-subsidy, largely from overseas student fees, which amounted to £4.9bn in 2020. The crisis in higher education (HE) is on Sue Gray’s list of unexploded bombs for the new government to deal with.

But it’s vital for HE to be perceived, not just as a problem to be fixed, but as central to the need to get the economy growing again. This is the first of the new Government’s missions, as described in the Manifesto: “Kickstart economic growth to secure the highest sustained growth in the G7 – with good jobs and productivity growth in every part of the country making everyone, not just a few, better off.”

To understand how the government intends to go about this, we need to go back to the Mais Lecture, given this March by the new Chancellor of the Exchequer. As I discussed in an earlier post, the questions Reeves poses in her Mais Lecture are the following: “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”.

Reeves calls her approach to answering these questions “securonomics”; this owes much to what the US economist Dani Rodrik calls “productivism”. At the centre of this will be an industrial strategy, with both a sector focus and a regional focus.

The sector focus is familiar, supporting areas of UK comparative advantage: “our approach will back what makes Britain great: our excellent research institutions, professional services, advanced manufacturing, and creative industries”.

The regional aspect aims to develop clusters, seeking to unlock the potential agglomeration benefits in our underperforming big cities, and connects to a wider agenda of further English regional devolution, building on the Mayoral Combined Authority model.

There is “a new statutory requirement for Local Growth Plans that cover towns and cities across the country. Local leaders will work with major employers, universities, colleges, and industry bodies to produce long-term plans that identify growth sectors and put in place the programmes and infrastructure they need to thrive. These will align with our national industrial strategy.”

Universities need to be at the heart of this. The pressure will be on them, not just to produce more spin-outs and work with industry, but also to support the diffusion of innovation across their regional economies. There are no promises of extra money for science – instead, as in other areas, the implicit suggestion seems to be that policy stability itself will yield better value:

“Labour will scrap short funding cycles for key R&D institutions in favour of ten-year budgets that allow meaningful partnerships with industry to keep the UK at the forefront of global innovation. We will work with universities to support spinouts; and work with industry to ensure start-ups have the access to finance they need to grow. We will also simplify the procurement process to support innovation and reduce micromanagement with a mission-driven approach.”

Beyond the government’s growth imperative, its priorities are defined by its other four missions: clean energy, tackling crime, widening opportunities for people, and rebuilding the healthcare system. Research and innovation, and the HE sector more widely, need to play a central role in at least three of these missions.

A commitment to cheap, zero carbon electricity by 2030 is a very stretching target, despite some advantages: “our long coast-line, high winds, shallow waters, universities, and skilled offshore workforce combined with our extensive technological and engineering capabilities.” Here the “strategy” part of industrial strategy is going to be vital – getting the balance right between the technologies that the UK will develop itself and those it will import from international partners. The call is to double onshore wind, triple solar, and quadruple offshore wind. There is a commitment to new nuclear build, including small modular reactors, and recognition of the importance of upgrading the grid and improving home insulation. R&D will need to be focused to support renewables, new nuclear and grid upgrades.

In health, commitments to address health inequalities imply higher priority on prevention, with high hopes placed on data and AI: “the revolution taking place in data and life sciences has the potential to transform our nation’s healthcare. The Covid-19 pandemic showed how a strong mission-driven industrial strategy, involving government partnering with industry and academia, could turn the tide on a pandemic. This is the approach we will take in government.” This statement gains more significance following the appointment of Sir Patrick Vallance as Science Minister, as I’ll discuss below.

There’s long been a tension between the high hopes that a succession of UK governments have placed on a strong life sciences sector, and a perception that the NHS is an organisation that’s not particularly innovative. So it’s unsurprising to read that “as part of Labour’s life sciences plan, we will develop an NHS innovation and adoption strategy in England. This will include a plan for procurement, giving a clearer route to get products into the NHS coupled with reformed incentive structures to drive innovation and faster regulatory approval for new technology and medicines.” I am sure this is correct in principle, and many such opportunities exist, but it will be difficult to take this forward until the immediate funding crisis faced by most parts of the NHS is overcome.

The new government’s fourth mission is to “break down barriers to opportunity”. A big part of this is to reform post-16 education (in England, one should add, as education is a devolved responsibility in Wales, Scotland and Northern Ireland). Universities will need to get used to there being more focus on the neglected FE sector, from which specialised “Technical Excellence Colleges” will be created, and should ready themselves for a more collaborative relationship with their neighbouring FE colleges: “to better integrate further and higher education, and ensure high-quality teaching, Labour’s post-16 skills strategy will set out the role for different providers, and how students can move between institutions, as well as strengthening regulation.”

There’s one important priority that wasn’t in the original list of five missions, but can’t now be ignored: the threatening geopolitical situation inevitably means a renewed focus on defence. The new government is explicit about the role of the defence industrial base in this:

“Strengthening Britain’s security requires a long-term partnership with our domestic defence industry. Labour will bring forward a defence industrial strategy aligning our security and economic priorities. We will ensure a strong defence sector and resilient supply chains, including steel, across the whole of the UK. We will establish long-term partnerships between business and government, promote innovation, and improve resilience.”

As the MoD budget grows, defence R&D will grow in importance. It’s perhaps not widely enough appreciated how much, following the end of the Cold War, the major focus of the UK’s research effort switched from defence to health and life sciences, so this will represent a partial turn-around of a decades-long trend.

How is the new government actually going to achieve these ambitious goals? Much stock is being placed on “mission led government”, in which Whitehall departments effortlessly collaborate to deliver goals which cross the boundaries between departments. On its first day, the new government made one unexpected announcement, which I think offers a clue as to how serious it is about this. That was the appointment of Sir Patrick Vallance as Science Minister.

Vallance, of course, has an outstanding background to be a Science Minister, as a highly successful researcher who then led R&D at one of the UK’s few world-class innovation-led multinationals, GlaxoSmithKline. But, in the context of the new government’s ambitions, I think his most significant achievement, as Government Chief Scientific Advisor during the Covid-19 pandemic, was to set up the Vaccine Task Force. If that’s going to be a model for how “mission led government” might work, we might see some exciting and rapid developments.

Research and innovation has a huge part to play in addressing the pressing challenges that face the new government, which necessarily cross Whitehall fiefdoms. The ambition in setting up the Department of Science, Innovation and Technology was to have a department coordinating science and innovation across the whole of government; it’s difficult to imagine anyone better qualified to realise this ambition than Vallance.

Quotations from the 2024 Labour Manifesto.

Revisiting the UK’s nuclear AGR programme: 1. On the uses of White Elephants

This is the first of a series of three blogposts exploring the history of the UK’s nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK’s home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government.

In this first post, I’ll explore the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects. A second post will dig into the thinking that led up to the AGR decision. This will include a discussion of the basic physics that underlies nuclear reactor design, but the decision also needs to be understood in its historical context – in particular, the way the deep relationship between the UK’s civil nuclear programme and the development of its indigenous nuclear weapons programme steered the trajectory of technology development. In a third post, I’ll consider how this historical legacy has influenced the UK’s stuttering efforts since 2008 to develop a new nuclear build programme, and try to draw some more general lessons.

There’s now a wide consensus that a big part of the UK’s productivity problem stems from its seeming inability to build big infrastructure. At a panel discussion about the UK’s infrastructure at the annual conference of the Bennett Institute, former Number 10 advisor Giles Wilkes estimated that the UK now has a £500 bn accumulated underinvestment in infrastructure, and identified HM Treasury as a key part of the system that has led to this. He concluded with three assertions:

1. “Anything we can do, we can afford”. A saying attributed to Keynes, to emphasise that money isn’t really the problem here – it is the physical capacity, skills base and capital stock needed to build things that provides the limit on getting things done.
2. Where are the White Elephants? Projects that were widely believed to be White Elephants when they were proposed – like the Channel Tunnel and Crossrail – have turned out to be vital. As Giles says, HM Treasury is very good at stopping things, so perhaps the problem is that HMT’s morbid fear of funding “White Elephants” is what is blocking us from getting useful, even essential, projects built.
3. The UK needs to show some humility. We should take time to understand how countries like Spain and Italy manage to build infrastructure so much more cheaply (often through more statist approaches).

Where does HM Treasury’s morbid fear of White Elephant infrastructure projects come from? I suspect a highly influential 1977 article by David Henderson – Two British Errors: Their Probable Size and Some Possible Lessons – lies at the root of this. The two errors in question were the Anglo-French Concorde programme, to build a supersonic passenger aircraft, and the Advanced Gas-cooled Reactor (AGR) programme of nuclear power stations.

It’s now conventional wisdom to point to Concorde and the AGR programme as emblems of UK state technological hubris and the failure of the industrial policy of the 1960s and 70s. The shadow of this failure is a major cultural blockage for any kind of industrial strategy.

Concorde was unquestionably a commercial failure, retired in 2003. But the AGR fleet is still running, producing about 60 TWh a year of non-intermittent, low carbon power; in 2019 their output was equal in scale to that of the entire installed wind power base. The AGR fleet is already well beyond the end of its design life; all the reactors will be retired by the end of the decade, likely before any nuclear new build comes on stream – we will miss them when they are gone.

The most expensive error by the UK state? The bar on that has been raised since 1977.

The AGR programme has been described as one of the most expensive errors made by the UK state, largely on the strength of Henderson’s article. Henderson was writing in 1977, so it’s worth taking another look at the programme more than forty years on. How big an error was it? The building of the AGR fleet was undoubtedly very badly managed, with substantial delays and cost overruns. Henderson’s upper estimate of the total net loss to be ascribed to the AGR programme was £2.1 billion.

What is striking now about this sum is how small it is, in the context of more recent errors. In 2021 money, this would correspond to a bit less than £14bn. A fairer comparison perhaps would be to express it as a fraction of GDP – in these terms it would amount to about £30bn. A relevant recent comparator to this is the net cost to the UK of energy price support following the gas price spike that the Ukraine invasion caused – this was £38.3bn (net of energy windfall taxes, some of which were paid by EDF in respect of the profits produced by the AGR fleet). Failing to secure the UK’s energy security was arguably a bigger error than the AGR programme.
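The two ways of scaling Henderson’s £2.1bn to today can be sketched as a quick calculation. The inflation multiplier and GDP figures below are rough, illustrative assumptions of mine, not official series; they are chosen only to show how the two conversions give such different answers:

```python
# Rough sketch of scaling Henderson's 1977 loss estimate to 2021 terms.
# All multipliers and GDP figures are approximate, illustrative assumptions.

loss_1977 = 2.1e9            # Henderson's upper estimate, £ (1977 prices)

# Method 1: scale by price inflation, 1977 -> 2021 (assumed multiplier ~6.5)
inflation_factor = 6.5
loss_2021_prices = loss_1977 * inflation_factor   # ~£13.7bn: "a bit less than £14bn"

# Method 2: hold the loss constant as a share of nominal GDP
gdp_1977 = 150e9             # assumed UK nominal GDP, 1977, £
gdp_2021 = 2270e9            # assumed UK nominal GDP, 2021, £
loss_gdp_share = loss_1977 / gdp_1977             # ~1.4% of GDP
loss_2021_gdp_terms = loss_gdp_share * gdp_2021   # ~£32bn: "about £30bn"
```

The GDP-share method gives the larger figure because the economy has grown in real terms as well as in prices since 1977.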

“No-one knows anything” – Henderson’s flawed counterfactual, and the actual way UK energy policy turned out

In making his 1977 estimate of the £2.1bn net loss to the UK from adopting the AGR programme, Henderson had to measure the programme against a counterfactual. At the time, the choices were, in effect, two-fold. The counterfactual Henderson used for his estimate of the excess cost of the AGR programme was of building out a series of light water reactors, importing US technology. Underneath this kind of estimate, then, is an implicit confidence about the limited number of paths down which the future will unfold. The actual future, however, does not tend to cooperate with this kind of assumption.

Just two years after Henderson’s paper, the global landscape for civil nuclear power changed dramatically. In 1979 a pressurised water reactor (a type of light water reactor) at Three Mile Island, in the USA, suffered a major loss of coolant accident. No-one was killed, but the unit was put permanently out of commission, and the clean-up costs have been estimated at about $1 billion. A much more serious accident happened in 1986, at Chernobyl, in Ukraine, then part of the Soviet Union. There was a loss of control in a reactor of a fundamentally different design to light water reactors, an RBMK, which led to an explosion and fire that dispersed a substantial fraction of the radioactive core into the atmosphere. This resulted in 28 immediate deaths and a cloud of radioactive contamination which extended across the Soviet Union into Eastern Europe and Scandinavia, with measurable effects in the UK. I’ll discuss in the next post the features of these reactor designs that leave them vulnerable to this kind of accident. These accidents led both to a significant loss of public trust in nuclear power, and to a worldwide slowdown in the building of new nuclear power plants.

Despite Three Mile Island, having given up on the AGR programme, the UK government decided in 1980 to build a 1.2 GW pressurised water reactor of US design at Sizewell, in Suffolk. This came on line in 1995, after a three year public inquiry and an eight year building period, and at a price of £2 billion in 1987 prices. Henderson’s calculation of the cost of his counterfactual, where instead of building AGRs the UK had built light water reactors, was based on an estimate for the cost of light water reactors of £132 per kW at 1973 prices, on which basis he would have expected Sizewell B to cost around £800m in 1987 prices. Nuclear cost and time overruns are not limited to AGRs!
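Henderson’s counterfactual figure for a Sizewell-B-sized plant can be reconstructed with simple arithmetic. The 1973-to-1987 inflation multiplier here is my own rough assumption for UK prices over that period, chosen to reproduce the “around £800m” figure:

```python
# Sketch of Henderson's counterfactual cost for a Sizewell-B-sized reactor.
capacity_kw = 1.2e6           # Sizewell B: 1.2 GW = 1,200,000 kW
cost_per_kw_1973 = 132.0      # Henderson's light water reactor estimate, £/kW (1973 prices)

cost_1973 = capacity_kw * cost_per_kw_1973       # ~£158m at 1973 prices

inflation_1973_to_1987 = 5.0  # assumed UK price multiplier, 1973 -> 1987
cost_1987 = cost_1973 * inflation_1973_to_1987   # ~£790m, i.e. "around £800m"

actual_cost_1987 = 2.0e9      # Sizewell B's actual price, 1987 prices
overrun = actual_cost_1987 / cost_1987           # roughly 2.5x the expectation
```

On these assumptions, the light water reactor actually built came in at about two and a half times the cost Henderson’s counterfactual would have predicted.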

Sizewell B was a first of a kind reactor, so one would expect subsequent reactors built to the same design to reduce in price, as supply chains were built up, skills were developed, and “learning by doing” effects took hold. But Sizewell B was also a last of a kind – no further reactors were built in the UK until Hinkley Point C, which is still under construction.

The alternative to any kind of civil nuclear programme would be to further expand fossil fuel power generation – especially coal. It’s worth stressing here that there is a fundamental difference between the economics of generating electricity through fossil fuels and nuclear. In the case of nuclear power, there are very high capital costs (which include provision for decommissioning at the end of life), but the ongoing cost of running the plants and supplying nuclear fuel is relatively small. In contrast, fossil fuel power plants have lower initial capital costs, but a much higher exposure to the cost of fuel.
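This difference in cost structure can be illustrated with a toy levelized-cost calculation. All the numbers below are invented purely for illustration (real figures vary enormously, and a proper calculation would discount future costs); the point is only to show how a fuel price spike barely moves a capital-heavy plant’s cost per MWh, but dominates a fuel-heavy plant’s:

```python
# Toy comparison of capital-heavy (nuclear) vs fuel-heavy (fossil) generation.
# All inputs are invented, purely illustrative numbers.

def simple_levelized_cost(capital, annual_output_mwh, lifetime_years,
                          fuel_and_running_per_mwh):
    """Crude levelized cost in £/MWh: capital spread evenly over lifetime
    output, plus per-MWh fuel and running costs. (No discounting.)"""
    capital_per_mwh = capital / (annual_output_mwh * lifetime_years)
    return capital_per_mwh + fuel_and_running_per_mwh

# Capital-heavy plant: most of the cost is fixed up front
nuclear = simple_levelized_cost(capital=8e9, annual_output_mwh=9e6,
                                lifetime_years=40, fuel_and_running_per_mwh=15)

# Fuel-heavy plant: cheaper to build, but exposed to the fuel price
gas_cheap_fuel = simple_levelized_cost(capital=1e9, annual_output_mwh=9e6,
                                       lifetime_years=30, fuel_and_running_per_mwh=40)
gas_dear_fuel = simple_levelized_cost(capital=1e9, annual_output_mwh=9e6,
                                      lifetime_years=30, fuel_and_running_per_mwh=120)

# Tripling the fuel cost adds £80/MWh to the gas plant's levelized cost,
# while the nuclear figure is untouched by fuel price movements.
```

This is why the relative attractiveness of nuclear swings so sharply with fossil fuel prices and interest rates, a theme that runs through the rest of this story.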

Henderson was writing at a time when the UK’s electricity supply was dominated by coal, which accounted for around three quarters of generation, with oil making a further significant contribution. The mid-seventies were a time of energy crisis, with seemingly inexorable rises in the cost of all fossil fuels. The biggest jump was in oil prices following the 1973 embargo, but the real price of coal was also on a seemingly inexorable rising trajectory. In these circumstances, the growth of nuclear power in some form seemed irresistible.

Economics is not all that matters for energy policy – politics often takes precedence. Margaret Thatcher came to power in 1979, determined to control the power of the unions – and in particular, the National Union of Mineworkers. After her re-election in 1983, the run-down of UK coal mining led to the bitter events of the 1984-85 miners’ strike. Despite the fact that coal-fired power plants still accounted for around 70% of generating capacity, the effects of the miners’ strike were mitigated by a conscious policy of stock-piling coal prior to the dispute, more generation from oil-fired power stations, and a significant ramp-up in output from nuclear power plants. Thatcher was enthusiastic about nuclear power – as Dieter Helm writes, “Nuclear power held a fascination for her: as a scientist, for its technical achievements; as an advocate for a strong defence policy; and, as an opponent of the miners, in the form of an insurance policy”. She anticipated a string of new pressurised water reactors to follow Sizewell B.

But Thatcher’s nuclear ambitions were in effect thwarted by her own Chancellor of the Exchequer, Nigel Lawson. Lawson’s enthusiasm for privatisation, and his conviction that energy was just another commodity, whose efficient supply was most effectively guaranteed by the private sector operating through market mechanisms, coincided with a period when fossil fuel prices were steadily falling. Going into the 1990s, the combination of newly abundant North Sea gas and efficient combined cycle gas turbines launched the so-called “dash for gas”; over that decade natural gas’s share of electricity generation capacity rose from 1.3% to nearly 30% by 2000. Low fossil fuel prices together with high interest rates made any new nuclear power generation look completely uneconomic.

Two new worries – the return of the energy security issue, and the growing salience of climate change

Two things changed this situation, leading policy makers to reconsider the case for nuclear power. Firstly, as was inevitable, the North Sea gas bonanza didn’t last for ever. UK gas production peaked in 2001, and by 2004 the UK was a net importer. Nonetheless, a worldwide gas market was opening up, due to a combination of the development of intercontinental pipelines (especially from Russia), and an expanding market in liquified natural gas carried by tanker from huge fields in, for example, the Middle East. So for a long time policy-makers were relaxed about this growing import dependency – the view was that “the world is awash with natural gas”. It was only the gas price spike, which began in 2021 and was intensified by Russia’s invasion of Ukraine, that made energy security an urgent issue again.

More immediately, there was a growing recognition of the importance of climate change. The UK ratified the Kyoto Protocol in 2002, committing itself to binding reductions in the production of greenhouse gases. The UK’s Chief Scientific Advisor at the time, Sir David King, was particularly vocal in raising the profile of climate change. The UK’s rapid transition from coal to gas was helpful in reducing overall emissions, but towards the end of the decade the role of nuclear energy was revisited, with a decision in principle to support nuclear new build in a 2008 White Paper.

We’re now 16 years on from that decision in principle to return to nuclear power, but the UK has still not completed a single new nuclear power reactor – a pair is under construction at Hinkley Point. I’ll return to the UK’s ill-starred nuclear new build programme and its future prospects in my third post. But, next, I want to go back to the original decision to choose advanced gas cooled reactors. This has recently been revisited and analysed by Thomas Kelsey in his paper When Missions Fail: Lessons in “High Technology” from post-war Britain. His key lesson is that the decision making process was led by state engineers and technical experts. In my next post, I’ll discuss how design choices are influenced both by the constraints imposed by the physics of nuclear reactions, and by the history that underpinned a particular technological trajectory. In the UK’s case, that history was dominated – to a degree that was probably not publicly apparent at the time – by the UK’s decision to develop an independent nuclear weapons programme, and the huge resources that were devoted to that enterprise.

All things begin & end on Albion’s Rocky Druid shore

I’m 63 now, so the idea that I should still be taking part in “adventure sports” is perhaps a little ridiculous. Nonetheless, rock climbing has been so much part of my life for so long that I still try and get out, generally for easy short climbs on the gritstone cliffs near my home in Derbyshire. There are things that I’ve done in my younger days that I have put behind me without much regret – I won’t be climbing frozen waterfalls in New England again, or winter climbing in the Lakes or Scotland. I do miss snowy mountains a bit, though I know I will never be a serious alpinist. But there’s one variety of climbing that I think is very special, that I look back on with real pleasure, and that I think maybe I should try to involve myself in once again, even if at a much lower level than before. That is rock climbing on Britain’s sea-cliffs, a branch of the pastime with its own unique atmosphere and set of demands.

I started rock climbing seriously when I was 14 or so; at that time it was my family’s habit to spend every summer in St Davids, Pembrokeshire, near where my mother had grown up. The coastline of Pembrokeshire is spectacular – a succession of coves, headlands, and cliffs, pounded by the open Atlantic waves. At the time, the idea of climbing the cliffs of Pembrokeshire was in its infancy. Rock climbing on the granite cliffs of Cornwall was well-established, and the counter-cultural climbing scene of North Wales had created hard and serious routes on the sea-cliffs of Gogarth, on Anglesey. But what little climbing there was on the cliffs of Pembrokeshire was recorded in a slim guidebook by Colin Mortlock, published in 1974, not by the Climbers Club or any of the establishment sources of climbing information, but by a local publishing house more associated with postcards and wildlife guides than rock climbing.

The first ever guidebook to climbing in Pembrokeshire, by Colin Mortlock. Just 150 pages long (the current guidebook runs to 5 volumes), it often failed in the basic function of telling one where the routes go (and, in one or two cases, even where the cliffs actually are), but was a source of great inspiration. The cover photograph is of Colin Mortlock himself climbing “Red Wall” at Porthclais.

My imagination was seized by the cover of this book, showing Mortlock himself powering up a sheer, apparently overhanging, wall above a boiling sea. The route was called “Red Wall”, and was graded “severe” – that was the kind of climbing I wanted to do. In 1977 I persuaded my school friend and climbing partner Mark Miller to come and stay with my family in Pembrokeshire so we could give this sea-cliff climbing business a try.

Mark and I were, by that time, reasonably confident climbers up to grades of severe, with some level of basic competence at rope work and protection, and in possession of the basic gear – ropes, harnesses, the nuts and slings that were state of the art at the time. We studied the guidebook and looked at the picture. It looked steep – but surely, if it were that overhanging, the holds must be good. We’d done routes like that on the gritstone cliffs of Derbyshire, we thought – tough routes for the grade, but within our grasp.

But we’d misjudged it. The cover picture turned out to be wildly tilted; it’s an off-vertical slab, maybe 70 degrees or so, blessed with perfect sharp, incut finger holds. We romped up it. Severe? It would barely be V. Diff in the Peak District! But it remains one of my favourite routes – I’ve probably done it twenty times since then. Few routes capture so completely the joy of sea-cliff climbing at its friendliest, with easy access to the base of the route, clear blue water sloshing gently below one’s feet, lichen and rock samphire on beautiful pink rock, footholds and handholds in all the right places.

Mark and I got better and more experienced at climbing. By the time we left school I was a confident leader of climbs VS in grade, tentatively trying things that were a bit harder. Mark had by force of will converted himself into an extreme leader, with a specialism in bold, protection-less slabs. In the summer before I went to University, in 1980, we persuaded a relatively new friend, Peter Carter, to come with us to Cornwall and Devon. Or, more accurately, we persuaded Peter to take us there – recently discharged from the Royal Marines, he had the unique asset of owning, and knowing how to drive, a small van.

Our trip started at the very tip of Cornwall – on the granite cliffs of West Penwith. We did some fine climbs on the traditional cliffs of solid granite, like Bosigran and Chair Ladder. But it was on the return trip that our sea-cliff horizons were truly expanded. A bleak headland near the north coast village of St Agnes is known to climbers as Carn Gowla, with three hundred foot cliffs falling vertically into the deep sea.

The route we chose was a HVS called Mercury. The first problem is getting to the base of the route – the only way was to abseil. We tied two 150ft 9 mm ropes together, anchored them to a good thread in the slope above the groove, and set off down. At the bottom, a ledge about twenty feet above the waves, there’s a huge sense of commitment – the easiest way out is the route Mercury, all 270 ft of it. In the end, the technical difficulties weren’t beyond us, though the exposure, commitment, and the dubious, vegetated rock were very far from the friendly crags of the Peak District.

Another highlight of that trip was my first encounter with the spectacular scenery on the stretch of coast north from Bude to Hartland. Known as the Culm Coast, it’s composed of thinly bedded sandstones and shales that have been dramatically folded, and then sliced abruptly by the sea. Not only is it the most dramatic coastal scenery in England, it also provides a variety of great climbs, ranging from short and solid sea-washed slabs to 400 foot climbs, almost of mountain scale, on rock whose solidity is not above suspicion. I’ve returned to it again and again.

There’s something uniquely memorable, I think, about sea cliff climbs, and even decades on I vividly remember the climbs and the people I did them with. On the Culm Coast there’s a 400 ft climb called Wrecker’s Slab. The first time I did it was with my college friend Jonathan Sharp, I think just a few months before he tragically died in the Alps. It wasn’t hard, but its scale and looseness gave it quite a reputation, well-deserved.

In Pembrokeshire, amongst the cliffs north of St Davids, Trwyn Llwyd is a fabulous buttress of solid gabbro. I did Barad with Sean Smith; its crux felt like a VS gritstone jamming crack – 200 feet directly above the sea. Craig Coetan is a much easier crag, above a little inlet which attracts curious seals. In my teenage years I explored these gentle slabs with my father.

Back on the Culm Coast, the hardest route I did was with my old and much missed friend, the late Mark Miller. Blackchurch is a crag with a sinister atmosphere that entirely lives up to its name; Archtempter is one of the classics of the main cliff – a soaring groove line now graded E3. Mark did the first pitch, thin and loose, and I led the widening crack above through an overhang. At the top, we so far forgot ourselves as to shake hands.

Blackchurch, North Devon. The obvious groove is the line of “Archtempter”; the (just visible) climbers are Mark Miller at the halfway stance, and above him the author, just about to enter the overhanging section. It’s not a great photo, but it does convey something of the demonic atmosphere of this crag.

Looking for new routes provides another, exploratory dimension to sea-cliff climbing; I had many memorable trips with Brian Davison, who believed that the purpose of guide books was to tell you where not to climb. In the Lleyn Peninsula, we did one of the earliest routes up Craig Dorys; we called it “Error of Judgement”. As the guidebook says: “It certainly was, an appallingly loose line”.

In North Pembrokeshire, Penbwchdy is a long headland with a run of big, vegetated cliffs. I’d been there with Jonathan Sharp but failed to get up anything – we’d scrambled down a grassy slope and done a 150 ft abseil to sea level, only to find that the way forward was to cross a deep but narrow inlet on the remains of a wrecked ship. Not relishing the idea of balancing across on an old propeller shaft, over which waves were breaking, we went back the way we came.

The great pioneer of sea-cliff climbing, Pat Littlejohn, had done a route at the far end of Penbwchdy, on a section of cliff he called New World Wall, accessed by a long low-tide sea level traverse after the shipwreck crossing that Jonathan and I had balked at. Done in 1974, I suspect Terranova, as the route was called, hadn’t had a lot of repeats, given the awkward approach. But Brian and I later found another way down to New World Wall, with some careful route finding and a final scramble. Brian led a new route up this, which he called “New Dawn Fades”, at E4, a good onsight lead up a steep groove.

The best new route I ever did was on the sandstone cliffs south of St Davids, a couple of miles east of Porthclais. A pamphlet describing new routes reported a new crag on the headland near Caerfai, with a HVS called “Amorican”, now a classic and often repeated route. I kicked myself – I’d walked past that crag innumerable times but never noticed its potential. But to the right of the crack of Amorican is a sweeping concave slab of sandstone, unclimbed in 1984. Climbing with Mary Rack, I found a circuitous line; a thin sloping crack demanded 20 ft of intricate and precise footwork, with only tiny holds for the hands. I called it “Uncertain Smile”.

Sea cliff climbing undoubtedly has more danger than the landward variety – loose rock, tidal conditions, big waves. One experience in Cornwall was the closest I have (knowingly) come to dying. My climbing partner was José Luis Bermudez; we were staying at the Climbers Club hut at Bosigran, where I remember being hubristically superior, as experienced climbers and successful young academics, to the party of university students we were sharing the hut with.

The next day we went to Fox Promontory, a slightly obscure granite headland on the south side of the West Penwith peninsula. We scrambled down above the March seas to a sloping platform, maybe 20 feet above the level of the sea. But freak waves do exist; I remember seeing a wall of water coming towards me, then a huge weight knocking me down and dragging me downwards across the rough granite. José had been on a higher level than me; I felt him grab me as I came to a stop a few feet above the sea. We hastened to climb out, me soaking wet and nearly hypothermic by the time we got to the top of the route, with the whole of the front of my body grazed and bloody, feeling like I had been dragged across a cheese-grater.

At some point in my 30s I realised I no longer had the bottle to do big, serious sea-cliff routes. One memorable day out with Brian Davison probably confirmed this; he had his eye on an unclimbed sea-stack close to Fishguard – Needle Rock. But to get to it we had to get to the bottom of a 200 foot cliff, also unclimbed. We abseiled as far as a 150 ft rope would take us. We had to descend the last 50 ft using the ropes we were going to climb with, so when we got to the gap between the cliff and the needle we had to pull them down after us. Now we had to get up the sea-stack and down again before the route back to the main cliff was cut off by the tide, and then find a new route on-sight to get back up the mainland cliff.

In the end it was fine – Brian led a good route up the sea-stack, which he named “Needless to say”. And there was a relatively straightforward route up the main cliff to be found, at about VS in grade. Brian is a superbly strong and resourceful climber; there is no-one I would trust more to get out of a sticky situation, and there really was nothing to worry about, but I could feel myself losing my cool and succumbing to anxiety and fear.

I think those routes were pretty much the last serious, extreme routes I’ve done on sea-cliffs. But sea-cliff climbing doesn’t always have to be like that. There is still joy to be had in gentle routes above quiet seas. And there is no better example of that than the route I started this piece with, Red Wall at Porthclais, still one of my favourite routes anywhere.

The gentler side of sea-cliff climbing. The author on his umpteenth ascent of Red Wall, Porthclais, near St David’s; this picture gives a much more accurate sense of the character of the route than the cover picture of the Mortlock guide!

How much can artificial intelligence and machine learning accelerate polymer science?

I’ve been at the annual High Polymer Research Group meeting at Pott Shrigley this week; this year it had the very timely theme “Polymers in the age of data”. Some great talks have really brought home to me both the promise of machine learning and laboratory automation in polymer science and some of the practical barriers. Given the general interest in accelerated materials discovery using artificial intelligence, it’s interesting to focus on this specific class of materials to get a sense of the promise – and the pitfalls – of these techniques.

Debra Audis, from the USA’s National Institute of Standards and Technology, started the meeting off with a great talk on how to use machine learning to make predictions of polymer properties given information about molecular structure. She described three difficulties for machine learning – availability of enough reliable data, the problem of extrapolation outside the parameter space of the training set, and the problem of explainability.

A striking feature of Debra’s talk for me was its exploration of the interaction between old-fashioned theory and new-fangled machine learning (ML). This goes in two directions – on the one hand, Debra demonstrated that incorporating knowledge from theory can greatly speed up the training of a ML model, as well as improving its ability to extrapolate beyond the training set. But given a trained ML model – essentially a black box of weights for a neural network – Debra emphasised the value of symbolic regression to convert the black box to a closed form expression of simple functional forms of the kind a theorist would hope to be able to derive from some physical principles, providing something a scientist might recognise as an explanation of the regularities that the machine learning model encapsulates.
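
The flavour of that idea can be caricatured in a few lines of Python: treat some black-box predictions as data, search a small library of candidate closed forms, and keep the one that fits best. Everything below is invented for illustration – the reptation-like scaling, the candidate library and the noise level are my assumptions, not anything from Debra’s talk.

```python
import math
import random

# A "black box" stands in for a trained ML model: synthetic melt-viscosity
# data following the reptation-like scaling eta ~ M^3.4, with a little noise.
# (All numbers here are invented for illustration.)
random.seed(0)
Ms = [10 ** (3 + 0.2 * i) for i in range(11)]          # molecular weights
data = [(M, 1e-13 * M**3.4 * random.uniform(0.97, 1.03)) for M in Ms]

# A small library of candidate closed forms, each linear in one coefficient
# a, so least squares has the closed-form solution a = S_fy / S_ff.
basis = {
    "a*M":     lambda M: M,
    "a*M^2":   lambda M: M**2,
    "a*M^3.4": lambda M: M**3.4,
    "a*log M": lambda M: math.log(M),
}

def fit(f):
    s_ff = sum(f(M) ** 2 for M, _ in data)
    s_fy = sum(f(M) * y for M, y in data)
    a = s_fy / s_ff
    # mean squared *relative* error, so very different scales compare fairly
    mse = sum(((a * f(M) - y) / y) ** 2 for M, y in data) / len(data)
    return a, mse

results = {name: fit(f) for name, f in basis.items()}
best = min(results, key=lambda name: results[name][1])
print(best)   # the M^3.4 form fits far better than the alternatives
```

Real symbolic regression searches a vastly larger space of expressions, but the pay-off is the same: a formula a theorist can argue about, rather than a wall of weights.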

But any machine learning model needs data – lots of data – so where does that data come from? One answer is to look at the records of experiments done in the past – the huge corpus of experimental data contained within the scientific literature. Jacqui Cole from Cambridge has developed software to extract numerical data and chemical reaction schemes, and to analyse images, from the scientific literature. For specific classes of (non-polymeric) materials she’s been able to create data sets with thousands of entries, using automated natural language processing to extract some of the contextual information that makes the data useful. Jacqui conceded that polymeric materials are particularly challenging for this approach; they have complex properties that are difficult to pin down to a single number, and what to the outsider may seem to be a single material (polyethylene for example) may actually be a category that encompasses molecules with a wide variety of subtle variations arising from different synthesis methods and reaction conditions. And Debra and Jacqui shared some sighs of exasperation at the horribly inconsistent naming conventions used by polymer science researchers.

My suspicion on this (informed a little by the outcomes of a large scale collaboration with a multinational materials company that I’ve been part of over the last five years) is that the limitations of existing data sets mean that the full potential of machine learning will only be unlocked by the production of new, large scale datasets designed specifically for the problem in hand. For most functional materials the parameter space to be explored is vast and multidimensional, so considerable thought needs to be given to how best to sample this parameter space to provide the training data that a good machine learning model needs. In some circumstances theory can help here – Kim Jelfs from Imperial described an approach where the outputs from very sophisticated, compute intensive theoretical models were used to train a ML model that could then interpolate properties at much lower compute cost. But we will always need to connect to the physical world and make some stuff.

This means we will need automated chemical synthesis – the ability to synthesise many different materials with systematic variation of the reactants and reaction conditions, and then rapidly determine the properties of this library of materials. How do you automate a synthetic chemistry lab? Currently, a synthesis laboratory consists of a human measuring out materials, setting up the right reaction conditions, then analysing and purifying the products, and finally determining their properties. There’s a fundamental choice here – you can automate the glassware, or automate the researcher. In the UK, Lee Cronin at Glasgow (not at the meeting) has been a pioneer of the former approach, while Andy Cooper at Liverpool has championed the latter. Andy’s approach involves using commercial industrial robots to carry out the tasks a human researcher would do, while using minimally adapted synthesis and analytical equipment. His argument in favour of this approach is essentially an economic one – the world market for general purpose industrial robots is huge, leading to substantial falls in price, while custom built automated chemistry labs represent a smaller market, so one should expect slower progress and higher prices.

Some aspects of automating the equipment are already commercially available. Automatic liquid handling systems are widely available, allowing one, for example, to pipette reactants into multiwell plates; if one’s synthesis isn’t sensitive to air, one can use this approach to do combinatorial chemistry. Adam Gormley from Rutgers described this approach for making a library of copolymers by an oxygen-tolerant adaptation of reversible addition−fragmentation chain-transfer polymerisation (RAFT), to produce libraries of copolymers with varying polymer molecular weight and composition. Another approach uses flow chemistry, in which reactions take place not in a fixed piece of glassware, but as the solvents containing the reactants travel down pipes, as described by Tanja Junkers from Monash, and Nick Warren from Leeds. This approach allows in-line reaction monitoring, so it’s possible to build in a feedback loop, adjusting the ingredients and reaction conditions on the fly in response to what is being produced.
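
The feedback idea is simple enough to sketch in a few lines. The reactor model, rate constant, gain and noise level below are all assumptions invented for this toy – a bare proportional controller, not a description of the Monash or Leeds systems: an in-line “measurement” of conversion nudges the residence time toward a target.

```python
import math
import random

# Toy flow reactor: first-order kinetics give conversion x = 1 - exp(-k*tau)
# for residence time tau. An in-line monitor (here, the model plus noise)
# feeds a proportional controller that trims tau toward a target conversion.
# All parameter values are assumptions made up for this sketch.
random.seed(1)
k = 0.05        # rate constant, 1/s
tau = 10.0      # residence time, s (initial guess)
target = 0.80   # desired monomer conversion
gain = 40.0     # controller gain, seconds per unit of conversion error

for step in range(25):
    measured = (1 - math.exp(-k * tau)) * random.uniform(0.99, 1.01)
    tau += gain * (target - measured)   # adjust the flow rate on the fly

print(round(1 - math.exp(-k * tau), 2))   # settles close to the 0.80 target
```

The attraction of doing this in flow is that the “actuator” is just a pump speed, so the loop can run continuously while product streams past the detector.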

It seems to me, as a non-chemist, that there is still a lot of specific work to be done to adapt the automation approach to any particular synthetic method, so we are still some way from a universal synthesis machine. Andy Cooper’s talk title perhaps alluded to this: “The mobile robotic polymer chemist: nice, but does it do RAFT?” This may be a chemist’s joke.

But whatever approach one uses to produce a library of molecules with different characteristics, and to analyse their properties, there remains the question of how to sample what is likely to be a huge parameter space in order to provide the most effective training set for machine learning. We were reminded by the odd heckle from a very distinguished industrial scientist in the audience that there is a very classical body of theory to underpin this kind of experimental strategy – the Design of Experiments methodology. In these approaches, one selects the optimum set of different parameters in order most effectively to span parameter space.
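
The simplest space-filling member of that family is easy to sketch. Below is a minimal Latin hypercube sample, written from scratch for illustration (a real study would reach for an established DoE package): each parameter’s range is divided into as many bins as there are runs, and every bin is used exactly once per parameter.

```python
import random

def latin_hypercube(n_samples, n_params, seed=42):
    """Space-filling design on the unit cube [0, 1)^n_params: each
    parameter's range is cut into n_samples equal bins, and each bin
    is used exactly once per parameter."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one stratified sample per bin, then shuffle the bin order
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    return list(zip(*columns))   # rows = experimental runs

# e.g. 8 synthesis runs over (temperature, monomer ratio, catalyst loading),
# each scaled to [0, 1) -- map onto the real ranges as needed
for run in latin_hypercube(8, 3):
    print([round(v, 2) for v in run])
```

Compared with a naive random sample, this guarantees that no slice of any parameter’s range goes untested – useful when each “sample” is an expensive synthesis.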

But an automated laboratory offers the possibility of adapting the sampling strategy in response to the results as one gets them. Kim Jelfs set out the possible approaches very clearly. You can take the brute force approach, and just calculate everything – but this is usually prohibitively expensive in compute. You can use an evolutionary algorithm, using mutation and crossover steps to find a way through parameter space that optimises the output. Bayesian optimisation is popular, and generative models can be useful for taking a few more random leaps. Whatever the details, there needs to be a balance between optimisation and exploration – between taking a good formulation and making it better, and searching widely across parameter space for a possibly unexpected set of conditions that provides a step-change in the properties one is looking for.
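
A minimal version of the evolutionary approach fits in a few lines. The “property measurement” below is a made-up function with a known optimum, standing in for a real synthesis-and-test cycle; the split between kept parents, mutated offspring and fresh random “explorers” is the optimisation/exploration balance in miniature.

```python
import random

rng = random.Random(0)

# A made-up "property" to maximise over a 4-parameter formulation in [0,1]^4;
# in a real self-driving lab this would be a synthesis run plus measurement.
OPTIMUM = [0.2, 0.8, 0.5, 0.9]
def measure(x):
    return -sum((xi - oi) ** 2 for xi, oi in zip(x, OPTIMUM))

def crossover(a, b):
    return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(x, scale=0.1):
    return [min(1.0, max(0.0, xi + rng.gauss(0, scale))) for xi in x]

population = [[rng.random() for _ in range(4)] for _ in range(20)]
for generation in range(60):
    population.sort(key=measure, reverse=True)
    parents = population[:10]                    # exploitation: keep the best
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(8)]
    explorers = [[rng.random() for _ in range(4)] for _ in range(2)]
    population = parents + children + explorers  # exploration: random leaps

best = max(population, key=measure)
print([round(v, 2) for v in best])   # should sit close to OPTIMUM
```

Shifting the 10/8/2 split is exactly the optimisation-versus-exploration dial: more explorers widens the search, more children refines the current best formulation.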

It’s this combination of automated chemical synthesis and analysis, with algorithms for directing a search through parameter space, that some people call a “self-driving lab”. I think the progress we’re seeing now suggests that this isn’t an unrealistic aspiration. My somewhat tentative conclusions from all this:

  • We’re still a long way from an automated lab that can flexibly handle many different types of chemistry, so for a while it’s going to be a question of designing specific set-ups for particular synthetic problems (though of course there will be a lot of transferable learning).
  • There is still a lot of craft in designing algorithms to search parameter space effectively.
  • Theory still has its uses, both in accelerating the training of machine learning models, and in providing satisfactory explanations of their output.
  • It’s going to take significant effort, computing resource and money to develop these methods further, so it’s going to be important to select use cases where the value of an optimised molecule makes the investment worthwhile. Amongst the applications discussed in the meeting were drug excipients, membranes for gas separation, fuel cells and batteries, optoelectronic polymers.
  • Finally, the physical world matters – there’s value in the existing scientific literature, but it’s not going to be enough just to process words and text; for artificial intelligence to fulfil its promise for accelerating materials discovery you need to make stuff and test its properties.

Implications of Rachel Reeves’s Mais Lecture for Science & Innovation Policy

There will be a general election in the UK this year, and it is not impossible (to say the least) that the Labour opposition will form the next government. What might such a government’s policies imply for science and innovation policy? There are some important clues in a recent, lengthy speech – the 2024 Mais Lecture – given by the Shadow Chancellor of the Exchequer, Rachel Reeves, in which she sets out her economic priors.

In the speech, Reeves sets out what she sees as the underlying problems of the UK economy – slow productivity growth leading to wage stagnation, low investment levels, poor skills (especially intermediate and technical) and “vast regional disparities, with all of England’s biggest cities outside London having productivity levels below the national average”. I think this analysis is now approaching being a consensus view – see, for example, this recent publication – The Productivity Agenda – from The Productivity Institute.

Interestingly, Reeves resists the temptation to blame everything on the current government, stressing that this situation reflects long-standing weaknesses: beginning in the 1980s, insufficiently challenged by the Labour governments of the late 90’s and 00’s, and then made much worse in the 2010’s by Austerity, Brexit, and post-pandemic policy instability. Singling out Conservative Chancellor of the Exchequer Nigel Lawson as the author of policies that were both wrong in principle and badly executed, she identifies this period as the root of “an unprecedented surge in inequality between places and people which endures today. The decline or disappearance of whole industries, leaving enduring social and economic costs and hollowing out our industrial strength. And – crucially – diminishing returns for growth and productivity.”

To add to our problems, Reeves stresses that the external environment the UK now faces is much more challenging than in previous decades, with geopolitical instability reviving the basic question of national security, uncertainties from new technologies like AI, and the challenges of climate instability and the net zero energy transition. She is blunt in saying “globalisation, as we once knew it, is dead”, and that “a growth model reliant on geopolitical stability is a growth model resting on increasingly shallow foundations.”

What comes next? For Reeves, the new questions are “how Britain can pay its way in the world; of our productive capacity; of how to drive innovation and diffusion throughout our economy; of the regional distribution of work and opportunity; of how to mobilise investment, develop skills and tackle inefficiencies to modernise a sclerotic economy; and of energy security”, and the answers are to be found in what the economist Dani Rodrik calls “productivism”.

In practice, this means an industrial strategy which, recognising the limits of central government’s information and capacity to act, works in partnership. This needs to have both a sector focus – building on the UK’s existing areas of comparative advantage and its strategic needs – and a regional focus, working with local and regional government to support the development of clusters and the realisation of agglomeration benefits.

In terms of the mechanics of the approach, Reeves anticipates that this central mission of government – restoring economic growth – will be driven from the Treasury, through a beefed-up “Enterprise and Growth” unit. To realise these ambitions, she identifies three areas of focus: recreating macroeconomic stability; investment, particularly in partnership with the private sector; and reform – of the planning system, housing, skills, the labour market and regional governance.

Innovation is a central part of Reeves’s vision for increased investment, partly through the familiar call for more capital to flow to university spin-outs. But there is also a call for more focus on the diffusion of new technologies across the whole economy, including what Reeves has long called the “everyday economy”. In my view, this is correct, but will need new institutions, or the adaptation of existing ones (as I argued, with Eoin O’Sullivan: “What’s missing in the UK’s R&D landscape – institutions to build innovation capacity”). There is a very sensible commitment to a ten year funding cycle for R&D institutions, essential not least because some confidence in the longevity of programmes is essential to give the private sector the confidence to co-invest.

This was quite a dense speech, and the commentary around it – including the pre-briefing from Labour – was particularly misleading. I think it would be a mistake to underestimate how much of a break it represents from the conventional economic wisdom of the past three decades, though the details of the policy programme remain to be filled in, and, as many have commented, its implementation in a very tough fiscal environment is going to be challenging. Our current R&D landscape isn’t ideally configured to support these aspirations and the UK’s current challenges (as I argue in my long piece “Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape”); I’d anticipate some reshaping to support the “missions” that are intended to give some structure to the Labour programme. And, as Reeves says unequivocally, of these missions, the goal of restoring productivity and economic growth is foundational.

Optical fibres and the paradox of innovation

Here is one of the foundational papers for the modern world – in effect, reporting the invention of optical fibres. Without optical fibres, there would be no internet, no on-demand video – and no globalisation, in the form we know it, with the highly dispersed supply chains that depend on the cheap and reliable transmission of information between nations and continents that optical fibres make possible. The work won a Nobel Prize for Charles Kao, a Hong Kong Chinese scientist then working at STL in Essex, a now defunct corporate laboratory.

Optical fibres are made of glass – so, ultimately, they come from sand – as Ed Conway’s excellent recent book, “Material World” explains. To make optical fibres a practical proposition needed lots of materials science to make glass pure enough to be transparent over huge distances. Much of this was done by Corning in the USA.

Who benefitted from optical fibres? The value of optical fibres to the world economy isn’t fully captured by their monetary value. As with all manufactured goods, productivity gains have driven their price down to almost negligible levels.

At the moment, the whole world is being wired with optical fibres, connecting people, offices, factories to superfast broadband. Yet the world trade in optical fibres is worth just $11 bn, less than 0.05% of total world trade. This is characteristic of that most misunderstood phenomenon in economics, Baumol’s so-called “cost disease”.

New inventions successively transform the economy, while innovation makes their price fall so far that, ultimately, in money terms they are barely detectable in GDP figures. Nonetheless, society benefits from these innovations, taken for granted through ubiquity & low cost. (An earlier blog post of mine illustrates how Baumol’s “cost disease” works through a toy model.)
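
That earlier toy model isn’t reproduced here, but a minimal two-sector sketch in the same spirit – with numbers I’ve simply made up – shows the mechanism. One sector (think optical fibre) enjoys steady productivity growth, the other none; with a common wage, and a fixed real demand for the progressive good once everyone is wired up, its price, and hence its share of nominal GDP, melts away even as output per worker soars.

```python
# Two-sector Baumol sketch (illustrative numbers only). Labour earns a common
# wage; the "progressive" sector's productivity grows 5%/year, so the price of
# its output (wage / productivity) falls. Real demand for that output is held
# fixed, so the sector needs ever fewer workers, and its nominal GDP share
# shrinks even though real output per worker is exploding.
wage = 1.0
real_demand = 0.5      # fixed real demand, in year-0 units of output
total_labour = 1.0     # total workforce; GDP here is just the wage bill

for year in (0, 10, 20, 30, 40, 50):
    productivity = 1.05 ** year
    labour_needed = real_demand / productivity   # fewer workers each decade
    nominal_value = wage * labour_needed         # = price x real demand
    share = nominal_value / (wage * total_labour)
    print(year, f"{100 * share:.1f}% of GDP")
```

Over fifty years the sector’s money value falls from half of GDP to a few per cent, while its output per worker rises more than tenfold – Baumol’s point in miniature.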

To have continued economic growth, we need to have repeated cycles of invention & innovation like this. 30 years ago, corporate labs like STL were the driving force behind innovations like these. What happened to them?

Standard Telecommunication Laboratories in Harlow was the corporate lab of STC, Standard Telephones and Cables, a subsidiary of ITT, with a long history of innovation in electronics, telephony, radio comms & TV broadcasting in the UK. After a brief period of independence from 1982, STC was bought by Nortel, the Canadian descendant of the North American Bell System. Nortel needed a massive restructuring after the late-90’s internet bubble, & went bankrupt in 2009. The STL labs were demolished & are now a business park.

The demise of Standard Telecommunication Laboratories was just one example of the slow death of UK corporate laboratories through the 90’s & 00’s, driven by changing norms in corporate governance and growing short-termism. These changes were well described in the 2012 Kay review of UK Equity Markets and Long-Term Decision Making. This has led, in my opinion, to a huge weakening of the UK’s innovation capacity, whose economic effects are now becoming apparent.

Deep decarbonisation is still a huge challenge

In 2019 I wrote a blogpost called The challenge of deep decarbonisation, stressing the scale of the economic and technological transition implied by a transition to net zero by 2050. I think the piece bears re-reading, but I wanted to update the numbers to see how much progress we have made in four years (the piece used the statistics for 2018; the most up-to-date figures are for 2022). Of course, in the intervening four years we have had a pandemic and a global energy price spike.

The headline figure is that the fossil fuel share of our primary energy consumption has fallen, but not by much. In 2018, 79.8% of our energy came from oil, gas and coal. In 2022, this share was 77.8%.

There is good news – if we look solely at electrical power generation, generation from hydro, wind and solar was up 32% between 2018 and 2022, from 75 TWh to 99 TWh. Now 30.5% of our electricity production comes from renewables (excluding biomass, which I will come to later).

The less good news is that electrical power generation from nuclear is down 27%, from 65 TWh to 48 TWh, and this now represents just 14.7% of our electricity production. The increase in wind & solar is a real achievement – but it is largely offset by the decline in nuclear power production. This is the entirely predictable result of the AGR fleet reaching the end of its life, and the slow-motion debacle of the new nuclear build program.

The UK had 5.9 GW of nominal nuclear generation capacity in 2022. Of this, all but Sizewell B (1.2 GW) will close by 2030. In the early 2010’s, 17 GW of new nuclear capacity was planned – with the potential to produce more than 140 TWh per year. But, of these ambitious plans, the only project that is currently proceeding is Hinkley Point, late and over budget. The best we can hope for is that in 2030 we’ll have Hinkley’s 3.2 GW, which together with Sizewell B’s continuing operation could produce at best 38 TWh a year.
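
The arithmetic behind these TWh figures is just nameplate capacity multiplied by the hours in a year – a ceiling, since real nuclear load factors sit somewhat below 100%:

```python
# Ceiling on annual output: nameplate GW x hours in a year, expressed in TWh.
# Real load factors shave a little off these upper bounds.
HOURS_PER_YEAR = 24 * 365

def max_annual_twh(gigawatts):
    return gigawatts * HOURS_PER_YEAR / 1000   # GWh -> TWh

print(round(max_annual_twh(3.2 + 1.2), 1))   # Hinkley C + Sizewell B: ~38.5
print(round(max_annual_twh(17)))             # the 17 GW once planned: ~149
```

Running 4.4 GW flat out gives about 38.5 TWh a year, which is where the “at best 38 TWh” comes from; the abandoned 17 GW programme had a ceiling of nearly 150 TWh.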

In 2022, another 36 TWh of electrical power – 11% – came from thermal renewables – largely burning imported wood chips. This supports a claim that more than half (56%) of our electricity is currently low carbon. It’s not clear, though, that imported biomass is truly sustainable or scalable.

It’s easy to focus on electrical power generation. But – and this can’t be stressed too much – most of the energy we use is in the form of directly burnt gas (to heat our homes) and oil (to propel our cars and lorries).

The total primary energy we used in 2022 was 2055 TWh; and of this 1600 TWh was oil, gas and coal. 280 TWh (mostly gas) was converted into electricity (to produce 133 TWh of electricity), and 60 TWh’s worth of fossil fuel (mostly oil) was diverted into non-energy uses – mostly feedstocks for the petrochemical industry – leaving 1260 TWh to be directly burnt.

To achieve our net-zero target, we need to stop burning gas and oil, and instead use electricity. This implies a considerable increase in the amount of electricity we generate – and this increase all needs to come from low-carbon sources. There is good news, though – thanks to the second law of thermodynamics, we can convert electricity more efficiently into useful work than we can by burning fuels. So the increase in electrical generation capacity in principle can be a lot less than this 1260 TWh per year.

Projecting energy demand into the future is uncertain. On the one hand, we can rely on continuing improvements in energy efficiency from incremental technological advances; on the other, new demands on electrical power are likely to emerge (the huge energy hunger of the data centres needed to implement artificial intelligence being one example). To illustrate the scale of the problem, let’s consider the orders of magnitude involved in converting the current major uses of directly burnt fossil fuels to electrical power.

In 2022, 554 TWh of oil were used, in the form of petrol and diesel, to propel our cars and lorries. We do use some electricity directly for transport – currently just 8.4 TWh. A little of this is for trains (and, of course, we should long ago have electrified all intercity and suburban lines), but the biggest growth is for battery electrical vehicles. Internal combustion engines are heat engines, whose efficiency is limited by Carnot, whereas electric motors can in principle convert all inputted electrical energy into useful work. Very roughly, to replace the energy demands of current cars and lorries with electric vehicles would need another 165 TWh/year of electrical power.
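
The arithmetic behind that rough figure, with the efficiency numbers as round assumptions rather than measured values:

```python
# Order-of-magnitude estimate (efficiencies are round assumptions, not data).
oil_for_transport_twh = 554   # petrol and diesel burnt in 2022
ice_efficiency = 0.3          # typical heat-engine efficiency, Carnot-limited
useful_work_twh = oil_for_transport_twh * ice_efficiency

ev_efficiency = 1.0           # idealised electric drivetrain
electricity_needed_twh = useful_work_twh / ev_efficiency
print(round(electricity_needed_twh))   # ~166 TWh/year, i.e. the ~165 quoted
```

Real electric vehicles lose something to charging and drivetrain inefficiencies, so the true requirement would be somewhat higher than this idealised figure.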

The other major application of directly burnt fossil fuels is for heating houses and offices. This used 334 TWh/year in 2022, mostly in the form of natural gas. It’s increasingly clear that the most effective way of decarbonising this sector is through the installation of heat pumps. A heat pump is essentially a refrigerator run backwards, cooling the outside air or ground, and heating up the interior. Here the second law of thermodynamics is on our side; one ends up with more heat out than energy put in, because rather than directly converting electricity into heat, one is using it to move heat from one place to another.

Using a reasonable guess for the attainable, seasonally adjusted “coefficient of performance” for heat pumps, one might be able to achieve the same heating effect as we currently get from gas boilers with another 100 TWh of low carbon electricity. This figure could be substantially reduced if we had a serious programme of insulating old houses and commercial buildings, and were serious about imposing modern energy efficiency standards for new ones.
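
Spelling out that estimate, with the boiler efficiency and seasonal coefficient of performance as round assumptions:

```python
# Order-of-magnitude estimate (both efficiency numbers are round assumptions).
gas_for_heating_twh = 334     # gas directly burnt for heating in 2022
boiler_efficiency = 0.9       # fraction of the gas energy delivered as heat
heat_delivered_twh = gas_for_heating_twh * boiler_efficiency   # ~300 TWh

seasonal_cop = 3.0            # heat out per unit of electricity in
electricity_needed_twh = heat_delivered_twh / seasonal_cop
print(round(electricity_needed_twh))   # ~100 TWh/year
```

A better-insulated building stock would cut the 334 TWh input directly, which is why efficiency measures multiply the value of every heat pump installed.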

So, as an order of magnitude, we probably need to roughly double our electricity generation from its current value of 320 TWh/year to more than 600 TWh/year. This will take big increases in generation from wind and solar, currently running at around 100 TWh/year. In addition to intermittent renewables, we need a significant fraction of firm power, which can always be relied on, whatever the state of wind and sunshine. Nuclear would be my favoured source for this, so that would need a big increase from the 40 TWh/year we’ll have in place by 2030. The alternative would be to continue to generate electricity from gas, but to capture and store the carbon dioxide produced. For why I think this is less desirable for power generation (though possibly necessary for some industrial processes), see my earlier piece: Carbon Capture and Storage: technically possible, but politically and economically a bad idea.

Industrial uses of energy, which currently amount to 266 TWh, are a mix of gas, electricity and some oil. Some of these applications (e.g. making cement and fertiliser) are going to be rather hard to electrify, so, in addition to requiring carbon capture and storage, this may provide a demand for hydrogen, produced from renewable electricity, or conceivably process heat from high temperature nuclear reactors.

It’s also important to remember that a true reckoning of our national contribution to climate change would include taking account of the carbon dioxide produced in the goods and commodities we import, and our share of air travel. This is very significant, though hard to quantify – in my 2019 piece, I estimated that this could add as much as 60% to our personal carbon budget.

To conclude, we know what we have to do:

  • Electrify everything we can (heat pumps for houses, electric cars), and reduce demand where possible (especially by insulating houses and offices);
  • Use green hydrogen for energy intensive industry & hard to electrify sectors;
  • Hugely increase zero carbon electrical generation, through a mix of wind, solar and nuclear.

In each case, we’re going to need innovation, focused on reducing cost and increasing scale.

There’s a long way to go!

All figures are taken from the UK Government’s Digest of UK Energy Statistics, with some simplification and rounding.

The shifting sands of UK Government technology prioritisation

In the last decade, the UK has had four significantly different sets of technology priorities, and a short but disruptive period when such prioritisation was opposed on principle. This 3500-word piece looks at this history of instability in UK innovation policy, and suggests some principles of consistency and clarity which might give us more stability in the decade to come. A PDF version can be downloaded here.


The problem of policy churn has been identified in a number of policy areas as a barrier to productivity growth in the UK, and science and innovation policy is no exception to this. The UK can’t do everything – it represents less than 3% of the world’s R&D resources, so it needs to specialise. But recent governments have not found it easy to decide where the UK should put its focus, and then stick to those decisions.

In 2012 the then Science Minister, David Willetts, launched an initiative which identified 8 priority technologies – the “Eight Great Technologies”. Willetts reflected on the fate of this initiative in a very interesting paper published last year. In short, while there has been consensus on the need for the UK to focus (with the exception of one short period), the areas of focus have been subject to frequent change.

Substantial changes in direction for technology policy have occurred despite the fact that we’ve had a single political party in power since 2010, with particular instability since 2015, in the period of Conservative majority government. Since 2012, the average life-span of an innovation policy has been about 2.5 years. Underneath the headline changes, it is true that there have been some continuities. But given the long time-scales needed to establish research programmes and to carry them through to their outcomes, this instability makes it difficult to implement any kind of coherent strategy.

Shifting Priorities: from “Eight Great Technologies”, through “Seven Technology Families”, to “Five Critical Technologies”

Table 1 summarises the various priority technologies identified in government policy since 2012, grouped in a way which best brings out the continuities.

The “Eight Great Technologies” were introduced in 2012 in a speech to the Royal Society by the then Chancellor of the Exchequer, George Osborne; a paper by David Willetts expanded on the rationale for the choices. The 2014 Science and Innovation Strategy endorsed the “Eight Great Technologies”, with the addition of quantum technology, which, following an extensive lobbying exercise, had been added to the list in the 2013 Autumn Statement.

2015 brought a majority Conservative government, but continuity in the offices of Prime Minister and Chancellor of the Exchequer didn’t translate into continuity in innovation policy. The new Secretary of State at the Department for Business, Innovation and Skills was Sajid Javid, who brought to the post a Thatcherite distrust of anything that smacked of industrial strategy. The main victim of this world-view was the innovation agency Innovate UK, which was subjected to significant cut-backs, causing lasting damage.

This interlude didn’t last very long – after the Brexit referendum, David Cameron’s resignation, and the start of Theresa May’s premiership, there was an increased appetite for intervention in the economy, coupled with a growing acknowledgement of the UK’s productivity problem. Greg Clark (a former Science Minister) took over at a renamed and expanded Department for Business, Energy and Industrial Strategy.

A White Paper outlining a “modern industrial strategy” was published in 2017. Although it nodded to the “Eight Great Technologies”, the focus shifted to four “Grand Challenges”. Money had already been set aside in the 2016 Autumn Statement for an “Industrial Strategy Challenge Fund” to support R&D aligned with the priorities that emerged from the Industrial Strategy.

2019 saw another change of Prime Minister – and another election, which brought another Conservative government, with a much greater majority, and a rather interventionist manifesto that promised substantial increases in science funding, including a new agency modelled on the USA’s ARPA, and a promise to “focus our efforts on areas where the UK can generate a commanding lead in the industries of the future – life sciences, clean energy, space, design, computing, robotics and artificial intelligence.”

But the “modern industrial strategy” didn’t survive long into the new administration. The new Secretary of State was Kwasi Kwarteng, from the wing of the party with an ideological aversion to industrial strategy. In 2021, the industrial strategy was superseded by a Treasury document, the Plan for Growth, which, while placing strong emphasis on the importance of innovation, took a much more sector- and technology-agnostic approach to its support. The Plan for Growth was supported by a new Innovation Strategy, published later in 2021. This did identify a new set of priority technologies – the “Seven Technology Families”.

2022 was the year of three Prime Ministers. Liz Truss’s hard-line free market position was certainly unfriendly to the concept of industrial strategy, but in her 44 day tenure as Prime Minister there was not enough time to make any significant changes in direction to innovation policy.

Rishi Sunak’s premiership brought another significant development, in the form of a machinery of government change reflecting the new Prime Minister’s enthusiasm for technology. A new department – the Department for Science, Innovation and Technology – meant that there was now a cabinet-level Secretary of State focused on science. Another significant evolution in the profile of science and technology in government was the increasing prominence of national security as a driver of science policy.

This had begun in the 2021 Integrated Review, which was an attempt to set a single vision for the UK’s place in the world, covering security, defence, development and foreign policy. This elevated “Sustaining strategic advantage through science and technology” to one of four overarching principles. The disruptions to international supply chains during the Covid pandemic, and the 2022 invasion of Ukraine by Russia with the subsequent large-scale European land war, raised the issue of national security even higher up the political agenda.

A new department, and a modified set of priorities, produced a new 2023 strategy – the Science & Technology Framework, taking a systems approach to UK science & technology. This included a new set of technology priorities – the “Five Critical Technologies”.

Thus in a single decade, we’ve had four significantly different sets of technology priorities, and a short but disruptive period in which such prioritisation was opposed on principle.

Continuities and discontinuities

There are some continuities in substance in these technology priorities. Quantum technology appeared around 2013 as an addendum to the “Eight Great Technologies”, and survives into the current “Five Critical Technologies”. Issues of national security are a big driver here, as they are for much larger scale programmes in the USA and China.

In a couple of other areas, name changes conceal substantial continuity. What was called synthetic biology in 2012 is now encompassed in the field of engineering biology. Artificial Intelligence has come to high public prominence today, but it is a natural evolution of what used to be called big data, driven by technical advances in machine learning, more computer power, and bigger data sets.

Priorities in 2017 were defined as Grand Challenges, not Technologies. The language of challenges is taken up in the 2021 Innovation Strategy, which proposes a suite of Innovation Missions, distinct from the priority technology families, to address major societal challenges, in areas such as climate change, public health, and intractable diseases. The 2023 Science and Technology Framework, however, describes investments in three of the Five Critical Technologies, engineering biology, artificial intelligence, and quantum technologies, as “technology missions”, which seems to use the term in a somewhat different sense. There is room for more clarity about what is meant by a grand challenge, a mission, or a technology, which I will return to below.

Another distinction that is not always clear is between technologies and industry sectors. Both the Coalition and the May governments had industrial strategies that explicitly singled out particular sectors for support, including through support for innovation. These are listed in Table 2. But it is arguable that at least two of the Eight Great Technologies – agritech, and space & satellites – would be better thought of as industry sectors rather than technologies.

Table 2 – industrial strategy sectors, as defined by the Coalition, and the May government.

The sector approach did underpin major applied public/private R&D programmes (such as the Aerospace Technology Institute, and the Advanced Propulsion Centre), and new R&D institutions, such as the Offshore Renewable Energy Catapult, designed to support specific industry sectors. Meanwhile, under the banner of Life Sciences, there is continued explicit support for the pharmaceutical and biotech industries, though here there is a lack of clarity about whether the primary goal is to promote the health of citizens through innovation support to the health and social care system, or to support pharma and biotech as high-value, exporting industrial sectors.

But two of the 2023 “five critical technologies” – semiconductors and future telecoms – are substantially new. Again, these look more like industrial sectors than technologies, and while no one can doubt their strategic importance in the global economy it isn’t obvious that the UK has a particularly strong comparative advantage in them, either in the size of the existing business base or the scale of the UK market (see my earlier discussion of the background to a UK Semiconductor Strategy).

The story of the last ten years, then, is a lack of consistency, not just in the priorities themselves, but in the conceptual basis for making the prioritisation – whether challenges or missions, industry sectors, or technologies.

From strategy to implementation

How does one turn from strategy to implementation: given a set of priority sectors, what needs to happen to turn these into research programmes, and then translate that research into commercial outcomes? An obvious point that nonetheless needs stressing, is that this process has long lead times, and this isn’t compatible with innovation strategies that have an average lifetime of 2.5 years.

To quote the recent Willetts review of the business case process for scientific programmes: “One senior official estimated the time from an original idea, arising in Research Councils, to execution of a programme at over two and a half years with 13 specific approvals required.” It would obviously be desirable to cut some of the bureaucracy that causes such delays, but it is striking that the time taken to design and initiate a research programme is of the same order as the average lifetime of an innovation strategy.

One data point here is the fate of the Industrial Strategy Challenge Fund. This was announced in the 2016 Autumn Statement, anticipating the 2017 Industrial Strategy White Paper, and was set up to fund translational research programmes supporting that strategy. As we have seen, the strategy was de-emphasised in 2019, and formally scrapped in 2021. Yet the research programmes created to support it are still going, with money still in the budget to be spent in FY 24/25.

Of course, much worthwhile research is no doubt being done in these programmes, so the money isn’t wasted; the problem is that such orphan programmes may not have any follow-up, as new programmes on different topics are designed to support the latest strategy to emerge from central government.

Sometimes the timescales are such that there isn’t even a chance to operationalise one strategy before another one arrives. The major public funder of R&D, UKRI, produced a five-year strategy in March 2022, which was underpinned by the Seven Technology Families. To operationalise this strategy, UKRI’s constituent research councils produced a set of delivery plans. These were published in September 2022, giving them a run of just six months before the arrival of the 2023 Science & Technology Framework, with its new set of critical technologies.

A natural response of funding agencies to this instability would be to decide themselves what best to do, and then do their best to retro-fit their ongoing programmes to new government strategies as they emerge. But this would defeat the point of making a strategy in the first place.

The next ten years

How can we do better over the next decade? We need to focus on consistency and clarity.

Consistency means having one strategy that we stick to. If we have this, investors can have confidence in the UK, research institutions can make informed decisions about their own investments, and individual researchers can plan their careers with more confidence.

Of course, the strategy should evolve, as unexpected developments in science and technology appear, and as the external environment changes. And it should build on what has gone before – for example, there is much of value in the systems approach of the 2023 Science & Technology Framework.

There should be clarity on the basis for prioritisation. I think it is important to be much clearer about what we mean by Grand Challenges, Missions, Industry Sectors, and Technologies, and how they differ from each other. With sharper definitions, we might find it easier to establish clear criteria for prioritisation.

For me, Grand Challenges establish the conditions we are operating under. Some grand challenges might include:

  • How to move our energy economy to a zero-carbon basis by 2050;
  • How to create an affordable and humane health and social care system for an ageing population;
  • How to restore productivity growth to the UK economy and reduce the UK’s regional disparities in economic performance;
  • How to keep the UK safe and secure in an increasingly unstable and hostile world.

One would hope that there is wide consensus about the scale of these problems, though not everyone will agree – nor will it always be obvious – what the best way of tackling them is.

Some might refer to these overarching issues as missions, using the term popularised by Mariana Mazzucato, but I would prefer to reserve “mission” for something more specific, with a sense of timescale and a definite target. The 1960s Moonshot programme is often taken as an exemplar, though I think the more significant mission from that period was to create the ability for the USA to land a half-tonne payload anywhere on the earth’s surface, with an accuracy of a few hundred metres or better.

The crucial feature of a mission, then, is that it is a targeted programme to achieve a strategic goal of the state, requiring both the integration and refinement of existing technologies and the development of new ones. Defining and prioritising missions requires working across the whole of government, to identify the problems that the state needs solved, and that are tractable enough, given reasonable technology foresight, to be worth attempting.

The key questions for judging missions, then, are: how much does the government want this to happen, how feasible is it given foreseeable technology, how well equipped is the UK to deliver it given its industrial and research capabilities, and how affordable is it?

For supporting an industry sector, though, the questions are different. Sector support is part of an active industrial strategy, and given the tendency of industry sectors to cluster geographically, this has a strong regional dimension. The goals of industrial strategy are largely economic – to raise the economic productivity of a region or the nation – so the criteria for selecting sectors should be based on their importance to the economy, in terms of the fraction of GVA that they supply, and their potential to improve productivity.

In the past, industrial strategy has often been driven by the need to create jobs, but our current problem is productivity rather than unemployment, so I think the key criterion for selecting sectors should be their potential to create more value through the application of innovation and the development of skills in their workforces.

In addition to the economic dimension, there may also be a security aspect to the choice, if there is a reason to suppose that maintaining capability in a particular sector is vital to national security. The 2021 nationalisation of the steel forging company, Sheffield Forgemasters, to secure the capability to manufacture critical components for the Royal Navy’s submarine fleet, would have been unthinkable a decade ago.

Industrial strategy may involve support for innovation, for example through collaborative programmes of pre-competitive research. But it needs to be broader than just research and development; it may involve developing institutions and programmes for innovation diffusion, the harnessing of public procurement, the development of specialist skills provision, and at a regional level, the provision of infrastructure.

Finally, on what basis should we choose a technology to focus on? By a technology priority, I mean an emerging capability, arising from new science, that could be adopted by existing industry sectors, or could create new, disruptive sectors. Here an understanding of the international research landscape, and the UK’s place in it, is a crucial starting point. Even the newest technology depends, for its implementation, on existing industrial capability, so the shape of the existing UK industrial base does need to be taken into account. Finally, one shouldn’t underplay the importance of the vision of talented and driven individuals.

This isn’t to say that priorities for the whole of the science and innovation landscape need to be defined in terms of challenges, missions, and industry sectors. A general framework for skills, finance, regulation, international collaboration, and infrastructure – as set out in the recent Science & Technology Framework – needs to underlie more specific prioritisation. Maintaining the health of the basic disciplines is important to provide resilience in the face of the unanticipated, and it is important to be open to new developments and to maintain agility in responding to them.

The starting point for a science and innovation strategy should be to realise that, very often, science and innovation shouldn’t be the starting point. Science policy is not the same as industrial strategy, even though it’s often used as a (much cheaper) substitute for it. For challenges and missions, defining the goals must come first; only then can one decide what advances in science and technology are needed to bring those goals within reach. Likewise, in a successful industrial strategy, close engagement with the existing capabilities of industry and the demands of the market is needed to define the areas of science and innovation that will support the development of a particular industry sector.

As I stressed in my earlier, comprehensive, survey of the UK Research and Development landscape, underlying any lasting strategy needs to be a settled, long-term view of what kind of country the UK aspires to be, what kind of economy it should have, and how it sees its place in the world.

Science and Innovation in the 2023 Autumn Statement

On the 22nd November, the Government published its Autumn Statement. This piece, published in Research Professional under the title Economic clouds cast gloom over the UK’s ambitions for R&D, offers my somewhat gloomy perspective on the implications of the statement for science and innovation.

This government has always placed a strong rhetorical emphasis on the centrality of science and innovation in its plans for the nation, though with three different Prime Ministers, there’ve been some changes in emphasis.

This continues in the Autumn Statement: a whole section is devoted to “Supporting the UK’s scientists and innovators”, building on the March 2023 publication of a “UK Science and Technology Framework”, which recommitted to increasing total public spending on research to £20 billion in FY 2024/25. But before going into detail on the new science-related announcements in the Autumn Statement, let’s step back to look at the wider economic context in which innovation strategy is being made.

There are two giant clouds in the economic backdrop to the Autumn Statement. One is inflation; the other is economic growth – or, to be more precise, the lack of it.

Inflation, in some senses, is good for governments. It allows them to raise taxes without the need for embarrassing announcements, as people’s cost-of-living wage rises take them into higher tax brackets. And by simply failing to raise budgets in line with inflation, public spending cuts can be imposed by default. But if inflation is good for governments, it’s bad for politicians, because people notice rising prices, and they don’t like them. And the real effects of stealth public spending cuts do, nonetheless, materialise.

The effect of the inflation we’ve seen since 2021 is a rise in price levels of around 20%; while the inflation rate peak has surely passed, prices will continue to rise. We can already see the effect on the science budget. Back in 2021, the Comprehensive Spending Review announced a significant increase in the overall government research budget, from £15 billion to £20 billion in 24/25. By next year, though, the effect of inflation will have been to erode that increase in real terms, from £5 billion to less than £2 billion in 2021 money. The effect on Core Research is even more dramatic; in effect inflation will have almost totally wiped out the increase promised in 2021.
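The arithmetic here can be checked directly. A minimal sketch, using the round figures quoted in this piece; the roughly 20% cumulative rise in the price level since 2021 is the assumption doing all the work:

```python
# Deflating the cash science budget back into 2021 money.
# Figures are the round numbers quoted above; the 20% cumulative
# price rise since 2021 is an assumption for illustration.
nominal_2021 = 15.0   # £bn, total government research budget, 2021
nominal_2425 = 20.0   # £bn, promised for FY 2024/25
price_rise = 0.20     # cumulative inflation since 2021 (assumed)

real_2425 = nominal_2425 / (1 + price_rise)     # 24/25 budget in 2021 money
nominal_increase = nominal_2425 - nominal_2021  # the £5 bn headline rise
real_increase = real_2425 - nominal_2021        # what inflation leaves of it

print(f"24/25 budget in 2021 money: £{real_2425:.1f} bn")
print(f"Headline rise: £{nominal_increase:.0f} bn; real rise: £{real_increase:.1f} bn")
```

On these assumptions, the £5 billion cash increase shrinks to under £2 billion in 2021 money, consistent with the figure quoted above.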

Our other problem is persistent slow economic growth, as I discussed here. The underlying cause of this is the dramatic decrease in productivity growth since the financial crisis of 2008. The consequence is the prospect of two full decades without any real growth in wages, and, for the government, the need to simultaneously increase the tax burden and squeeze public services in an attempt to stabilise public debt.

The detailed causes of the productivity slowdown are much debated, but the root of it seems to be the UK’s persistent lack of investment, both public and private (see The Productivity Agenda for a broad discussion). Relatively low levels of R&D are part of this. The most significant policy change in the Autumn Statement does recognise this – it is a tax break allowing companies to set the full cost of new plant and machinery against corporation tax. On the government side, though, the plans are essentially for overall flat capital spending – i.e., taking into account inflation, a real terms cut. Government R&D spending falls in this overall envelope, so is likely to be under pressure.

Instead, the government is putting their hopes on the private sector stepping up to fill the gap, with a continuing emphasis on measures such as R&D tax credits to incentivise private sector R&D, and reforms to the pension system – including the “Long-term Investment for Technology and Science (LIFTS)” initiative – to bring more private money into the research system. The ambition for the UK to be a “Science Superpower” remains, but the government would prefer not to have to pay for it.

One significant set of announcements – on the “Advanced Manufacturing Plan” – marks the next phase in the Conservatives’ off-again, on-again relationship with industrial strategy. Commitments to support advanced manufacturing sectors such as aerospace, automobiles and pharmaceuticals, as well as the “Made Smarter” programme for innovation diffusion, are very welcome. The sums themselves perhaps shouldn’t be taken too seriously; the current government can’t bind its successor, whatever its colour, and in any case this money will have to be found within the overall spending envelope produced by the next Comprehensive Spending Review. But it is very welcome that, after the split-up of the Department for Business, Energy and Industrial Strategy, the successor Department for Business and Trade still maintains an interest in research and innovation in support of mainstream business sectors, rather than assuming that is all now to be left to its sister Department for Science, Innovation and Technology.

For all the efforts to create a tax-cutting headline, the economic backdrop for this Autumn statement is truly grim. There is no rosy scenario for the research community to benefit from; the question we face instead is how to fulfil the promises we have been making that R&D can indeed lead to productivity growth and economic benefit.

Productivity and artificial intelligence

To scientists, machine learning is a relatively old technology. The last decade has seen considerable progress, both as a result of new techniques – backpropagation and deep learning, and the transformer architecture – and a massive investment of private sector resources, especially computing power. The result has been the striking and hugely publicised success of large language models.

But this rapid progress poses a paradox – for all the technical advances over the last decade, the impact on productivity growth has been undetectable. The productivity stagnation that has been such a feature of the last decade and a half continues, with all the deleterious effects that produces in flat-lining living standards and challenged public finances. The situation is reminiscent of the economist Robert Solow’s famous 1987 comment: “You can see the computer age everywhere but in the productivity statistics.”

There are two possible resolutions of this new Solow paradox – one optimistic, one pessimistic. The pessimist’s view is that, in terms of innovation, the low-hanging fruit has already been taken. In this perspective – most famously stated by Robert Gordon – today’s innovations are actually less economically significant than innovations of previous eras. Compared to electricity, Fordist manufacturing systems, mass personal mobility, antibiotics, and telecoms, to give just a few examples, even artificial intelligence is only of second order significance.

To add further to the pessimism, there is a growing sense that the process of innovation itself is suffering from diminishing returns – in the words of a famous recent paper: “Are ideas getting harder to find?”.

The optimistic view, by contrast, is that the productivity gains will come, but they will take time. History tells us that economies need time to adapt to new general purpose technologies – infrastructures and business models need to be adapted, and the skills to use them need to spread through the working population. This was the experience with the introduction of electricity to industrial processes. Factories had been configured around the need to transmit mechanical power from central steam engines, through elaborate systems of belts and pulleys, to the individual machines; it took time to introduce systems where each machine had its own electric motor, and the period of adaptation might even have involved a temporary reduction in productivity. Hence one might expect a new technology to follow a J-shaped productivity curve.

Whether one is an optimist or a pessimist, there are a number of common research questions that the rise of artificial intelligence raises:

  • Are we measuring productivity right? How do we measure value in a world of fast moving technologies?
  • How do firms of different sizes adapt to new technologies like AI?
  • How important – and how rate-limiting – is the development of new business models in reaping the benefits of AI?
  • How do we drive productivity improvements in the public sector?
  • What will be the role of AI in health and social care?
  • How do national economies make system-wide transitions? When economies need to make simultaneous transitions – for example net zero and digitalisation – how do they interact?
  • What institutions are needed to support the faster and wider diffusion of new technologies like AI, & the development of the skills needed to implement them?
  • Given the UK’s economic imbalances, how can regional innovation systems be developed to increase absorptive capacity for new technologies like AI?

A finer-grained analysis of the origins of our productivity slowdown actually deepens the new Solow paradox. Internationally, the slowdown has been most marked in the most tech-intensive sectors. In the UK, the most careful decomposition similarly finds that it’s the sectors normally thought of as most tech-intensive that have contributed most to the slowdown – transport equipment (i.e., automobiles and aerospace), pharmaceuticals, computer software and telecoms.

It’s worth looking in more detail at the case of pharmaceuticals to see how the promise of AI might play out. The decline in productivity of the pharmaceutical industry follows several decades in which, globally, the productivity of R&D – expressed as the number of new drugs brought to market per $billion of R&D – has been falling exponentially.
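The shape of that decline – sometimes called “Eroom’s law”, after the observation that drugs approved per billion dollars of R&D have roughly halved every nine years – can be sketched as a simple exponential. The nine-year halving time below is an illustrative assumption, not a value fitted to real approval data:

```python
# Toy model of exponentially declining pharma R&D productivity
# ("Eroom's law"). The nine-year halving time is an assumed,
# illustrative figure, not a fit to real approval data.
HALVING_YEARS = 9.0

def rd_productivity(years_elapsed, initial=1.0):
    """New drugs per $bn of R&D after `years_elapsed`, relative to `initial`."""
    return initial * 0.5 ** (years_elapsed / HALVING_YEARS)

# Over six decades the model implies a roughly hundredfold decline:
decline = rd_productivity(0.0) / rd_productivity(60.0)
print(f"Decline over 60 years: ~{decline:.0f}x")
```

The point of the exercise is only that a steady halving time compounds brutally: on this assumption, a dollar of R&D today buys around a hundredth of the output it bought sixty years ago.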

There’s no clearer signal of the promise of AI in the life sciences than the effective solution of one of the most important fundamental problems in biology – the protein folding problem – by DeepMind’s program AlphaFold. Many proteins fold into a unique three-dimensional structure, whose precise details determine the protein’s function – for example, in catalysing chemical reactions. This three-dimensional structure is determined by the (one-dimensional) sequence of different amino acids along the protein chain. Given the sequence, can one predict the structure? The problem had resisted theoretical solution for decades, but AlphaFold, using deep learning to establish the correlations between sequence and many experimentally determined structures, can now predict unknown structures from sequence data with great accuracy and reliability.

Given this success on an important problem from biology, it’s natural to ask whether AI can be used to speed up the process of developing new drugs – and not surprising that this has prompted a rush of money from venture capitalists. One of the highest-profile start-ups in the UK pursuing this is BenevolentAI, floated on the Amsterdam Euronext market in 2021 at a €1.5 billion valuation.

Earlier this year, it was reported that BenevolentAI was laying off 180 staff after one of its drug candidates failed in phase 2 clinical trials. Its share price has plunged, and its market cap now stands at €90 million. I’ve no reason to think that BenevolentAI is anything but a well run company employing many excellent scientists, and I hope it recovers from these setbacks. But what lessons can be learnt from this disappointment? Given that AlphaFold was so successful, why has it been harder than expected to use AI to boost R&D productivity in the pharma industry?

Two factors made the success of AlphaFold possible. Firstly, the problem it was trying to solve was very well defined – given a certain linear sequence of amino acids, what is the three dimensional structure of the folded protein? Secondly, it had a huge corpus of well-curated public domain data to work on, in the form of experimentally determined protein structures, generated through decades of work in academia using x-ray diffraction and other techniques.

What’s been the problem in pharma? AI has been valuable in generating new drug candidates – for example, by identifying molecules that will fit into particular parts of a target protein molecule. But, according to pharma analyst Jack Scannell [1], it isn’t identifying candidate molecules that is the rate limiting step in drug development. Instead, the problem is the lack of screening techniques and disease models that have good predictive power.

The lesson here, then, is that AI is very good at solving the problems that it is well adapted for – well-posed problems, where there exist big and well-curated datasets that span the problem space. Its contribution to overall productivity growth, though, will depend on whether those AI-susceptible parts of the overall problem are in fact the rate-limiting steps.

So how is the situation changed by the massive impact of large language models? This new technology – “generative pre-trained transformers” – consists of text prediction models that establish statistical relationships between words through a massively multi-parameter regression over a very large corpus of text [3]. This has, in effect, automated the production of plausible, though derivative and not wholly reliable, prose.
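To make “text prediction model” concrete, here is a deliberately crude caricature – a bigram model over a tiny, made-up corpus, which simply picks the most frequent continuation of a word. A transformer learns enormously richer statistical structure across billions of parameters, but the underlying task – predict the next token from the preceding text – is the same:

```python
# A bigram next-word predictor: the crudest possible "language model".
# The corpus is invented purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, how often each other word follows it.
next_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_counts[w1][w2] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once
```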

Naturally, sectors for which this is the stock-in-trade feel threatened by this development. What’s absolutely clear is that this technology has essentially solved the problem of machine translation; it also raises some fascinating fundamental issues about the deep structure of language.

What areas of economic life will be most affected by large language models? It’s already clear that these tools can significantly speed up the writing of computer code. Any sector in which it is necessary to generate boiler-plate prose – marketing, routine legal services, management consultancy – is likely to be affected. Similarly, the assimilation of large documents will be assisted by the capability of LLMs to provide synopses of complex texts.

What does the future hold? There is a very interesting discussion to be had, at the intersection of technology, biology and eschatology, about the prospects for “artificial general intelligence”, but I’m not going to take that on here, so I will focus on the near term.

We can expect further improvements in large language models. There will undoubtedly be gains in efficiency as techniques are refined and the fundamental understanding of how the models work improves. We’ll see more specialised training sets that might improve the (currently somewhat shaky) reliability of the outputs.

There is one issue that might prove limiting. The rapid improvement we’ve seen in the performance of large language models has been driven by exponential increases in the computing resources used to train the models, with empirical scaling laws emerging to allow extrapolations. The cost of training these models is now measured in hundreds of millions of dollars – with the associated energy consumption starting to be a significant contribution to global carbon emissions. So it’s important to understand the extent to which the cost of computing resources will be a limiting factor on the further development of this technology.
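The extrapolations mentioned above rest on the observation that, empirically, model loss falls roughly as a power law in training compute; a power law is a straight line on log–log axes, so a simple linear fit lets you project the returns on the next order of magnitude of spending. The sketch below shows the mechanics with invented numbers – the compute and loss values are synthetic, not measurements from any real model.

```python
import numpy as np

# Synthetic (compute, loss) points, purely for illustration.
compute = np.array([1e18, 1e19, 1e20, 1e21])   # training FLOPs
loss    = np.array([3.2, 2.8, 2.45, 2.15])     # evaluation loss

# Power law loss = a * compute**b is linear in log-log space:
#   log(loss) = log(a) + b * log(compute)
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

def predicted_loss(c):
    """Extrapolate the fitted power law to compute budget c."""
    return a * c ** b

# Project one order of magnitude beyond the data.
print(predicted_loss(1e22))
```

The exponent `b` comes out small and negative: each tenfold increase in compute buys only a modest reduction in loss, which is exactly why the cost side of the extrapolation matters so much.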

As I’ve discussed before, the exponential increases in computer power given to us by Moore’s law, and the corresponding decreases in cost, began to slow in the mid-2000s. A recent comprehensive study of the cost of computing by Diane Coyle and Lucy Hampton puts this in context [2]. This is summarised in the figure below:

The cost of computing with time. The solid lines represent best fits to a very extensive data set collected by Diane Coyle and Lucy Hampton; the figure is taken from their paper [2]; the annotations are my own.

The highly specialised integrated circuits used in huge numbers to train LLMs – such as the H100 graphics processing units designed by NVIDIA and manufactured by TSMC, the mainstay of the AI industry – are in a regime where performance improvements come less from the increasing transistor densities that gave us the golden age of Moore’s law, and more from incremental improvements in task-specific architecture design, together with simply multiplying the number of units.

For more than two millennia, human cultures in both east and west have used capabilities in language as a signal for wider abilities. So it’s not surprising that large language models have seized the imagination. But it’s important not to mistake the map for the territory.

Language and text are hugely important for how we organise and collaborate to collectively achieve common goals, and for the way we preserve, transmit and build on the sum of human knowledge and culture. So we shouldn’t underestimate the power of tools which facilitate that. But equally, many of the constraints we face require direct engagement with the physical world – whether through the need to develop the better understanding of biology that will allow us to develop new medicines more effectively, or the ability to generate abundant zero carbon energy. This is where those other areas of machine learning – pattern recognition, finding relationships within large data sets – may have a bigger contribution.

Fluency with the written word is an important skill in itself, so the improvements in productivity that will come from the new technology of large language models will arise in places where speed in generating and assimilating prose is the rate-limiting step in the process of producing economic value. For machine learning and artificial intelligence more widely, the rate at which productivity growth will be boosted will depend, not just on developments in the technology itself, but on the rate at which other technologies and other business processes are adapted to take advantage of AI.

I don’t think we can expect large language models, or AI in general, to be a magic bullet that instantly solves our productivity malaise. AI is a powerful new technology, but as with all new technologies, we have to find the places in our economic system where it can add the most value, and the system itself will take time to adapt to take advantage of the possibilities it offers.

These notes are based on an informal talk I gave on behalf of the Productivity Institute. It benefitted a lot from discussions with Bart van Ark. The opinions, though, are entirely my own and I wouldn’t necessarily expect him to agree with me.

[1] J.W. Scannell, Eroom’s Law and the decline in the productivity of biopharmaceutical R&D, in Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research.

[2] Diane Coyle & Lucy Hampton, Twenty-first century progress in computing.

[3] For a semi-technical account of how large language models work, I found this piece by Stephen Wolfram very helpful: What is ChatGPT doing … and why does it work?