An index of issues in UK science and innovation policy – part 1: the strategic context

We’re in the mid-term of a government that’s placed a lot of emphasis on science and innovation for the future of the country. There’s been a lot of rhetorical ambition and some snappy slogans (“science superpower”, “innovation nation”). There’s also been a lot of change in the way the nation’s science system is wired up, and much of that change is yet to work through. In this series of four posts, I’m going to try to give an overview of where this process of change has got to and what is yet to evolve. I’ll be covering a lot of ground, so the posts will form more of an index of issues than a comprehensive discussion; each item I’ll mention will undoubtedly deserve a longer piece of its own.

Since any strategy should begin with a clear view of what one is trying to achieve, the first part of the series, below, will think about the wider challenges the UK government faces, asking what problems we need our science and innovation system to contribute to solving.

The second part will discuss some big questions about how the UK’s science and innovation system works.

  • How research intensive should the UK economy be?
  • Do we have the right balance between basic research and more applied development?
  • Is the overall productivity of the research enterprise – in the UK and globally – growing, or falling?
  • What’s the relationship of geography to innovation?
  • How can national and regional economies be best equipped to take advantage of advances in science and technology made elsewhere?

The third instalment will get into the details of the UK’s R&D system as it is currently evolving, discussing what is changing, and what is yet to be resolved.

  • In government support of R&D, what’s the right balance between direct commissioning and tax incentives?
  • How well are the government’s agencies for supporting R&D (e.g. UK Research and Innovation, Innovate UK, the new agency ARIA, EU funding (or whatever replaces it), government departments) working, and how should they develop?
  • Is the UK’s landscape of institutions where R&D is carried out optimal?
  • Is the connection between policies for skills development and policies for innovation working well enough?
  • How can a national government influence business R&D largely carried out by multinational companies?
  • What should we make of the government’s focus on “missions” and “technologies” to organise innovation policy?
  • How should the government use science and innovation policy to reduce the UK’s regional productivity disparities?
  • Can the new National Science & Technology Council effectively develop national strategy for science and innovation, and convert that into implementable policy across the whole of government?

Finally, I’ll sum up by asking what it might mean for the UK to be a “science superpower”. Given the UK’s current position in the world as a medium-sized economy, accounting for about 2-3% of the world’s added value from knowledge- and technology-intensive industries, what is a realistic aspiration for the UK’s science and innovation system?

With that introduction, on to part 1 of this series of blogposts.

1. What problem(s) are we trying to solve?

1.1. Getting the UK economy growing again

The UK’s most serious economic problem now is its lack of productivity growth. As I’ve discussed many times here, after many decades in which productivity grew at a steady rate of a little more than 2% a year, this growth was arrested by the global financial crisis in 2008; since then productivity has been more or less stagnant. This translates directly into a stagnation in average wages, as the first plot shows – this is the painful backdrop to the current “cost of living crisis”.

Productivity and average wages since 2000. From Build Back Better: our plan for growth, HMT, March 2021

This stagnation almost certainly has more than one cause. There may be some general factors affecting all developed economies. Progress in some areas of technology may be slowing; the exponential growth in computer power that came from the combination of Moore’s law and Dennard scaling came to an end in the mid-2000s, for example. But, while productivity growth in all developed countries has slowed, the stagnation has been more pronounced in the UK than in any other advanced economy except Italy.

Structural effects specific to the UK include the rapid fall-off of North Sea oil and gas production since the early 2000s, and the unwinding of the bubble in financial services that burst in the global financial crisis. The combination of North Sea oil and the financial services boom may have led to a touch of “Dutch disease”, squeezing other sectors such as manufacturing. There have been difficulties in specific sectors that the UK has been specialised in – notably pharmaceuticals.

Policy choices over the last decade have also had an effect; macroeconomists emphasise the role of demand in driving productivity growth, so the fiscal consolidation of the early 2010s may itself have contributed. Other economists highlight the beneficial role of international trade in driving productivity growth, so the choice to impose additional frictions on international trade will create an additional headwind over the coming years.

But the fundamental driver of productivity growth is innovation, which finds ways of reducing the inputs needed to produce existing goods and services, and develops entirely new, highly valued goods and services. Not all innovation arises from formal research and development, but it is striking that the UK’s decline in productivity growth follows a period in which the overall R&D intensity of the UK economy declined substantially, and that the UK’s weak performance in productivity growth compared to international comparator countries is correlated with comparatively low R&D intensity.

In terms of productivity, the UK is a highly divided country. The Greater Southeast – London, the Southeast, parts of East Anglia – has an economy with a comparable level of productivity to other high performing Northern European economies, but most of the rest of the country more closely resembles Southern Italy, Spain or Portugal. Moreover, the UK’s large second tier cities – Birmingham, Manchester, Glasgow etc – instead of being drivers of the national economy, actually have levels of productivity below the national average.

Without a recovery of productivity growth, wages will continue to stagnate, living standards will fall, and it will be impossible for governments to provide public services of a quality that people have come to expect. It will not be possible for one corner of the nation to carry the economy of the whole country, so it should be a priority to raise the productivity of those parts that are currently lagging behind their potential – particularly the UK’s large, second tier cities. This is the pre-eminent economic driver that the development of science and innovation policy needs to focus on.

1.2. Managing the energy transition to net zero

All western economies and lifestyles depend on the availability of cheap, abundant energy – and this has been supplied by fossil fuels, which still account for around 80% of our energy supplies. But our dependence on fossil fuels has driven accelerating and potentially disruptive climate change. There’s widespread agreement about the need for our energy system to make a transition to one that stabilises the output of greenhouse gases, and in the UK a commitment to producing net zero greenhouse gases by 2050 is rightly enshrined in legislation. But it isn’t clear to me that policy makers and politicians fully understand the scale of this challenge.

The UK has made some good progress in decarbonising its energy economy, but naturally it has done the easy bits first. We’ve exported much of our heavy industry, shifted electricity generation from coal to gas, and we now get roughly half of our electricity from a combination of burning biomass, offshore wind and the continuing operation of legacy nuclear power stations.

What remains will be much more difficult. The majority of our energy use still comes from directly burning oil and gas, for transport and for domestic and industrial heat. We need to reduce demand through a much greater focus on energy efficiency, especially in heating. This will need a major drive to retrofit existing commercial and residential buildings, and a large-scale programme of building new, zero-carbon social housing, with the remaining heating needs being met by electric heat pumps.

The transition to electric vehicles needs to accelerate; heavy goods vehicles and shipping may need to transition to hydrogen or ammonia, while in my view long-haul aviation will only be viable powered by synthetic, zero carbon hydrocarbons (e-fuels). These new fuels themselves need to be synthesised in a zero carbon way – hydrogen by electrolysis using renewable energy and/or high temperature process heat from high temperature nuclear reactors, synthetic hydrocarbons from green hydrogen and carbon dioxide captured directly from the atmosphere.

In addition to totally decarbonising our electricity supply, we’ll need to substantially expand it to accommodate this transition from directly burnt oil and gas to electricity. The heavy lifting in the UK will likely be done by offshore wind, including floating offshore wind. In addition to intermittent renewables, we’ll need both more storage capacity and sources of zero-carbon firm power. For the latter, the choice is between continuing to burn gas, but with carbon capture and storage, and a bigger programme of nuclear new build. For why I think the latter route is both preferable and more likely, see my earlier post: Carbon Capture and Storage: technically possible, but politically and economically a bad idea. We should support fusion R&D (where the UK has a genuine comparative advantage) in case it works, though it isn’t likely, in my view, to make a substantial contribution to the 2050 net zero target.

This is a daunting list, combining some established technologies, some that exist but aren’t yet cheap enough or deployable at sufficient scale, and some that exist only in principle. The scale of the transition is wrenching, and like all big changes, it will produce winners and losers, both at a national level and geopolitically. We need innovation to drive down the cost of the new, cleaner technologies to the point at which economic forces drive the transition more than political ones.

1.3. Keeping the nation secure in a more dangerous world

We now have a full-scale European war involving a nuclear-armed adversary, reminding us forcefully that one of the primary duties of a state is to keep its people secure. The 2021 Integrated Review of Security, Defence, Development and Foreign Policy reasserted the importance of science and technology as a source of strategic advantage and as a central part of national security. Although the war in Ukraine has called into question some of the assumptions of the Integrated Review, with a painful reminder that the security of our European neighbourhood can’t be taken for granted, this emphasis on security as a key motivation for the state’s involvement in science and technology will surely only strengthen.

The Ukraine war has also reminded us that the geopolitics of energy never went away, and that the resilience of the material base of the economy and our lives can’t be taken for granted. Since the end of the Cold War and the subsequent deepening of globalisation, we have become complacent about the degree of our dependence, as a small country, on imports for energy, food, materials and finished goods. Of course, our prosperity depends greatly on our international trade, so a North Korea-style Juche-UK policy would be ridiculous – but the pandemic reminded us of our dependence on other countries for some essential items like PPE and pharmaceutical precursors, as well as teaching us how sensitive our complex global supply chains have become, with the effects of disruptions in obscure corners rippling out worldwide.


UK government research and development spending by socio-economic objective, for selected sectors. Data: Eurostat (GBARD)

After the end of the Cold War, one of the ways in which we cashed in the “peace dividend” was by reducing the amount of R&D devoted to defence, as my last post discussed in more detail. We’re in a different world now, so, as I wrote there, priorities for the government’s R&D spending will inevitably and rightly be different: more emphasis on food and energy security, rebuilding sovereign capabilities in some areas of manufacturing, and a return to higher spending directly on defence R&D, including responses to new cybersecurity threats.

1.4. Keeping an ageing population healthy

The pandemic has been a traumatic experience for the nation, with more than 175,000 deaths to date. As the UK went into the pandemic, there was some optimism that its strong position in life sciences would leave it better placed than other countries to weather the crisis. As it turned out, its record was mixed. On the one hand, there was a successful rapid vaccination programme; on the other, the pandemic was unforgiving in the way it revealed and exacerbated widespread health inequalities.


Life expectancies at birth for males and females in 2020 for England. These are not predictions of how long a baby born in 2020 will live; instead they represent an estimate of the average number of years a baby born in 2020 would live if they experienced the age-specific mortality rates for 2020 throughout their life. Data from Public Health England.

A more complete reckoning of the strengths and weaknesses of the UK’s pandemic response awaits a full inquiry. The plot shows the impact of the pandemic on life expectancies. What is interesting, though concerning, is that even before the pandemic, the rate of increase in life expectancies that we’d seen in the 1980s, 90s and 2000s had already begun to stall after 2010.

The paradox here is that this slow-down in the rate of improvement of life expectancy follows soon after the substantial increase in R&D devoted to health. As always, there are probably many factors at play here. But there is a conceptual muddle, in my view, about the way the UK thinks about its “Life Sciences Strategy”. The problem is that this strategy has two, largely separate, goals, which are sometimes in tension.

One objective of a life sciences strategy is to do the research needed to improve the way healthcare is delivered to the UK’s population, and to address the broader determinants of the health of the public. The other is to support the UK’s pharmaceutical and biotechnology industries. These are important areas of comparative advantage for the UK economy, strong exporting sectors. But in recent years, as I’ve already mentioned, productivity growth in the pharmaceutical sector has markedly slowed down, and given the UK’s specialisation in pharma, this underperformance has made a material contribution to the UK’s overall productivity problem, as demonstrated by this recent research.

So while it is an entirely appropriate piece of industrial strategy to support the pharmaceutical and biotechnology sectors, it’s important also to think about the wider innovation needs of the health and social care system. Nor should we expect the pandemic we have just been through to be the last, so we should pay some attention to making sure we’re better prepared for the next one.

As other priorities for R&D – like national security and net zero – become more pressing, it is going to be more important than ever to be clear about how we set our strategic priorities.

To come next: part 2 – Some overarching questions about the UK’s research & innovation system.

Science and innovation policy in a new age of insecurity

It’s conventional to date the end of the Cold War to the break-up of the USSR, in 1991. Around that time, people started talking about a “Peace Dividend” – the economic benefits that would come as economies like the UK stood down from the partial war footing that they’d been operating under for the previous half-century. In the early 1980s, the UK was spending more than 5% of its GDP on defence; in the late 80s there was an easing of international tension, so by 1990 the fraction had already dropped to 4%. The end of the Cold War saw a literal cashing in of the peace dividend, with a fall in defence spending to around 2.5% in the 2000s. The Coalition’s austerity policies bore down further on defence, with its share of GDP reaching a low point of 1.9% in 2017 [1].

But I’d argue that there was more to the post-cold-war peace dividend than the simple “guns or butter” argument, by which direct spending on the military could be redirected towards social programmes or to lower taxation. In the apparent absence of external threats, there is less pressure on governments to worry about the security of key inputs to the life of the nation. Rather than worrying about the security of food or energy supplies, there was a conviction that these things could be left to a globalising world market. Industries once thought of as “strategic” – such as semiconductors – could be left to fend for themselves, and if that led to their dismemberment and sale to foreign companies (as happened to GEC/ Marconi), then that could be rationalised as simply the benign outcome of the market efficiently allocating resources. Finally, in the apparent absence of external threats, for a government with an ideological prior for reducing the size of the state, a general run-down of state capacity seemed, if not an actual goal, to be an acceptable policy side-effect.

Things look different now. The invasion of Ukraine has brought Russia and NATO close to direct confrontation, and the resulting call for economic sanctions against Russia has shone a spotlight on the degree to which Europe has come to depend on Russia for supplies of oil and gas. This all takes place against the background of what’s starting to look like a chronic world polycrisis. As the effects of climate change become more obvious, we will see patterns of agriculture disrupted, stress on water supplies, and people and communities driven from their homes. We’ll have to be more realistic about confronting the scale and speed of the necessary transition of our energy economy to net zero greenhouse gas emissions, and the economic and political disruptions that this will lead to. And the pandemic that’s dominated our lives for the last couple of years isn’t likely to be the last.

The assumptions that became conventional wisdom in the benign years of the 1990s are now obsolete. I’m not convinced that politicians and opinion formers fully understand this yet. We’re not starting from a great place; the UK’s economy is already well into a second decade of very poor productivity growth, as I’ve been pointing out here for some time. This has led to a long period in which wages have stagnated, and this is about to combine with a burst of inflation to produce an unprecedented drop in people’s real living standards. We are going to see increases in defence spending; there is more talk of resilience. But many senior politicians seem to speak more of a hope of returning to those benign years, clinging to old nostrums rather than showing a real willingness to rethink political and economic principles for these new times.

I don’t have any confidence that I know what those new political and economic principles should be. Here I’m just going to make some observations relevant to the narrower world of science and innovation policy. How might this new environment affect the priorities the state chooses for the science and innovation it supports?

UK government research and development spending by socio-economic objective, for selected sectors. Data: Eurostat (GBARD)

My plot shows the way the division of UK government R&D between different socio-economic goals has evolved since 2004 [2]. The big story over the last twenty years has been the run-down of defence R&D, and the increase in R&D related to health – another under-appreciated dimension of the peace dividend. In the heyday of the UK’s postwar “Warfare State” (as the historian David Edgerton has called it [3]), defence accounted for more than half of government R&D spending. As late as 2004, defence still accounted for 30% of the government’s R&D expenditure, but by 2019 this fraction had fallen to 11%.

As for those other sectors that the warfare state would have regarded as of strategic importance – energy, agriculture and industrial production – by the mid-2000s, their shares of R&D had been run down to a few percent, or, in the case of energy, a fraction of a percent. Since then there has been some recovery – in the mid-2000s, the government’s chief scientific advisor, Sir David King, very much aware of the issue of climate change, was instrumental in restoring some growth in energy R&D from the very low base it had reached. Meanwhile industrial strategy has made a slow and halting comeback, with increased government funding to agencies such as Innovate UK and collaborative initiatives in support of the aerospace and automotive sectors.

Now, the security of the nation, using that word in its broadest sense, no longer seems something we can take for granted. This will inevitably and rightly affect priorities for the government’s R&D spending. More emphasis on food and energy security, with a focus on driving down the cost of the transition to net zero; rebuilding sovereign capabilities in some areas of manufacturing; as well as a return to higher spending directly on defence R&D (including new threats to cybersecurity); these shifts seem inevitable and probably need to happen on a faster time-scale than many will be comfortable with. We won’t return to the world of the Warfare State, nor should we, but I don’t think the institutions we currently have are the right ones for a science and innovation policy driven by security and national resilience.

[1] Defence as a fraction of GDP: SIPRI
[2] GBARD data from Eurostat.
[3] David Edgerton, Warfare State: Britain 1920 – 1970 (CUP). Historic defence R&D figures quoted on p259.

Levelling Up Research and Development

The government published its long-awaited “Levelling Up” White Paper on February 2nd. This is a much expanded version of a piece I wrote for Research Fortnight, “Levelling Up R&D is about spreading power as well as money”, in which I look at the implications of the White Paper for research and development.

What do the government mean by “levelling up”?

It’s fair to say that the Levelling Up White Paper, published after a long delay last week, struggled to compete with other political events for the front pages, and what comment there was about it focused, somewhat unfairly, on its great length, its thumbnail history of 9 millennia of urbanism, and its invitation to compare our Northern and Midland cities with renaissance Florence. But for those interested in the UK’s research and development landscape, it would be wrong to underestimate its significance.

For those who have been wondering what “levelling up” actually means, the White Paper does offer some concrete answers. Correctly, in my view, it puts the UK’s regional disparities in productivity centre stage. It offers some detailed analysis of the UK’s regional problem; for all that “industrial strategy” is not a “brand” currently in fashion with the government, the mark of the prematurely terminated Industrial Strategy Council, as transmitted through the person of Andy Haldane, now head of the government’s “Levelling Up Task Force”, is clear. In addition to much discussion of the data, there is an analytical framework based on a consideration of different types of capital, their unequal distribution across the country, and – crucially – a discussion of the vicious circles that can lead places into a self-reinforcing decline.

The role of research and development in levelling up

Research and development, together with skills, contribute to a place’s “intangible capital”, and, echoing an argument that I, Tom Forth, and others have been making for some time, the connection is made between the unbalanced distribution of government R&D expenditure across the country and regional disparities in economic performance. Under the overarching goal of boosting “productivity, pay, jobs and living standards by growing the private sector, especially in those places where they are lagging”, one of the 12 “Levelling Up Missions” promises that public investment in R&D outside the Greater South East will grow by 40% by 2030, and by one third in the current spending review period (i.e. by FY24/25). The aspiration is that this increase in public sector R&D should “crowd in” roughly double the amount of private sector R&D.

Although it’s been pointed out that in many areas the Levelling Up White Paper isn’t supported by new money, this is not actually true for R&D. The October 2021 Comprehensive Spending Review did commit the government to a substantial rise in government R&D spending – about £5 billion, taking spending from a bit less than £15 billion to £20 billion. The commitment to a spending uplift of a third outside the Greater SE doesn’t, therefore, represent a real rebalancing of the current situation, and it actually represents a dilution of the commitment of the October Spending Review, which promised that “an increased share of the record increase in government spending on R&D over the SR21 period is invested outside the Greater South East”.
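
To see why, here is a minimal back-of-the-envelope sketch, using the approximate budget figures quoted above; the starting share of spending outside the Greater South East is purely an illustrative assumption, not an official figure.

```python
# Back-of-the-envelope check: if total government R&D spending rises from
# roughly £15bn to £20bn (about a one-third increase), and spending outside the
# Greater South East also rises by one third, the regional share is unchanged.
# The 45% starting share below is an illustrative assumption, not a real figure.

total_before = 15.0           # £bn, approximate pre-SR21 annual spend (from the text)
total_after = 20.0            # £bn, approximate SR21 commitment (from the text)
outside_share_before = 0.45   # illustrative assumption only

outside_before = outside_share_before * total_before
outside_after = outside_before * (4 / 3)   # the promised one-third uplift

print(f"Share outside the Greater South East before: {outside_before / total_before:.1%}")
print(f"Share outside the Greater South East after:  {outside_after / total_after:.1%}")
# Both come out at ~45%: the outside-GSE component grows by about the same
# factor as the whole budget, so the geographical balance doesn't shift.
```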

Given that the CSR commitment is reiterated in the White Paper, perhaps we should regard these targets as representing a floor to the ambition, and they do at least mean that the current imbalances won’t get any worse. To be fair to those managing science funding agencies, it should also be recognised that, given that they have already taken on substantial multi-year commitments, changing the overall distribution of funding isn’t something that can happen very quickly.

Where is this new money going? It’s important to remember that, although academics tend to focus on the research councils, much public R&D is carried out by government departments. When we look at where the uplift in funding is concentrated, we find that the core UKRI budget, comprising the research councils and block grants to universities, does have an increase of £1.1bn between 21/22 and 24/25, a 23% increase in cash terms. In percentage terms, though, the really big winners are departments like the Ministry of Defence and the Department of Health and Social Care, and the agency Innovate UK, which between them see an increase of 64%, representing a £2.7bn cash uplift.

If the promised one third increase in R&D spending outside the Greater Southeast is to be funded from the uplift committed in the spending review, much of the heavy lifting will need to be done by these more mission focused and applied funding streams. However, it is fair to say that the details of how this commitment will be met are not yet fully fleshed out in the White Paper.

How UKRI can support levelling up (and why it should)

UKRI gets a new organisational objective, to “Deliver economic, social, and cultural benefits from research and innovation to all of our citizens, including by developing research and innovation strengths across the UK in support of levelling up”, and an instruction to increase consideration of local growth criteria and impact in R&D fund design. It will be interesting to see how the organisation responds to this new mandate. Some may feel that UKRI shouldn’t take explicit measures to rebalance the distribution of R&D across the country, as this might compromise its primary commitment to funding excellence in science. I would certainly agree that UKRI needs to avoid funding poor quality research, but I’d make two points about the question of excellence.

The first is to point out that, for many in the research community, the most prominent funder of purely excellence-based research for the UK at the moment isn’t part of UKRI at all, but is the European Research Council, with its mission to “support investigator-driven frontier research across all fields, on the basis of scientific excellence”. The ERC delivers this mission through the rigorous peer-review, by acknowledged international experts, of proposals whose quality is driven up by healthy competition from a whole continent’s worth of leading scientists. Much of UKRI funding, in contrast, is influenced by strategic priorities set by the research councils themselves. Of course, the UK’s ongoing participation in the ERC, like other parts of the EU Horizon programme, is now under serious question, but that’s another story. Perhaps the most important lesson we can take from the success of the ERC in supporting excellence, though, is that it is entirely people-focused. Places aren’t excellent, people are.

The second point is to note that UKRI’s current aspiration is to be a steward of the whole research system. This stewardship certainly should include supporting excellent researchers wherever they are to be found, but it should also involve creating the environments that support those researchers. Research councils have in the past, quite correctly, taken on some responsibility for building those environments, so it is a natural extension of those activities to widen the range of places which do provide those environments. They can do this by building capacity, and by developing partnerships.

UKRI has a responsibility for maintaining the capacity of the UK system to do good research. This starts with helping to provide the infrastructure for that research, whether that is funding for strategic equipment in universities, (in England) the block grant support to universities through quality-related (QR) funding from Research England, or the creation and support of entire research institutes. Research councils have rightly intervened to maintain the viability in the UK of some fields of research deemed strategically important. And there is a continuous, entirely justified, commitment to support the talent pipeline for research; supporting good training environments for PhD students has become an increasingly important part of the research councils’ business. It is a natural extension of UKRI’s stewardship of the UK’s future research capacity to give more weight to the geographical dimension of building that capacity.

Partnership remains another very important dimension of UKRI’s work, both internal and external. The whole point of creating UKRI as a single organisation was to promote more partnership working between the component parts of the organisation, the research councils, Research England, and Innovate UK. Research councils like EPSRC are rightly proud of the large proportion of their grants that involve some partnership with industry, and high profile recent initiatives include EPSRC’s Prosperity Partnerships, large scale research programmes with matched funding from industry, to deliver a research agenda co-created by academic and industrial researchers.

It’s welcome that the most recent prosperity partnership call offers an invitation to articulate the degree to which these partnerships support place-based outcomes, such as attracting inward investment to specific regions or otherwise supporting regional economic growth. It would be a natural extension to include regional bodies more explicitly as partners for research council supported research, and as co-creators of research strategy, in the way that R&D intensive companies currently engage with UKRI.

Innovate UK has different drivers from the research councils; as an explicitly “business led” agency, one might expect there to be some correlation between the regional distribution of business R&D and Innovate UK’s investments. The relationship is shown in my figure – regions to the left of the line receive more Innovate UK money than you would expect from a simple correlation with business R&D, regions on the right receive less.


A comparison of Innovate UK expenditure with business R&D for 2018. Innovate UK data from UKRI, regional business R&D data from ONS BERD statistics.
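
As a rough sketch of the comparison behind this plot, one can fit a simple line of Innovate UK spend against regional business R&D and ask which regions sit above or below it. The real inputs are the Innovate UK figures from UKRI and the ONS BERD statistics; the numbers below are placeholders purely to show the calculation.

```python
import numpy as np

# Sketch of the comparison in the figure: fit a simple least-squares line of
# Innovate UK spend against regional business R&D (BERD), then see which
# regions sit above or below it. The real inputs are UKRI's Innovate UK data
# and ONS BERD statistics; the values below are placeholders, not real data.
regions = {
    "Region A": (3000.0, 60.0),   # (business R&D £m, Innovate UK spend £m) - placeholders
    "Region B": (1500.0, 45.0),
    "Region C": (5000.0, 70.0),
}

berd = np.array([v[0] for v in regions.values()])
iuk = np.array([v[1] for v in regions.values()])
slope, intercept = np.polyfit(berd, iuk, 1)    # simple linear fit

for name, (b, actual) in regions.items():
    expected = slope * b + intercept
    more_or_less = "more" if actual > expected else "less"
    print(f"{name}: {more_or_less} than the fit predicts "
          f"(actual £{actual:.0f}m, expected £{expected:.0f}m)")
```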

This distribution doesn’t immediately suggest a straightforward explanation. One might wonder whether it reflects industrial sectors that are particularly well organised to receive support from Innovate UK – perhaps the importance of automotive for the West Midlands and aerospace for the South West and East Midlands is reflected in the above average Innovate UK support for those regions.

Another factor in determining these spending patterns is the location of the Catapult Centres. These receive core funding directly from Innovate UK, as well as participating in joint projects with industry that receive partial funding from Innovate UK, so the regional distribution of Innovate UK funding probably to some extent reflects the location of Catapult Centres.

It’s possible that the high London figure to some extent reflects spending being registered at Head Offices rather than in the R&D centres where the work is actually carried out. I don’t have an obvious explanation for the relatively low level of spending in the South East and East.

The expectation of a correlation between existing business R&D spend and Innovate UK investments rests on the idea that Innovate UK should be led by the business R&D landscape as it is, rather than trying to shape it. But if the goal of “levelling up” is to increase productivity in underperforming regions, then perhaps the goals of innovation policy should include the use of applied R&D, together with other interventions to promote innovation diffusion and workforce development, explicitly to develop innovation and manufacturing capacity, as Eoin O’Sullivan and I have argued in our recent submission to the Nurse review. As we outline in our paper, this could be done by Innovate UK through the Catapult Network, but this would require some explicit modifications in their mission and in the selection criteria for new Catapults.

One programme run by UKRI in recent years is designed to build regional capacity in this way, through partnership with research organisations and industry in particular places. This is the “Strength in Places Fund”, administered as a partnership between Innovate UK and Research England. I believe this scheme has been a success in the collaborations it has generated, although the bureaucracy surrounding it has been frustrating. It’s very disappointing that the White Paper lacked any commitment to continue this scheme, currently the only explicitly place-based funding instrument run by UKRI. Hopefully this will be remedied in the ongoing detailed discussions of the CSR settlement; if not, it will inevitably be interpreted as a signal of UKRI’s lack of commitment to addressing regional R&D imbalances.

R&D for levelling up health inequalities

Turning to non-UKRI funding streams, one that does receive a very substantial uplift is the National Institute for Health Research, the research arm of the Department of Health and Social Care. The White Paper, entirely correctly, draws attention to the shocking disparities in health outcomes across the country, which amount to about a decade of difference in life expectancy between the most and least prosperous parts of the country, and sets as another of the 12 “Levelling Up missions” the goal of narrowing this gap. I believe these health inequalities are not just morally unacceptable in a prosperous country; they are themselves direct contributors to productivity gaps. Further details of how this aspect of levelling up will be delivered will have to wait until another White Paper, promised for later in the year.

Narrowing these gaps should be a key focus of the National Institute for Health Research – yet of all the research funding agencies, this is the one whose funding is most concentrated, not just in the Greater Southeast, but specifically in London. There is in the White Paper a commitment that NIHR will “bring clinical and applied research to under-served areas and communities in England with major health needs to reduce health disparities”, but the targets are currently vague. This refocusing of NIHR on the urgent problem of health inequality needs to go further and faster.

Innovation policy to support levelling up needs to be co-created with cities, regions and nations

But making a material change in the distribution of R&D funding using existing mechanisms will always be a hard and slow process. Tom Forth and I, in our 2020 NESTA paper “The Missing Four Billion”, argued that to make a real difference, it will be necessary to devolve significant funding to nations, cities and regions. To make an impact on productivity, R&D interventions need to work with the grain of the existing regional economic base, and even the best central government departments and agencies, in Whitehall or Swindon, can’t be expected to have the local knowledge and develop the partnerships that would make this work. Ultimately, developing an effective regional innovation strategy is a matter of finding out who is doing the innovating, and helping them do more of it.

On the other hand, the necessary organisational and analytical capacity to make good decisions about innovation doesn’t always exist, or isn’t fully developed, in our cities and regions. Our answer to this dilemma was the idea of an “Innovation Deal”, in which central government works with cities and regions to develop this capacity, in return for substantial devolution of innovation funding.

The White Paper does take a tentative step in this direction. Three “Innovation Accelerators” have been announced, with £100m of funding over three years going to three pilot areas, Greater Manchester, the West Midlands, and Glasgow City-Region. The idea is that national and local government, together with industry and R&D institutions in those cities, will work together to develop projects to improve the strength of the existing R&D base and maximise its economic impact, to attract new investment from international companies at the technological frontier, and to improve the diffusion of technology into the existing business base.

In Greater Manchester, we’ve been preparing for such a programme. The organisation “Innovation Greater Manchester”, which brings together the private sector, universities and the Mayoral Combined Authority, has been developing a pipeline of rigorously tested investment opportunities aimed at driving productivity across the whole of GM. This needs to support not just the city centre economy, where digital and creative industries are currently thriving, but the economies of GM’s outlying boroughs like Rochdale, Bury and Oldham, which are amongst the most deprived communities of the Northwest. Here, initiatives like the Advanced Machinery and Productivity Institute (supported by the Strength in Places Fund) will help existing innovative manufacturing businesses to develop and grow. Innovation GM will work with central government to develop its “Innovation Accelerator” as an exemplar of locally informed innovation policy.

R&D is an important element of the productivity-enhancing investments that should be at the centre of the levelling-up agenda, and it’s right that the Levelling Up White Paper sets as one of its missions the need to increase the R&D intensity – both public and private – of those parts of the country currently not fulfilling their economic potential. Work does remain to translate some of the high level commitments on R&D spending into changes in the way government departments and agencies spend their R&D budgets.

In addition to this, I believe that co-creation – and ultimately devolution – of innovation programmes with city-regions will be important, to incorporate local knowledge about the existing economy, and ultimately to assign local responsibility for the outcomes. Innovation Accelerators are a good first step to develop the institutional landscape for this to work, and I hope that this initiative can soon be rolled out to include other areas of the UK.

Will the government’s interest in regional economic disparities be sustained for the long-term?

Finally, one of the most telling sections of the Levelling Up White Paper is a history of 100 years of local growth policy, with the comment “spatial policy in the UK has, by contrast, been characterised by endemic policy churn…. By some counts, there were almost 40 different schemes or bodies introduced to boost local or regional growth between 1975 and 2015, roughly one every 12 months.” Surely no-one can argue with the White Paper’s call for policies to be applied consistently at sufficient scale over the medium to long term. Addressing the UK’s fundamental disparities in regional economic performance will be a project of decades, so it’s realistic that the White Paper defines milestones for 2030.

But what will be the longevity of this White Paper itself? Of course, 2030 is beyond the life of this government – but given the prevailing political instability, some may doubt that the “Levelling Up” agenda will even last the year. It’s odd that a central manifesto commitment of a government elected with an 80 seat majority should be in doubt, but it’s not clear that enthusiasm for the agenda is universally shared across the ruling party. Many influential people and institutions doubt that the reduction of the UK’s regional economic disparities is possible or even desirable. In fact, many people and institutions benefit from the current state of affairs, and there’s a surprisingly large constituency for economic stagnation.

So I wouldn’t be surprised if some people in government and its agencies will be tempted to drag their feet, in the hope that if they wait long enough the entire “levelling up” craze will go away. Naturally, I think this would be wrong in principle – the role of regional economic disparities in the UK’s current economic difficulties, and the resulting societal instability, has become more and more obvious and widely acknowledged. I suspect that such foot-dragging would be politically unwise too.

What’s missing in the UK’s R&D landscape – institutions to build innovation capacity

The UK government has commissioned a new review of the institutional landscape in which research, development and innovation (RD&I) is carried out, led by Sir Paul Nurse. In response to an invitation for views, Eoin O’Sullivan, from Cambridge University’s Institute for Manufacturing, and I submitted this brief paper:
The role of intermediate RD&I institutes in building regional and sectoral innovation capabilities (PDF).

Our paper argues that what’s underdeveloped in the UK’s research landscape are research and development institutes whose mission goes beyond just doing applied research, to encompass a wider range of activities to build the innovation and manufacturing capabilities of regional economies that are currently underperforming. There are many international examples of this kind of institution, which carry out workforce development and innovation diffusion functions as well as applied research, and there are lessons from these other countries that the UK could usefully learn.

Here’s the first section of our paper:

The place of intermediate institutions in the UK’s RD&I landscape

National innovation systems have a complex landscape of different types of research institutes with different missions and goals. These include both research universities and institutes devoted to fundamental science, and public sector research establishments (PSREs), which support government strategic goals. A majority of research, development and innovation takes place in the private sector, in firms’ own laboratories and in for-profit contract research organisations. It is this private sector innovation that most directly drives productivity growth. Public and private sector R&D can be connected in intermediate RD&I institutes, which carry out more applied research, often as public/private partnerships, as well as taking a wider role in building regional and sectoral private sector capability, through the promotion of innovation diffusion and skills development.

In the absence of government intervention, the private sector will systematically invest less in R&D than would be optimal for the whole economy, due to the inability of firms to capture all of the benefits. This market failure provides the justification for government investment in R&D. In many successful innovation economies, intermediate RD&I institutes play a vital role. Examples include the Fraunhofer Institutes in Germany, the Industrial Technology Research Institute (ITRI) in Taiwan, and VTT in Finland.

In the UK, basic research is carried out in a strong university base, supplemented by some stand-alone institutes, such as the Laboratory of Molecular Biology at Cambridge and the Crick Institute in London. The PSRE sector has diminished in size over the past few decades, because of privatisations and absorption of some institutes into universities, but it retains some strong institutions such as the National Physical Laboratory and the Meteorological Office.

The perceived weakness of the UK’s landscape in intermediate research and innovation institutions led to the development of the Catapult Network in the 2010s, modelled in some respects on Germany’s Fraunhofer network, though not as yet commensurate with it in scale.

Discussion of the purpose of intermediate RD&I institutions in the UK, such as the Catapult Network, has focused on their role in carrying out applied research in collaboration with industry. The purpose of this note (which summarises the argument of a longer working paper currently under preparation for the Productivity Institute) is to draw attention to the wider range of functions that such institutions carry out in other nations, and in particular their role in supporting economic development in regions with lower productivity.

The rest of the paper can be found here: The role of intermediate RD&I institutes in building regional and sectoral innovation capabilities (PDF).

Video of my lecture on “levelling up” R&D

On 9 February I gave a lecture at the think-tank Policy Exchange, on the subject “Can we level up research and innovation?”.

The talk had three parts:

  • On the relationship between Research and Development, productivity and regional growth: why it’s important to level up R&D;
  • On levelling up R&D in the White Paper: what government has committed to – and what remains to be done;
  • On levelling up R&D in practice in a city region: Innovation Greater Manchester and the Innovation Accelerator pilot.

The lecture can be watched on YouTube here, and the slides can be downloaded here: (PDF) Levelling up R&D.

    When the promise of economic growth is not fulfilled

    It’s been widely reported that the government is considering lowering the earnings threshold at which people need to start paying back their student loans. Let’s leave aside, for a moment, the question of whether it’s good economic sense for some graduates, at relatively early stages of their careers, to be facing very high effective marginal tax rates, or indeed bigger questions of the fairness of the current split in tax burden between young and old. The fundamental reason this change is being considered is that, contrary to the expectations of economists and the experience of the rest of the post-war period, average wages in the UK have been stagnant for a decade. Worsening terms for student loans are just one example of the way the unfulfilled promise of continued economic growth is starting to have depressing and unwelcome real-world effects.

    The key number in understanding the UK’s byzantine student finance system is the so-called RAB charge. When a student goes to university, the Government fronts up a fee to the university – currently £9250 a year (except in Wales) – and in some circumstances advances a loan for living expenses. In return, the student agrees to repay the loan in monthly instalments that depend on their income, with any unpaid portion of the loan being written off after 30 years. So, the fraction of the money the government doesn’t get back depends on the average level of wages, projected 30 years into the future. The less wages rise, the higher the fraction of the loan the government doesn’t recover. This fraction is known as the RAB charge, and is counted as a cost in the government’s accounts.

    When the current student loan scheme was introduced by the Coalition government in 2012, the RAB charge was expected to be about 30%. As the years went on, this number increased: for 2014, it was estimated at 45%, and by 2020, the RAB charge stood at 53% – the government expected less than half of the student loans advanced that year to be repaid. The total advanced by the government under the student loan scheme that year was £19.1 billion, so under the original assumptions of the scheme, the cost to the government would have been £5.7 billion. Instead, under the current assumptions, the cost is now more than £10 billion, largely due to the failure of the average wage growth anticipated in 2012 to materialise.
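
    As a minimal sketch of this arithmetic, using the approximate figures quoted above: the expected cost to government of a year’s loans is roughly the RAB charge multiplied by the total advanced.

```python
# Sketch of the RAB-charge arithmetic: the expected cost to government of a
# year's student loans is roughly the RAB charge (the fraction not expected to
# be repaid) multiplied by the total advanced that year. Figures from the text.

def expected_cost(total_advanced_bn, rab_charge):
    """Expected write-off cost, in £bn, for one year's cohort of loans."""
    return total_advanced_bn * rab_charge

loans_advanced_2020 = 19.1   # £bn advanced under the scheme that year

print(f"At the original ~30% RAB charge: £{expected_cost(loans_advanced_2020, 0.30):.1f}bn")
print(f"At the 2020 estimate of 53%:     £{expected_cost(loans_advanced_2020, 0.53):.1f}bn")
# Roughly £5.7bn versus £10.1bn. The difference is driven largely by weaker
# wage growth than was assumed in 2012, which lowers projected repayments
# over the 30-year life of the loans.
```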

    Much of the discussion around the cost of the student finance system now revolves around the calculated return to individual degrees, by subject and institution. The creation of a large data-set linking subject studied to income achieved makes it possible to identify those degrees that provide the highest and lowest financial returns. This is fascinating and useful data, but there’s a danger of misinterpreting it, to suggest that the problem of the high cost to the government of the current HE funding system is the result of bad choices by individuals, and of universities offering “poor value” degrees. Instead, the fundamental issue is a collective one, of the economy’s failure over the last decade to deliver the progressively rising wages we had come to expect in the post-war period.

    It’s clear from the data that if an able individual wants to maximise their earning power, they should do a degree in economics rather than, say, music. But from that, it doesn’t follow that the nation would be more prosperous if every student studied economics. There is an issue about how to find the optimum distribution of subjects studied that matches the changing needs of an economy, but the first order determinant of the overall cost of the HE funding system is the average wage that the economy can sustain. The problem we have isn’t low value degrees, it’s a low value economy.

    The reason wages have been stagnant is straightforward – the regular, year-on-year increases in productivity we had become accustomed to in the post-war period stopped around the time of the global financial crisis, and have not yet returned. Labour productivity measures the value added, on average, by an hour of work, so given a relatively constant split between the reward to capital and labour, we would expect labour productivity and average wages to track each other quite closely. My first plot – taken from the Treasury’s March 2021 Plan for Growth – shows that this relationship has indeed held quite closely in the UK over the last twenty years.

    The relationship between labour productivity (output per hour) and total labour compensation. From HM Treasury’s Build Back Better: our plan for growth, March 2021.

    As the Treasury said in the March 2021 Plan for Growth, “In the long run, productivity gains are the fundamental source of improvements in prosperity. Productivity is closely linked to incomes and living standards and supports employment. Improvements in productivity free up money to invest in jobs and support our ability to spend on public services.” The corollary of this is that, without productivity growth, we see stagnant living standards, and tighter fiscal conditions, leading to poorer public services. The story of the swelling RAB charge for the student finance system is just one example of the malignant effect of productivity stagnation on public finances.

    It’s conventional wisdom to look back to the 1970s as the nadir of UK economic performance. But measured by productivity growth, the last decade has been much worse. From 1971 to 2006, labour productivity grew at a remarkably steady rate of about 2.3% a year – and this provided the material for sustained growth in living standards. But since 2010, trend growth has remained stubbornly low – at less than 0.4% a year.

    Labour productivity since 1970, with the latest prediction from the Office for Budget Responsibility. Sources: ONS, OBR Economic and Fiscal Outlook October 2021.
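
    To get a feel for what the difference between those two trend rates compounds to, here is a quick arithmetic sketch using the rates quoted above; it is a comparison of trends, not a forecast.

```python
# What the two trend growth rates quoted above compound to over a decade.
# This is just arithmetic on the stated rates, not a forecast.

def cumulative_growth(annual_rate, years):
    """Total proportional growth from compounding an annual rate over `years` years."""
    return (1 + annual_rate) ** years - 1

pre_crisis_trend = 0.023   # ~2.3% a year, 1971-2006
post_2010_trend = 0.004    # <0.4% a year since 2010

for label, rate in [("1971-2006 trend", pre_crisis_trend), ("post-2010 trend", post_2010_trend)]:
    print(f"{label}: productivity {cumulative_growth(rate, 10):.1%} higher after a decade")
# Roughly 25% versus 4%: after ten years the old trend delivers about a fifth
# more output per hour worked than the post-2010 trend does.
```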

    What are the chances of this dismal trend being broken? The forecasts of the Office for Budget Responsibility are based on the expectation of a modest upturn in productivity. The OBR has been predicting a recovery in productivity growth every year since 2010, and this recovery has so far failed to materialise. I’ve written a lot about productivity before (see e.g. The UK’s top six productivity underperformers, Should economists have seen the productivity crisis coming? and Innovation, research and the UK’s productivity crisis), and I’ll surely return to the subject. In the meantime, I don’t understand why the OBR think it will be different this time, particularly given the additional headwinds the economy now faces.

    Many – if not most – of the big economic transactions made, both by individuals and by governments, amount to shifting saving and consumption backwards and forwards in time. Whether it’s individuals getting a mortgage on a house, or saving for a pension, or governments borrowing money now on the basis of the expectation of future tax income, we are making assumptions about how our future income, at a personal or national level, will grow. Governments don’t repay the national debt – they hope the economy will grow fast enough to keep the interest payments manageable. If our assumptions about income growth turn out to be over-optimistic, the ramifications are likely to be unpleasant. The slow unravelling of the 2012 student finance settlement is just one example.

    With the Commons Science Select Committee on “The role of technology, research and innovation in the COVID-19 recovery”

    The House of Commons Select Committee on Science and Technology visited Manchester on 21st September, and I was asked to give oral evidence, with others, to its inquiry on “The role of technology, research and innovation in the COVID-19 recovery”. The full, verbatim, transcript is available here; here are a few highlights.

    My opening statement

    Chair: Perhaps I can start with a question to Professor Jones. Everybody around the world associates Manchester with technology over the ages, but if we look at the figures, the level of research and development spending investment, in the north-west at least, is below the national average. Give us a feeling for why that might be and whether that is inevitable and reflects things that we cannot help or what we should be doing about it, bearing in mind that we will be going into a bit more detail later in the session.

    Professor Jones: On the question of the concentration of research, this is something that has happened over quite a long time. The figure that I have in my mind is that 46% of all public and charitable R&D happens in London and the two regions that contain Oxford and Cambridge. There is no doubt—it is not just a question of Manchester—that the distribution of public research money across the country is very uneven.

    That has been a consequence partly of deliberate decisions—there has been a time when the idea has been, particularly when funding seemed tight, that it would be better to concentrate money in a few centres—but when it is given out competitively without regard for place, there is a natural tendency for concentration. Good people go to where existing facilities are. That allows you to write stronger bids and in that case there is a self-reinforcing element. That process is played out over quite a long time. It has got us to the situation of quite extreme imbalance.

    I have been talking there about public R&D. It is very important to think about private R&D as well. There is an interesting disparity between where the private sector invests its R&D money and where the public sector does. One finds places like Cambridge, which are remarkable places, where there is a lot of public sector R&D but then the private sector piles in with a great deal of money behind that. Those are great places that the country should be proud of and encourage. Particularly in the north-west, in common with the east midlands and west midlands, too, the private sector is investing quite a lot in R&D, but the public sector is not following those market signals and, in a sense, exploiting what in many ways are innovation economies that could be made much stronger by backing that up with more public funding.

    On excellence and places

    Graham Stringer: This is my final question on this section. The drift of great scientists to the golden triangle has been going on for a long time. Rutherford discovered the nucleus of the atom a quarter of a mile down the road in what is now a committee room, sadly. Rutherford left Manchester and went to the Cavendish afterwards. Do you think it is possible to stop that drift, because money also follows great scientists as well as institutions? The University of Manchester is a world-class university, but do you think it is possible to stop that drift and get University of Manchester, and some of the other great northern universities, up the pecking order to be in the same region as Imperial, Oxford and Cambridge?

    Professor Jones: Yes, there is scope to do that. You mentioned Rutherford. I used to teach in the Cavendish myself, so I have made the reverse journey.

    The point that is important, if we talk about excellence, is that people loosely say Cambridge is excellent. Cambridge is not excellent. Cambridge is a place that has lots of excellent people. The thing that defines excellence is people, and people will respond to facilities. If we create excellent facilities, we create an excellent environment, then excellent people from all over the world will want to come to those places.

    It is possible to be too deterministic about this. One can create the environment that will attract excellent people from all over the world. That is what we ought to aim to do if we want to spread out scientific excellence across the country.

    Graham Stringer: To simplify: the answer is for investment in absolutely world-class kit in universities away from the golden triangle?

    Professor Jones: It is world-class kit, but it is also the wider intellectual climate: excellent colleagues. People like to go where there are excellent colleagues, excellent students. That is the package that you need.

    On “levelling-up” and R&D spending

    Chair: As you say, clearly it would not be a step towards achieving the status of a science superpower if we were reducing core budget, so the opportunity to have a greater quantity of regional investment comes from an increase in the budget. Is it fair to infer logically from that that, of the increase, you would expect a higher proportion to be regionally distributed than the current snapshot of the budget?

    Professor Jones: Yes, absolutely. If we take the Government at their word about saying that there are going to be genuine increases in R&D, this does give us a unique opportunity because we have had quite flat research budgets for a couple of decades. Up to now we have always been faced with that problem: do you really want to take money away from the excellence of Oxford and Cambridge to rebalance? That is a difficult issue because, as I said in my opening remarks, Cambridge is a fantastic asset to the UK’s economy. But if we do have this opportunity to see rising budgets, if we are going from £14.9 billion to £22 billion—that is £7 billion of rise that has been pencilled in—it would be very disappointing if a reasonable fraction of that was not ring-fenced to start to address these imbalances, specifically with the aim of boosting the economy of those places where productivity is too low and needs to be raised.

    I think that tying it very directly to the Government’s goals of levelling up, increasing the productivity of economically lagging regions as well as their other very important goals of net zero, would be entirely reasonable.

    Chair: That is literally and specifically what you are describing, is it not—levelling up, in the sense that you have said you do not want to take down the budgets of existing institutions, you want to increase the others? That is levelling up.

    Professor Jones: Indeed.

    Bleach and the industrial revolution in textiles

    Sunshine is the best disinfectant, they say – but if you live in Lancashire, you might want to have some bleach as a backup. Sunshine works to bleach clothes and hair too – and before the invention of the family of chlorine-based chemicals that are commonly known as bleach, the Lancashire textile industry – like all other textile industries around the world – depended on sunshine to whiten the naturally beige colour of fabrics like cotton and linen. It’s this bright whiteness that has always been prized in fine fabrics, and is a necessary precondition for creating bright colours and patterns through dyeing.

    As the introduction of new machinery to automate spinning and weaving – John Kay’s flying shuttle, the water frame, and Crompton’s spinning mule – hugely increased the potential output of the textile industry, the need to rely on Lancashire’s feeble sunshine to bleach fabrics in complex processes that could take weeks was a significant blockage. The development of chemical bleaches was a response to this; a significant but perhaps under-appreciated ingredient of the industrial revolution, and an episode that demonstrates the way scientific and industrial developments went hand-in-hand at the beginning of the modern chemical industry.

    It’s not obvious now, when one looks at the clothes in 17th and 18th century portraits, with their white dresses, formal shirts and collars, that the brilliant white fabrics that were the marker of their rich and aristocratic subjects were the result of a complex and expensive set of processes. Bleaching at the time involved a sequence of repeated steepings in water, boiling in lye, soaping, soaking in buttermilk (and towards the end of this period, dilute sulphuric acid) – together with extensive “grassing” – spreading the fabrics out in the sun in “bleachfields” for periods of weeks. These expensive and time-consuming processes were a huge barrier to the expansion of the textile industry, and it was in response to this barrier that chemical bleaches were developed in the late 18th century.

    The story begins with the important French chemist Claude-Louis Berthollet, who in 1785 discovered and characterised the gas we now know as chlorine, synthesising it through the oxidation of hydrochloric acid by manganese dioxide. His discovery of what he called “dephlogisticated muriatic acid” [1] was published in France, but news of it quickly reached England, not least through direct communication by Berthollet to the Royal Society in London. Only a year later, the industrialist Matthew Boulton and his engineer partner James Watt were visiting Paris; they met Berthollet, and were able to see his initial experiments showing the effect chlorine had on colours, either using the gas directly or in solution in water. The potential of the new material to transform the textiles industry was obvious both to Berthollet and his visitors from England.
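    For readers who want the chemistry spelled out: in modern notation – which Berthollet himself, of course, could not have used – that preparation is conventionally written as

    \[ \mathrm{MnO_2 + 4\,HCl \;\longrightarrow\; MnCl_2 + Cl_2 + 2\,H_2O} \]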

    James Watt had a particular reason to be interested in the process – his father-in-law, James McGrigor, owned a bleaching works in Glasgow. Watt had soon developed an improvement to the process for making chlorine; instead of using hydrochloric acid, he used sulphuric acid and salt, exploiting the new availability and relatively low cost of sulphuric acid since the development of the lead chamber process in 1746 by John Roebuck and Samuel Garbett. In 1787 he sent a bottle of his newly developed bleach to his father-in-law, and arranged for a ton of manganese dioxide [2] to be sent from Bristol to Glasgow to begin large scale experiments. Work was needed to develop a practical regime for bleaching different fabrics, to find methods to assay the bleaching power of the solutions, and to develop the apparatus of this early chemical engineering – what to make the vessels out of, how to handle the fabric. By the end of the year, with Watt’s help, McGrigor had successfully scaled up the process to bleach 1500 yards of linen.

    Meanwhile, two Frenchmen – Antoine Bourboulon de Boneuil and Matthew Vallet – had arrived in Lancashire from Paris, where they had developed a proprietary bleaching solution – “Lessive de Javelle” – which built on Berthollet’s work (without his involvement or approval). This probably used the method of dissolving the chlorine in a solution of sodium hydroxide, which absorbs more of the gas than pure water does. This produces a solution of sodium hypochlorite, like the everyday “thin bleach” of today’s supermarket shelves. In 1788 Bourboulon petitioned Parliament to grant them an exclusive 28-year licence for the process (a longer period than a regular patent). This caused some controversy and was strongly opposed by the Lancashire bleachers, and it placed James Watt in an awkward position. Naturally he opposed the proposal, but didn’t want to do this too publicly, as his own, very broad, patent (with Matthew Boulton) for the steam engine had been extended by Act of Parliament in 1775, leading to lengthy litigation. Nonetheless, after the intervention of Berthollet himself and the growing awareness of the new science of chemical bleaching in the industrial community, Bourboulon only succeeded in obtaining patents for relatively restricted aspects of his process, which were easily evaded by other operations.
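    In today’s terms, absorbing chlorine in sodium hydroxide solution gives the hypochlorite through the reaction usually summarised as

    \[ \mathrm{Cl_2 + 2\,NaOH \;\longrightarrow\; NaOCl + NaCl + H_2O} \]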

    Claude-Louis Berthollet’s position in this was important, as his priority in discovering the basic principles of chlorine bleaching was universally accepted. But Berthollet was an exponent of the principles of what would now be called “open science” and consciously repudiated any opportunities to profit from his inventions – as he wrote to James Watt, “I am very conscious of the interest that you take in a project which could be advantageous to me; but to return to my character, I have entirely renounced involvement in commercial enterprises. When one loves science, one has little need of fortune, and it is so easy to expose one’s happiness by compromising one’s peace of mind and embarrassing oneself”. Watt was clearly frustrated by Berthollet’s tendency to publish the results of his experiments, which often included rediscovering the improvements that Watt himself had made.

    But by this stage, any secrets were out, and other Manchester industrialists, together with a new breed of what might be called consulting chemists, who kept up with the latest scientific developments in France and England, were experimenting and developing the processes further. Their goals included driving down the cost, increasing the scale of operations, and particularly improving their reliability – it was all too easy to ruin a batch of cloth by exposing it too long or using too strong a bleaching agent, or to poison the workmen with a release of chlorine gas. In fact, one shudders to think about the health and safety record and environmental impact of these early developments. Even by 1795, it still wasn’t always clear that the new methods were cheaper than the old ones, particularly in the case of linen, which was significantly more difficult to bleach than cotton. Despite the early introduction of “Lessive de Javelle”, the stability of bleaching fluids was a problem, and most bleachers preferred to brew up their own as needed, guided by lots of practical experience and chemical knowledge.

    Bleaching probably could never be made entirely routine, but the next big breakthrough was to create a stable bleaching powder which could be traded, stored and transported, and could be incorporated in a standardised process. Some success had been had by absorbing chlorine in lime. The definitive process to make “bleaching powder” by absorbing chlorine gas in damp slaked lime (calcium hydroxide), to produce a mixture of calcium hypochlorite and calcium chloride, was probably developed by the Scottish chemist Charles Macintosh (more famous as the inventor of the eponymous raincoat). The benefits of this discovery, though, went to Macintosh’s not wholly trustworthy business partner, Charles Tennant, who patented the material in 1799.
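    In simplified modern notation (the real product is a more complicated mixture of mixed calcium salts), the bleaching powder reaction can be written as

    \[ \mathrm{2\,Cl_2 + 2\,Ca(OH)_2 \;\longrightarrow\; Ca(OCl)_2 + CaCl_2 + 2\,H_2O} \]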

    What are the lessons we can learn from this episode? It underlines the importance of industrial chemistry, an aspect of the industrial revolution that perhaps is underplayed. It’s a story in which frontier science was being developed at the same time as its industrial applications, with industrialists understanding the importance of being linked in with international networks of scientists, and organisations like the Manchester Literary and Philosophical Society operating as important institutions for diffusing the latest scientific results. It exposes the tensions we still see between open science and the protection of intellectual property, and the questions of who materially benefits from scientific advances.

    Into the nineteenth century, the textile industry continued to be a major driver of industrial chemistry – the late 18th century had already seen the introduction of the Leblanc process for making soda ash, and the nineteenth century saw the massive impact of artificial dyes. These developments influence the industrial geography of England’s northwest to this day.

    [1] When Berthollet discovered chlorine, it was in the heyday of the phlogiston theory, so, not appreciating that what he’d discovered was a new gaseous element, he called it “dephlogisticated muriatic acid” (muriatic acid being an old name for hydrochloric acid). As Lavoisier’s oxygen theory became more widely accepted, the gas became known as “oxymuriatic acid”. It was only in 1810 that Humphry Davy showed that chlorine contains no oxygen, and is in fact an element in its own right. Phlogiston has a bad reputation as a dubious pre-scientific relic, but it was a rational way of beginning to think about oxidation and reduction, and the nature of heat, giving a helpful guide to experiments – including the ones that eventually showed that the concept was unsustainable.

    [2] It’s interesting to ask why there was an existing trade in manganese dioxide. This mineral had been used since prehistory as a black pigment, and is unusual as a strong oxidising agent that is widely found in nature. In Derbyshire it occurs as an impure form known to miners as “wad”; when mixed with linseed oil (as you would do to make a paint) it occasionally has the alarming property of spontaneously combusting. This was recorded in a 1783 communication to the Royal Society by the renowned potter Josiah Wedgwood, who ascribed the discovery to a Derby painter called Mr Bassano, and reported seeing experiments showing this property at the house of the President of the Royal Society, Sir Joseph Banks. Spontaneous combustion isn’t a great asset for a paint, but at lower loadings of manganese dioxide a less dramatic acceleration of the oxidation of linseed oil is useful in making varnish harden more quickly, and it was apparently this property that led to its widespread use in paints and varnishes, particularly for ships in the great expansion of the British Navy at the time. More pure deposits of manganese dioxide were found in Devon, and subsequently in North Wales, as the bleach industry increased demand for the mineral further. The material gained even more importance following Robert Mushet’s work on iron-manganese alloys – it was the incorporation of small amounts of manganese that made the Bessemer process for the first truly mass produced steel viable.

    [3] Sources: this account relies heavily on “Science and Technology in the Industrial Revolution”, by A. E. Musson and E. Robinson. For wad, “Derbyshire Wad and Umber”, by T.D. Ford, Mining History 14 p39.

    Edited 23/8/21 to make clear that Bourboulon’s petition to Parliament was for a longer period of exclusivity than a standard patent. My thanks to Anton Howes for pointing this out.

    Reflections on the UK’s new Innovation Strategy

    The UK published an Innovation Strategy last week; rather than a complete summary and review, here are a few of my reflections on it. It’s a valuable and helpful document, though I don’t think it’s really a strategy yet, if we expect a strategy to give a clear sense of a destination, a set of plans to get there and some metrics by which to measure progress. Instead, it’s another milestone in a gradual reshaping of the UK’s science landscape, following last year’s R&D Roadmap, and the replacement of the previous administration’s Industrial Strategy – led by the Department of Business, Energy and Industrial Strategy – by a Treasury driven “Plan for Growth”.

    The rhetoric of the current government places high hopes on science as a big part of the UK’s future – a recent newspaper article by the Prime Minister promised that “We want the UK to regain its status as a science superpower, and in so doing to level up.” There is a pride in the achievements of UK science, not least in the recent Oxford Covid vaccine. And yet there is a sense of potential not fully delivered. Part of this is down to investment – or the lack of it: as the PM correctly noted: “this country has failed for decades to invest enough in scientific research, and that strategic error has been compounded by the decisions of the UK private sector.”

    Last week’s strategy focused, not on fundamental science, but on innovation. As the old saying goes, “Research is the process of turning money into ideas, innovation is turning ideas into money” – and, it should be added, other desirable outcomes for the nation and society – the necessary transition to zero carbon energy, better health outcomes, and the security of the realm in a world that feels less predictable. But the strategy acknowledges that this process hasn’t been working – we’ve seen a decline in productivity growth that’s unprecedented in living memory.

    This isn’t just a UK problem – the document refers to an apparent international slowing of innovation in pharmaceuticals and semiconductors. But the problem is worse in the UK than in comparator nations, and the strategy doesn’t shy away from connecting that with the UK’s low R&D intensity, both public and private: “One key marker of this in the UK is our decline in the rate of growth in R&D spending – both public and private. In the UK, R&D investment declined steadily between 1990 and 2004, from 1.7% to 1.5% of GDP, then gradually returned to be 1.7% in 2018. This has been constantly below the 2.2% OECD average over that period.”

    One major aspiration that the government is consistent about is the target to increase total UK investment in R&D (public and private) to reach 2.4% of GDP by 2027, from its current value of about 1.7%. As part of this there is a commitment to increase public spending from £14.9 bn this year to £22 bn – by a date that’s not specified in the Innovation Strategy. An increase of this scale should prompt one to ask whether the institutional landscape where research is done is appropriate, and the document announces a new review of that landscape.

    Currently the UK’s public research infrastructure is dominated by universities to a degree that is unusual amongst comparator nations. I’m glad to see that the Innovation Strategy doesn’t indulge in what seems to be a widespread urge in other parts of government to denigrate the contribution of HE to the UK’s economy, noting that “in recent years, UK universities have become more effective at attracting investment and bringing ideas to market. Their performance is now, in many respects, competitive with the USA in terms of patents, spinouts, income from IP and proportion of industrial research.” But it is appropriate to ask whether other types of research institution, with different incentive structures and funding arrangements, might be needed in addition to – and to make the most of – the UK’s academic research base.

    But there are a couple of fundamentally different types of non-university research institutions. On the one hand, there are institutions devoted to pure science, where investigators have maximum freedom to pursue their own research agendas. Germany’s Max Planck Institutes offer one model, while the Howard Hughes Medical Institute’s Janelia Research Campus, in the USA, has some high profile admirers in UK policy circles. On the other hand, there are mission-oriented institutes devoted to applied research, like the Fraunhofer Institutes in Germany, the Industrial Technology Research Institute in Taiwan, and IMEC (the Interuniversity Microelectronics Centre) in Belgium. The UK has seen a certain amount of institutional evolution in the last decade already, with the establishment of the Turing Institute, the Crick Institute, the Henry Royce Institute, the Rosalind Franklin Institute, the network of Catapult Centres, to name a few. It’s certainly timely to look across the landscape as it is now to see the extent to which these institutions’ missions and the way they fit together in a wider system have crystallised, as well as to ask whether the system as a whole is delivering the outcomes we want as a society.

    There is one inescapable factor about our current institutional landscape that is seriously underplayed: it is a product of the wider political and economic landscape, and of the way that landscape has changed over the decades. For example, there’s a case study in the Innovation Strategy of Bell Laboratories in the USA. This was certainly a hothouse of innovation in its heyday, from the 1940s to the 1980s – but that reflected its unique position as a private sector laboratory sustained by the monopoly rents of its parent. That changed with the break-up of the Bell System in the 1980s, itself a function of the deregulatory turn in US politics at the time, and the institution is now a shadow of its former self. Likewise, it’s impossible to understand the drastic scaling back of government research laboratories in the UK in the 1990s without appreciating the dramatic policy shifts of governments in the 80s and 90s. A nation’s innovation landscape reflects wider trends in political economy, and that needs to be understood better and its implications made more explicit.

    Alongside the Innovation Strategy was published an “R&D People and Culture Strategy”. This contains lots of aspirations that few would disagree with, but not much in the way of concrete measures to fix things. To connect this with the previous discussion, I would have liked to see much more discussion of the connection between the institutional arrangements we have for research, the incentive structures those arrangements produce, and the culture that emerges. It is reasonable to complain that people don’t move as easily between industry and academia as they used to, but it needs to be recognised that this is because the two have drifted apart; with only a few exceptions, the short-term focus of industry – and the pressure on academics to publish – makes this mobility more difficult. From this perspective, one question we should ask about our institutional landscape is whether it is the right one to allow the people in the system to flourish and fulfil their potential.

    We shouldn’t just ask in what kind of institutions research is done, but also where those institutions are situated geographically. The document contains a section on “Levelling Up and innovation across the UK”, reasserting as a goal that “we need to ensure more places in the UK host world-leading and globally connected innovation clusters, creating more jobs, growth and productivity in those areas.” In the context of the commitment to increase the R&D intensity of the economy, “we are reviewing how we can increase the proportion of total R&D investment, public and private, outside London, the South East, and East of England.”

    The big news here, though, is that the promised “R&D and Place Strategy” has been postponed and rolled into the forthcoming “Levelling Up” White Paper, expected in the autumn. If this does take the opportunity of considering in a holistic way how investments in transport, R&D, skills and business support can be brought together to bring about material changes in the productivity of cities and regions that currently underperform, that is not a bad thing. I was a member of the advisory group for the R&D and Place strategy, so I won’t dwell further on this issue here, beyond saying that I recognise many of the issues and policy proposals which that body has discussed, so I await the final “Levelling Up” White Paper with interest.

    A strategy does imply some prioritisation, and there are a number of different ways in which one might define priorities. The Coalition Government defined 8 Great Technologies; the 2017 Industrial Strategy was built around “Grand Challenges” and “Sector Deals” covering industrial sectors such as Automotive and Aerospace. The current Innovation Strategy introduces seven “technology families” and a new “Innovation Missions Programme”.

    It’s interesting to compare the new “seven technology families” with the old “eight great technologies”. For some, the carry-over is fairly direct, albeit with some wording changes reflecting shifting fashions – robotics and autonomous systems becomes robotics and smart machines, energy and its storage becomes energy and environment technologies, advanced materials and nanotechnology becomes advanced materials and manufacturing, synthetic biology becomes engineering biology. At least two of the original 8 Great Technologies always looked more like industry sectors than technologies – satellites and commercial applications of space, and agri-science. Big data and energy-efficient computing has evolved into AI, digital and advanced computing, reflecting a genuine change in the technology landscape. Regenerative medicine looks like it’s out of favour, replaced in the biomedical area by bioinformatics and genomics. Quantum technology was appended to the “8 great” a year or two later, and this is now expanded to electronics, photonics and quantum.

    Interesting though the shifts in emphasis may be, the key issue is the degree to which these high level priorities are translated into different outcomes in institutions and funding programmes. How, for example, are these priority technology families reflected in advisory structures at the level of UKRI and the research councils? And, most uncomfortable of all, a decision to emphasise some technology families must imply, if it has any real force, a corresponding decision to de-emphasise others.

    One suspects that organisation through industrial sectors is out of favour in the new world where HM Treasury is in the driving seat; for HMT a focus on sectors is associated with incumbency bias, with newer fast-growing industries systematically under-represented, and producer capture of relevant government departments and agencies, leading to a degree of policy attention that reflects a sector’s lobbying effectiveness rather than its importance to the economy.

    Despite this colder new environment, the ever-opportunistic biomedical establishment has managed to rebrand its sector deal as a “Life Sciences Vision”. The sector lens remains important, though, because industrial sectors do face their own individual issues, all the more so at a time of rapid change. Successfully negotiating the transition to electric vehicles represents an existential challenge to the automotive sector, while for the persistently undervalued chemicals sector, withdrawal from the EU regulatory framework – REACH – threatens substantial extra costs and frictions, and the transition to net zero presents both a challenge for this energy-intensive industry and a huge set of new potential markets as the supply chain for new clean-tech industries like batteries is developed.

    One very salutary clarification has emerged as a side-effect of the pandemic. The vaccination programme can be held up as a successful exemplar of an “innovation mission”. This emphasises that a “mission” shouldn’t just be a vague aspiration, but a specific engineering project with a product at the end of it – with a matching social infrastructure developed to ensure that the technology is implemented to deliver the desired societal outcome. Thought of this way, a mission can’t just be about discovery science – it may need the development of new manufacturing capacity, new ICT systems, repurposing of existing infrastructures. Above all, a mission needs to be executed with speed, decisiveness, and a willingness to spend money in more than homeopathic quantities, characteristics that aren’t strongly associated with recent UK administrations.

    What further innovation missions can we expect? It isn’t characterised in these terms, but the project to build a prototype fusion power plant – the “Spherical Tokamak for Energy Production” – could be thought of as another one. By no means guaranteed to succeed, it would be a significant development if it did work, and in the meantime it will probably support the spinning out of a number of potentially important technologies for other applications, such as new materials for extreme environments, and further developments in robotics.

    Who will define future “innovation missions”? The answer seems to be the new National Science and Technology Council, to be chaired by the Prime Minister and run by the government’s Chief Scientific Advisor, Sir Patrick Vallance, given an expanded role and an extra job title – National Technology Adviser. In the words of the Prime Minister, “It will be the job of the new National Science and Technology Council to signal the challenges – perhaps even to specify the breakthroughs required – and we hope that science, both public and commercial, will respond.”

    But there is a lot still to fill in about the mechanisms by which this will work. How will the NSTC make its decisions – and who will be informing those discussions? And how will those decisions be transmitted to the wider innovation ecosystem – government departments and their delivery agencies like UKRI, with its component research councils and the innovation agency InnovateUK? There is a new system emerging here, but the way it will be wired is as yet far from clear.

    Fighting Climate Change with Food Science

    The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat generates substantial greenhouse gas emissions, and a disproportionate amount of these comes from eating the meat of ruminants like cattle and sheep.

    According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the US total. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20% by 2050, as part of the trajectory towards net zero greenhouse gas emissions, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, a complete global adoption of completely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
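    As a rough check on how these figures hang together, here is a back-of-the-envelope calculation; the population and total-emissions numbers are my own round-figure assumptions, not values taken from the study:

# Back-of-the-envelope check of the US food-emissions figures quoted above.
PER_CAPITA_FOOD = 5.0     # kg CO2e per person per day, from the study cited above
US_POPULATION = 330e6     # assumption: roughly the 2020 US population
RED_MEAT_SHARE = 0.47     # share of food-system emissions from red meat, from the study
US_TOTAL_MT = 6500        # assumption: total US emissions, Mt CO2e per year (round figure)

food_mt = PER_CAPITA_FOOD * US_POPULATION * 365 / 1e9   # convert kg/year to Mt; about 600 Mt
red_meat_mt = RED_MEAT_SHARE * food_mt                  # about 280 Mt

print(f"US food system: ~{food_mt:.0f} Mt CO2e/yr, of which red meat ~{red_meat_mt:.0f} Mt")
print(f"a 200 Mt saving would be {200 / US_TOTAL_MT:.1%} of the assumed US total")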

    The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

    I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the currently relatively low-tech meat substitutes rather than the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

    Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat does provide an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, whether it eats meat at all), when families eat or don’t eat particular meats – all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

    So how is it realistic to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – this was a telling, but not entirely successful attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

    So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it was just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.
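    Just to illustrate how undemanding the gross composition is, here is a sketch of the sums for a 100 g portion; the 80% protein content assumed for the isolates is a typical round figure of my own, not taken from any particular product:

# Ingredients for 100 g of a blend matching lean meat's gross composition
# (75% water, 20% protein, 3% fat). Illustrative arithmetic only.
TARGET_PROTEIN_G = 20.0
TARGET_FAT_G = 3.0
ISOLATE_PROTEIN_FRACTION = 0.80   # assumed for both wheat and pea protein isolates

wheat_g = (TARGET_PROTEIN_G / 2) / ISOLATE_PROTEIN_FRACTION  # half the protein from wheat
pea_g = (TARGET_PROTEIN_G / 2) / ISOLATE_PROTEIN_FRACTION    # half from pea, to balance amino acids
oil_g = TARGET_FAT_G                                          # vegetable oil is essentially all fat
water_g = 100.0 - wheat_g - pea_g - oil_g                     # make up the balance with water

print(f"wheat protein isolate {wheat_g:.1f} g, pea protein isolate {pea_g:.1f} g")
print(f"vegetable oil {oil_g:.1f} g, water {water_g:.1f} g")
# water comes out a little below 75 g because the isolates carry some non-protein mass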

    But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

    Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

    High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates from the flour and sugar used in barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics develop in the animal, reflecting the life it leads before slaughter, and are shaped further through butchering, storage and cooking.

    In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need to provide the right growth medium for the cells, to provide an energy source, other nutrients, and the growth factors that simulate the chemical communications between cells in whole organisms.

    In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal-free product. Serum-free growth media have been developed, but they too are expensive; optimising them, scaling them up and reducing their cost represent key barriers to be overcome to make “cultured meat” viable.

    The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than to make a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce minced meat for burgers than whole cuts.

    I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is influenced strongly by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way from ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat than a vaguely meaty mush.

    I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant derived materials – proteins, fats, gels with different viscoelastic properties to simulate connective tissue, and appropriate liquid matrices, devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

    At the moment, attempting something like this, we have start-ups like Impossible Foods and Beyond Meat, with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding and distribution and their deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering needed to make soft matter systems with complex heterogeneous structures at scale, often by non-equilibrium self-assembly processes.

    Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling. The strong recent rise in veganism and vegetarianism creates a large and growing market. But it does need public investment, because I don’t think intellectual property in this area will be very easy to defend. For this reason, large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

    [1]. See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.