Fighting Climate Change with Food Science

The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat generates substantial greenhouse gas emissions, and a disproportionate share of these emissions comes from eating the meat of ruminants like cattle and sheep.

According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the total. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20% as part of the trajectory towards net zero greenhouse gas emissions by 2050, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, global adoption of completely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
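
For a rough sense of where the 200 million tonne figure sits, here is a back-of-the-envelope check; the population figure is my own assumption, not taken from the study.

```python
# Back-of-the-envelope check of the scale of US food-system emissions.
US_POPULATION = 330e6    # people - approximate, my assumption
KG_PER_PERSON_DAY = 5.0  # kg CO2-equivalent per person per day, from the study

total_mt = US_POPULATION * KG_PER_PERSON_DAY * 365 / 1e9  # Mt CO2e per year
red_meat_mt = 0.47 * total_mt                             # 47% from red meat

print(f"US food-system emissions: ~{total_mt:.0f} Mt CO2e/year")    # ~600
print(f"of which red meat:        ~{red_meat_mt:.0f} Mt CO2e/year") # ~280
# A ~200 Mt saving from halving consumption of all animal products
# is consistent with these magnitudes.
```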

The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the current, relatively low-tech meat substitutes than from the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat provides an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, whether it eats meat at all), and when families eat or don’t eat particular meats – all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

So is it realistic to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – a telling, but not entirely successful, attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it were just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water, would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well-developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.
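
As a crude illustration of how undemanding the purely compositional problem is, here is a sketch of such a formulation. The ingredient compositions are rough illustrative guesses on my part, not a validated recipe:

```python
# Matching lean meat's bulk composition (75% water, 20% protein, 3% fat)
# with plant ingredients. The ingredient compositions below are rough
# illustrative assumptions, not measured values.
ingredients = {  # mass fractions of (protein, fat, water)
    "wheat protein isolate": (0.80, 0.05, 0.08),
    "pea protein isolate":   (0.80, 0.07, 0.06),
    "vegetable oil":         (0.00, 1.00, 0.00),
    "water":                 (0.00, 0.00, 1.00),
}
recipe = {  # grams of each ingredient per 100 g of product, chosen by hand
    "wheat protein isolate": 12.5,
    "pea protein isolate":   12.5,
    "vegetable oil":          1.5,
    "water":                 73.5,
}
for i, label in enumerate(("protein", "fat", "water")):
    grams = sum(recipe[name] * comp[i] for name, comp in ingredients.items())
    print(f"{label}: {grams:.1f} g per 100 g")
# protein: 20.0 g, fat: 3.0 g, water: 75.3 g - close to lean meat
```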

But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates in the flour and sugar used for barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics develop in the animal, reflecting the life it leads before slaughter, and are modified further by butchering, storage and cooking.

In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need to provide the right growth medium for the cells – an energy source, other nutrients, and the growth factors that simulate the chemical communications between cells in whole organisms.

In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal-free product. Serum-free growth media have been developed, but they too are expensive; optimising them, scaling them up and reducing their cost represent key barriers to be overcome to make “cultured meat” viable.

The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce minced meat for burgers rather than whole cuts.

I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to, and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is influenced strongly by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way from ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat than a vaguely meaty mush.

I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant-derived materials – proteins, fats, gels with different viscoelastic properties to mimic connective tissue, and appropriate liquid matrices – devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

At the moment, we have start-ups attempting something like this – Impossible Foods and Beyond Meat – with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding and distribution, and deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering needed to make soft matter systems with complex heterogeneous structures at scale, often by non-equilibrium self-assembly processes.

Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling. The strong recent rise in veganism and vegetarianism creates a large and growing market. But it does need public investment, because I don’t think intellectual property in this area will be very easy to defend. For this reason, large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

[1] See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.

The Prime Minister’s office asserts control over UK science policy

The Daily Telegraph published a significant article from the Prime Minister about science and technology this morning, to accompany a government announcement “Prime Minister sets out plans to realise and maximise the opportunities of scientific and technological breakthroughs”.

Here are a few key points I’ve taken away from these pieces.

1. There’s a reassertion in the PM’s article of the ambition to raise government spending on science from its current value of £14.9 billion to a new target of £22bn (though no date is attached to this target), together with a recognition that this needs to lever in substantially more private sector R&D spending to meet the overall goal of total R&D spending – public and private – reaching 2.4% of GDP. The £22bn spending goal was promised in the March 2020 budget, but had since disappeared from HMT documents.

2. But there’s a strong signal that this spending will be directed to support state priorities: “It is also the moment to abandon any notion that Government can be strategically indifferent”.

3. A new committee, chaired by the Prime Minister, will be set up – the National Science and Technology Council. This will establish those state priorities: “signalling the challenges – perhaps even to specify the breakthroughs required”. This could be something like the ministerial committee recommended in the Nurse Review, which was proposed to coordinate the government’s response to science and technology challenges right across government.

4. There is an expanded role for the Government Chief Scientific Advisor, Sir Patrick Vallance, as National Technology Advisor, in effect leading the National Science and Technology Council.

5. A new Office for Science and Technology Strategy is established to support the NSTC. This is based in the Cabinet Office – emphasising its whole-of-government remit. Presumably this supersedes, and/or incorporates, the existing Government Office for Science, which is currently based in BEIS.

6. There is a welcome recognition of some of the current weaknesses of the UK’s science and innovation system – the article talks about “restoring Britain’s status as a science superpower” (my emphasis), after decades of failure to invest, both by the state and by British industry: “this country has failed for decades to invest enough in scientific research, and that strategic error has been compounded by the decisions of the UK private sector”. The article highlights the UK’s loss of capacity in areas like vaccine manufacture and telecoms.

7. The role of the new funding agency ARIA is defined as looking for “Unknown unknowns”, while NSTC sets out priorities supporting missions like net zero, cyber threats and medical issues like dementia. There is no mention of the UK’s current main funder of upstream research – UKRI – but presumably its role is to direct the more upstream science base to support the missions as defined by NSTC.

8. The role of science and technology in creating economic growth remains important, with an emphasis on scientifically led start-ups and scale-ups, and a reference to “Levelling up” by spreading technology led economic growth outside the Golden Triangle to the whole country.

As always, the effectiveness with which a reorganised structure delivers meaningful results will depend on funding decisions made in the Autumn’s spending review – and thus the degree to which HM Treasury is convinced by the arguments of the NSTC, or compelled by the PM to accept them.

Rubber City Rebels [1]

I’m currently teaching a course on the theory of what makes rubber elastic to Materials Science students at Manchester, and this has reminded me of two things. The first is that this is a great topic for introducing a number of the most central concepts of polymer physics – the importance of configurational entropy, the universality of the large scale statistical properties of macromolecules, the role of entanglements. The second is that the city of Manchester has played a recurring role in the history of the development of this bit of science, which, as always, interacts with technological development in interesting and complex ways.

One of the earliest quantitative studies of the mechanical properties of rubber was published by that great Manchester physicist, James Joule, in 1859. As part of his investigations of the relationship between heat and mechanical work, he measured the temperature change that occurs when rubber is stretched. As anyone can find out for themselves with a simple experiment, rubber is an unusual material in this respect. If you take an elastic band (or, better, a rubber balloon folded into a narrow strip), hold it close to your upper lip, suddenly stretch it and then put it to your lip, you can feel that it significantly heats up – and then, if you release the tension again, it cools down again. This is a crucial observation for understanding how it is that the elasticity of rubber arises from the reduction in entropy that occurs when a randomly coiled polymer strand is stretched.
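
In modern language, the link between this heating effect and entropy is a standard thermodynamic argument (not, of course, how Joule himself would have phrased it). Writing the free energy of a strip of rubber of length $L$ under tension $f$ as $F = U - TS$, a Maxwell relation connects the entropy change on stretching to the temperature dependence of the tension:

$$ \left(\frac{\partial S}{\partial L}\right)_T = -\left(\frac{\partial f}{\partial T}\right)_L, \qquad \left(\frac{\partial T}{\partial L}\right)_S = \frac{T}{C_L}\left(\frac{\partial f}{\partial T}\right)_L . $$

Gough’s observation that rubber under tension contracts on heating means $(\partial f/\partial T)_L > 0$: stretching reduces the entropy, and a rapid, near-adiabatic stretch must therefore warm the material, just as the lip test shows.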

But this wasn’t the first observation of the effect – Joule himself referred to an 1805 article by John Gough, in the Memoirs of the Manchester Literary and Philosophical Society, drawing attention to this property of natural rubber, and the related property that a strand of the material held under tension would contract on being heated. John Gough himself was a fascinating figure – a Quaker from Kendal, a town on the edge of England’s Lake District, blind, as a result of a childhood illness, he made a living as a mathematics tutor, and was a friend of John Dalton, the Manchester based pioneer of the atomic hypothesis. All of this is a reminder of the intellectual vitality of that time in the fast industrialising provinces, truly an “age of improvement”, while the universities of Oxford and Cambridge had slipped into the torpor of qualifying the dim younger offspring of the upper classes to become Anglican clergymen.

Joule’s experiments were remarkably precise, but there was another important difference from Gough’s pioneering observation. Joule was able to use a much improved version of the raw natural rubber (or caoutchouc) that Gough used; the recently invented process of vulcanisation produced a much stronger, more stable material than the rather gooey natural precursor. The original discovery of the process of vulcanisation was made by the self-taught American inventor Charles Goodyear, who found in 1839 that rubber could be transformed by being heated with sulphur. It was nearly another century before the chemical basis of this process was understood – the sulphur creates chemical bridges between the long polymer molecules, forming a covalently bound network. Goodyear’s process was rediscovered – or possibly reverse engineered – by the industrialist Thomas Hancock, who obtained the English patents for it in 1843 [2].

Appropriately for Manchester, the market that Hancock was serving was for improved raincoats. The Scottish industrialist Charles Macintosh had created his eponymous garment from a waterproof fabric consisting of a sandwich of rubber between two textile sheets; Hancock, meanwhile, had developed a number of machines and technologies for processing natural rubber, so it was natural for the two to enter into partnership, with their Manchester factory making waterproof fabric. Their firm prospered; Goodyear, though, failed to make money from his invention and died in poverty (the Goodyear tire company was named after him, but only some years after his death).

At that time, rubber was a product of the Amazonian rain forest, harvested from wild trees by indigenous people. In a well known story of colonial adventurism, 70,000 seeds of the rubber tree were smuggled out of Brazil by the explorer Henry Wickham, successfully cultivated at Kew Gardens, with the plants exported to the British colonies of Malaya and Ceylon to form the basis of a new plantation rubber industry. This expansion and industrialisation of the cultivation of rubber came at an opportune time – the invention of the pneumatic tyre and the development of the automobile industry led to a huge new demand for rubber around the turn of the century, which the new plantations were in a position to meet.

Wild rubber was also being harvested to meet this demand in the Belgian Congo, involving an atrocious level of violent exploitation of the indigenous population by the colonisers. But most of the rubber being produced to meet the new demand came from the British Empire plantations; this cultivation may not have been accompanied by the atrocities committed in the Congo, but the competitive prices at which plantation rubber could be produced reflected not just the capital invested and the high productivity achieved, but also the barely-subsistence wages paid to the workforce, imported from India and China.

Back in England, in 1892 the Birmingham-based chemist William Tilden had demonstrated that rubber could be synthesised from turpentine [3]. But this invention attracted little practical interest in England. And why would it, given that the natural product is of a very high quality, and the British Empire had successfully secured ample supplies through its colonial plantations? The process was rediscovered by the Russian chemist Kondakov in 1901, and taken up by the German chemical company Bayer in time for the synthetic product to play a role in the First World War, when German access to plantation rubber was blocked by the Allies. At this time the quality of the synthetic product was much worse than that of natural rubber; nonetheless German efforts to improve synthetic rubber continued in the 1920’s and 30’s, with important consequences in the Second World War.

It’s sobering [4] to realise that by 1919, rubber constituted a global industry with an estimated value of £250 million (perhaps £12 billion in today’s money), on the cusp of a further massive expansion driven by the mass adoption of the automobile – and yet scientists were completely ignorant, not just of the molecular origins of rubber’s elasticity, but even of the very nature of its constituent molecules. It was the German chemist Hermann Staudinger who, in 1920, suggested that rubber was composed of very long, linear molecules – polymers. Obvious though this may seem now, it was a controversial suggestion at the time, creating bitter disputes in the community of German chemists, disputes that gained a political tinge with the rise of the Nazi regime. Staudinger remained in Germany throughout the Second World War, despite being regarded as deeply ideologically suspect.

Staudinger was right about rubber being made up of long-chain molecules, but he was wrong about the form those molecules would take, believing that they would naturally adopt the form of rigid rods. The Austrian scientist Herman Mark, who was working for the German chemical combine IG Farben on synthetic rubber and other early polymers, realised that these long molecules would be very flexible and take up a random coil conformation. Mark’s father was Jewish, so he left IG Farben, first for Austria, and then after the Anschluss he escaped to Canada. At the University of Vienna in the 1930’s, Mark developed, with Eugene Guth, the statistical theory that explains the elastic behaviour of rubber in terms of the entropy changes in the chains as they are stretched and unstretched. This, at last, provided the basic explanation for the effect Gough discovered more than a century before, and that Joule quantified – the rise of temperature that occurs when rubber is stretched.
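
The core of the Mark–Guth argument can be stated in a couple of lines (this is the standard modern notation, not their original one). For an ideal chain of $N$ links of length $b$, whose internal energy doesn’t depend on its extension, the number of configurations compatible with an end-to-end distance $r$ is Gaussian, so the entropy and the resulting retractive force are

$$ S(r) = S_0 - \frac{3 k_B r^2}{2 N b^2}, \qquad f = -T\frac{\partial S}{\partial r} = \frac{3 k_B T\, r}{N b^2} . $$

The chain acts as a spring whose stiffness is purely entropic and proportional to temperature – which is exactly why rubber heats up when stretched, and contracts when heated under tension.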

By the start of the Second World War, both Mark and Guth found themselves in the USA, where the study of rubber was suddenly to become very strategically important indeed. The entry of Japan into the war and the fall of British Malaya cut off allied supplies of natural rubber, leading to a massive scale-up of synthetic rubber production. Somewhat ironically, this was based on IG Farben’s pre-war discovery of a version of synthetic rubber with greatly improved properties over previous versions – styrene-butadiene rubber (Buna-S). Standard Oil of New Jersey had an agreement with IG Farben to codevelop and market Buna-S in the USA.

The creation, almost from scratch, of a massive synthetic rubber industry in the USA was, of course, just one dimension of the USA’s World War 2 production miracle, but its scale is still astonishing [5]. The industry scaled up, under government direction, from producing 231 tons of general purpose rubber in 1941, to a monthly output of 70,000 tons in 1945. 51 new plants were built to produce the massive amounts of rubber needed for aircraft, tanks, trucks and warships. The programme was backed up by an intensive R&D effort, involving Mark, Guth, Paul Flory (later to win the Nobel prize for chemistry for his work on polymer science) and many others.

There was no significant synthetic rubber programme in the UK in the 1920’s and 1930’s. The British Empire was at its widest extent, providing ample supplies of natural rubber, as well as new potential markets for the material. That didn’t mean that there was no interest in improving scientific understanding of the material – on the contrary, the rubber producers in Malaya first sponsored research in Cambridge and Imperial, then collectively created a research laboratory in England, led by a young physical chemist from near Manchester, Geoffrey Gee. Gee, together with Leslie Treloar, applied the new understanding of polymer physics to understand and control the properties of natural rubber. After the war, realising that synthetic rubber was no longer just an inferior substitute, but a major threat to the markets for natural rubber, Gee introduced a programme of standardisation of rubber grades which helped the natural product maintain its market position.

Gee moved to the University of Manchester in 1953, and some time later Treloar moved to the neighbouring institution, UMIST, where he wrote the classic textbook on rubber elasticity. Manchester in the 1950’s and 60’s was a centre of research into rubber and networks of all kinds. Perhaps the most significant new developments were made in theory, by Sam Edwards, who joined Manchester’s physics department in 1958. Edwards was a brilliant theoretical physicist, who had learnt the techniques of quantum field theory with Julian Schwinger as a postdoc at Harvard. Edwards, whose interest in the fundamental problems of polymer physics had been kindled by Gee, realised that there are some deep analogies between the mathematics of polymer chains and the quantum mechanical description of the behaviour of electrons. He was able to rederive, in a much more rigorous way that demonstrated the universality of the results, some of the fundamental predictions of polymer physics that had been postulated by Flory, Mark, Guth and others, before going on to results of his own of great originality and importance.

Edwards’s biggest contribution to the theory of rubber elasticity was to introduce methods for dealing with the topological constraints that occur in dense, cross-linked systems of linear chains. Polymer chains are physical objects that can’t cross each other, something that the classical theories of Guth and Mark completely neglect. But it was by then obvious that the entanglements of polymer molecules could themselves behave as cross-links, even in the absence of the chemical cross-linking of vulcanisation (in fact, this is already suggested by Gough’s original 1805 observations, which were made on raw, unvulcanised rubber). Edwards introduced the idea of a “tube” to represent those topological constraints. Combined with the insight of the French physicist Pierre-Gilles de Gennes, this led not just to improved models for rubber elasticity taking account of entanglements, but to a complete molecular theory of the complex viscoelastic behaviour of polymer melts [6].
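
The most celebrated consequence of the tube picture is worth quoting, even in outline: a chain trapped in its tube can only move by sliding along it (“reptation”, in de Gennes’s coinage), which predicts that the longest relaxation time and the melt viscosity grow steeply with chain length,

$$ \tau_{\mathrm{rep}} \sim N^3, \qquad \eta \sim N^3, $$

close to the experimentally observed scaling of roughly $\eta \sim N^{3.4}$.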

Another leading physicist who emerged from this Manchester school was Julia Higgins, who learnt about polymers while she was a research fellow in the chemistry department in the 1960’s. Higgins subsequently worked in Paris, where in 1974 she carried out, with Cotton, des Cloizeaux, Benoit and others, what I think might be one of the most important single experiments in polymer science. Using a neutron source to study the scattering from a melt of polymer molecules, some of which were deuterium-labelled, they were able to show that even in the dense, entangled environment of a polymer melt, a single polymer chain still behaves as a classical random walk. This is in contrast with the behaviour of polymers in solution, where the chains are expanded by a so-called “excluded volume” interaction – arising from the fact that two segments of a single polymer chain can’t be in the same place at the same time. This result had been anticipated by Flory, in a rather intuitive and non-rigorous way, but it was Edwards who proved it rigorously.
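
The scaling contrast that this experiment pinned down can be summarised very compactly: in the melt a chain of $N$ segments is an ideal random walk, while in a good solvent the excluded volume interaction swells it,

$$ R_{\mathrm{melt}} \sim b N^{1/2}, \qquad R_{\mathrm{solution}} \sim b N^{\nu}, \quad \nu \approx 3/5 . $$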

[1] My apologies for the rather contrived title. No-one calls Manchester “Rubber City” – it is traditionally a city built on cotton. The true Rubber City is, of course, Akron, Ohio. Nor can anyone really describe any of the figures I talk about here as “rebels” (with the possible exception of Staudinger, who in his way is rather a heroic figure). But as everyone knows [7], Akron was a centre of musical creativity in the mid-to-late 1970s, producing bands such as Devo, Pere Ubu, and the Rubber City Rebels, whose eponymous song has remained a persistent earworm for me since the late 1970’s, and from which I’ve taken my title.
[2] And I do mean “English” here, rather than British or UK – it seems that Scotland had its own patent laws then, which, it turns out, influenced the subsequent development of the rubber boot industry.
[3] It’s usually stated that Tilden succeeded in polymerising isoprene, but a more recent reanalysis of the original sample of synthetic rubber has revealed that it is actually poly(2,3-dimethylbutadiene) (https://www.sciencedirect.com/science/article/pii/S0032386197000840)
[4] At least, it’s sobering for scientists like me, who tend to overestimate the importance of having a scientific understanding to make a technology work.
[5] See “U.S. Synthetic Rubber Program: National Historic Chemical Landmark” – https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/syntheticrubber.html
[6] de Gennes won the 1991 Nobel Prize for Physics for his work on polymers and liquid crystals. Many people, including me, strongly believed that this prize should have been shared with Sam Edwards. It has to be said that both men, who were friends and collaborators, dealt with this situation with great grace.
[7] “Everyone” here meaning those people (like me) born between 1958 and 1962 who spent too much of their teenage years listening to the John Peel show.

How does the UK rank as a knowledge economy?

Now that the UK has withdrawn from the European single market, it will need to rethink its current and potential future position in the world economy. Some helpful context is provided, perhaps, by statistics summarising the value added from knowledge and technology intensive industries, taken from the latest edition of the USA’s National Science Board Science and Engineering Indicators 2020.

The plot shows the changing share of world value added in a set of knowledge & technology intensive industries, as defined by an OECD industry classification based on R&D intensity. This includes five high R&D intensive industries: aircraft; computer, electronic, and optical products; pharmaceuticals; scientific R&D services; and software publishing. It also includes eight medium-high R&D intensive industries: chemicals (excluding pharmaceuticals); electrical equipment; information technology (IT) services; machinery and equipment; medical and dental instruments; motor vehicles; railroad and other transportation; and weapons. It’s worth noting that, in addition to high value manufacturing sectors, it includes some knowledge intensive services. But it does exclude public knowledge intensive services in education and health care, and, in the private sector, financial services and those business services outside R&D and IT services.

From this plot we can see that the UK is a small but not completely negligible part of the world’s advanced economy. This is perhaps a useful perspective from which to view some of the current talk of world-beating “global Britain”. The big story is the huge rise of China, and in this context it is inevitable that the rest of the world’s share of the advanced economy has fallen. But the UK’s fall is larger than its competitors’ (-46%, compared with -19% for the USA and -13% for the rest of the EU).

The absolute share tells us about the UK’s overall relative importance in the world economy, and should be helpful in stressing the need, in developing industrial strategy, for some focus. Another perspective is provided if we normalise the figures by population, which gives us a sense of the knowledge intensity of the economy, and might give a pointer to prospects for future productivity growth. The table shows a rank ordered list by country of value added in knowledge & technology intensive industries per head of population in 2002 and 2018. The values for Ireland and possibly Switzerland may be distorted by transfer pricing effects.

Measuring up the UK Government’s ten-point plan for a green industrial revolution

Last week saw a major series of announcements from the government about how they intend to set the UK on the path to net zero greenhouse gas emissions. The plans were trailed in an article (£) by the Prime Minister in the Financial Times, with a full document published the next day – The ten point plan for a green industrial revolution. “We will use Britain’s powers of invention to repair the pandemic’s damage and fight climate change”, the PM says, framing the intervention as an innovation-driven industrial strategy for post-covid recovery. The proposals are patchy, insufficient by themselves – but we should still welcome them as beginning to recognise the scale of the challenge. There is a welcome understanding that decarbonising the power sector is not enough by itself. The importance of emissions from transport, industry and domestic heating are all recognised, and there is a nod to the potential for land-use changes to play a significant role. The new timescale for the phase-out of petrol and diesel cars is really significant, if it can be made to stick. So although I don’t think the measures yet go far enough or fast enough, one can start to see the outline of what a zero-emission economy might look like.

In outline, the emerging picture seems to be of a power sector dominated by offshore wind, with firm power provided either by nuclear or fossil fuels with carbon capture and storage. Large scale energy storage isn’t mentioned much, though possibly hydrogen could play a role there. Vehicles will predominantly be electrified, and hydrogen will have a role for hard to decarbonise industry, and possibly domestic heating. Some hope is attached to the prospect for more futuristic technologies, including fusion and direct air capture.

To move on to the ten points, we start with a reassertion of the Manifesto commitment to achieve 40 GW of offshore wind installed by 2030. How much is this? At a load factor of 40%, this would produce 140 TWh a year; for comparison, in 2019 we used a total of 346 TWh of electricity. Even though this falls a long way short of what’s needed to decarbonise power, a build out of offshore wind on this scale will be demanding – it’s a more than four-fold increase on the 2019 capacity. We won’t be able to expand the capacity of offshore wind indefinitely using current technology – ultimately we will run out of suitable shallow water sites. For this reason, the announcement of a push for floating wind, with a 1 GW capacity target, is important.
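
For anyone wanting to check the arithmetic, the conversion from capacity to annual energy is straightforward:

```python
# Capacity (GW) at a given load factor -> annual energy (TWh/year).
HOURS_PER_YEAR = 8760

def gw_to_twh(capacity_gw: float, load_factor: float = 1.0) -> float:
    return capacity_gw * load_factor * HOURS_PER_YEAR / 1000

wind = gw_to_twh(40, load_factor=0.4)
print(f"40 GW offshore wind at 40% load factor: {wind:.0f} TWh/year")  # ~140
print(f"Share of 2019 electricity use (346 TWh): {wind / 346:.0%}")    # ~41%
```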

On hydrogen, the government is clearly keen, with the PM saying “we will turn water into energy with up to £500m of investment in hydrogen”. Of course, even this government’s majority of 80 isn’t enough to repeal the laws of thermodynamics; hydrogen can only be an energy store or vector. As I’ve discussed in an earlier post (The role of hydrogen in reaching net zero), hydrogen could have an important role in a low carbon energy system, but one needs to be clear about how the hydrogen is made in a zero-carbon way, and how it is used, and this plan doesn’t yet provide that clarity.

The document suggests the first use will be in a natural gas blend for domestic heating, with a hint that it could be used in energy intensive industry clusters. The commitment is to create 5 GW of low carbon hydrogen production capacity by 2030. Is this a lot? Current hydrogen production amounts to 3 GW (27 TWh/year), used in industry and (especially) for making fertiliser, though none of this is low carbon hydrogen – it is made from natural gas by steam methane reforming. So this commitment could amount to building another steam methane reforming plant and capturing the carbon dioxide – this might be helpful for decarbonising industry, on Deeside or Teesside perhaps. To give a sense of scale, total natural gas consumption in industry and homes (not counting electricity generation) equates to 58 GW (512 TWh/year), so this is no more than a pilot. In the longer term, making hydrogen by electrolysis and/or process heat from high temperature fission is more likely to be the scalable and cost-effective solution, and it is good that Sheffield’s excellent ITM Power gets a namecheck.
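
The same conversion puts the hydrogen figures in proportion (continuous output is assumed here, so the small differences from the rounded numbers above are just rounding):

```python
# GW of continuous output -> TWh/year, applied to the hydrogen figures.
HOURS_PER_YEAR = 8760
to_twh = lambda gw: gw * HOURS_PER_YEAR / 1000

print(f"Current hydrogen production (3 GW):    {to_twh(3):.0f} TWh/year")   # ~26
print(f"Gas use in industry and homes (58 GW): {to_twh(58):.0f} TWh/year")  # ~508
print(f"5 GW target as a share of gas use:     {5 / 58:.0%}")               # ~9%
```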

On nuclear power, the paper does lay out a strategy, but is light on the details of how this will be executed. For more detail on what I think has gone wrong with the UK’s nuclear strategy, and what I think should be done, see my earlier blogpost: Rebooting the UK’s nuclear new build programme. The plan here seems to be for one last heave on the UK’s troubled programme of large scale nuclear new build, followed up by a possible programme implementing a light water small modular reactor, with research on a new generation of small, high temperature, fourth generation reactors – advanced modular reactors (AMRs). There is a timeline – large-scale deployment of small modular reactors in the 2030’s, together with a demonstrator AMR around the same timescale. I think this would be realistic if there were a wholehearted push to make it happen, but all that is promised here is a research programme, at the level of £215m for SMRs and £170m for AMRs, together with some money for developing the regulatory and supply chain aspects. This keeps the programme alive, but hardly supercharges it. The government must come up with the financial commitments needed to start building.

The most far-reaching announcement here is in the transport section – a ban on sales of new diesel and petrol cars after 2030, with hybrids being permitted until 2035, after which only fully battery electric vehicles will be on sale. This is a big deal – a major effort will be required to create the charging infrastructure (£1.3 bn is earmarked for this), and there will need to be potentially unpopular decisions on tax or road charging to replace the revenue from fuel tax. For heavy goods vehicles the suggestion is that we’ll have hydrogen vehicles, but all that is promised is R&D.

For public transport the solutions are fairly obvious – zero-emission buses, bikes and trains – but there is a frustrating lack of targets here. Sometimes old technologies are the best – there should be a commitment to electrify all inter-city and suburban lines as fast as feasible, rather than the rather vague statement that “we will further electrify regional and other rail routes”.

In transport, though, it’s aviation that is the most intractable problem. Three intercontinental trips a year can double an individual’s carbon footprint, but it is very difficult to see how one can do without the energy density of aviation fuel for long-distance flight. The solutions offered look pretty unconvincing to me – “we are investing £15 million into FlyZero – a 12-month study, delivered through the Aerospace Technology Institute (ATI), into the strategic, technical and commercial issues in designing and developing zero-emission aircraft that could enter service in 2030.” Maybe it will be possible to develop an electric aircraft for short-haul flights, but it seems to me that the only way of making long-distance flying zero-carbon is by making synthetic fuels from zero-carbon hydrogen and carbon dioxide from direct air capture.

It’s good to see the attention on the need for greener buildings, but here the government is hampered by indecision – will the future of domestic heating be hydrogen boilers or electrically powered heat pumps? The strategy seems to be to back both horses. But arguably, even more important than the way buildings are heated is to make sure they are as energy-efficient as possible in the first place, and here the government needs to get a grip on the mess that is our current building regulation regime. As the Climate Change Committee says, “making a new home genuinely zero-carbon at the outset is around five times cheaper than retrofitting it later” – the housing that people will live in in 2050 is being built today, so there is no excuse for not ensuring that the new houses we need now – not least in the neglected social housing sector – are built to the highest energy efficiency standards.

Carbon capture, usage and storage is the 8th of our 10 points, and there is a commendable willingness to accelerate this long-stalled programme. The goal here is “to capture 10Mt of carbon dioxide a year by 2030”, but without a great deal of clarity about what this is for. The suggestion that the clusters will be in the North East, the Humber, North West, and in Scotland and Wales suggests a goal of decarbonising energy intensive sectors, which in my view is the best use of this problematic technology (see my blogpost: Carbon Capture and Storage: technically possible, but politically and economically a bad idea). What’s the scale proposed here – is 10 Mt of carbon dioxide a year a lot or a little? Compared to the total CO2 emissions for the UK – 350 Mt in 2019 – it isn’t much, but on the other hand it is roughly in line with the total emissions of the iron and steel industry in the UK, so as an intervention to reduce the carbon intensity of heavy industry it looks more viable. The unresolved issue is who bears the cost.

There’s a nod to the effects of land-use changes, in the section on protecting the natural environment. There are potentially large gains to be had here in projects to reforest uplands and restore degraded peatlands, but the scale of ambition is relatively small.

Finally, the tenth point concerns innovation, with the promise of a “£1 billion Net Zero Innovation Portfolio” as part of the government’s aspiration to raise the UK’s R&D intensity to 2.4% of GDP by 2027. The R&D is to support the goals in the 10 point plan, with a couple of more futuristic bets – on direct air capture, and on commercial fusion power through the Spherical Tokamak for Energy Production project.

I think R&D and innovation are enormously important in the move to net zero. We urgently need to develop zero-carbon technologies to make them cheaper and deployable at scale. My own somewhat gloomy view (see this post for more on this: The climate crisis now comes down to raw power) is that, taking a global view incorporating the entirely reasonable aspiration of the majority of the world’s population to enjoy the same high energy lifestyle that is to be found in the developed world, the only way we will effect a transition to a zero-carbon economy across the world is if the zero-carbon technologies are cheaper – without subsidies – than fossil fuel energy. If those cheap, zero-carbon technologies can be developed in the UK, that will make a bigger difference to global carbon budgets than any unilateral action that affects the UK alone.

But there is an important counter-view, expressed cogently by David Edgerton in a recent article: Cummings has left behind a No 10 deluded that Britain could be the next Silicon Valley. Edgerton describes a collective credulity in the government about Britain’s place in the world of innovation, which overstates the UK’s ability to develop these new technologies, and underestimates the degree to which the UK will be dependent on innovations developed elsewhere.

Edgerton is right, of course – the UK’s political and commentating classes have failed to take on board the degree to which the country has, since the 1980’s, run down its innovation capacity, particularly in industrial and applied R&D. In energy R&D, according to recent IEA figures, the UK spends about $1.335 billion a year – some 4.3% of the world total, eclipsed by the contributions of the USA, China, the EU and Japan.

Nonetheless, $1.3 billion is not nothing, and in my opinion this figure ought to increase substantially, both in absolute terms and as a fraction of rising public investment in R&D. But the UK will need to focus its efforts in those areas where it has unique advantages, while in other areas international collaboration may be a better way forward.

Where are those areas of unique advantage? One such is probably offshore wind, where the UK’s Atlantic location gives it a lot of sea and a lot of wind. The UK currently accounts for about a third of the world’s offshore wind capacity, so it represents a major market. Unfortunately, the UK has allowed a situation to develop where the prime providers of its offshore wind technology are overseas. The plan suggests more stringent targets for local content, and this does make sense, while there is a strong argument that UK industrial strategy should try to ensure that more of the value of the new technologies of deepwater floating wind is captured in the UK.

While offshore wind is being deployed at scale right now, fusion remains speculative and futuristic. The government’s strategy is to “double down on our ambition to be the first country in the world to commercialise fusion energy technology”. While I think the barriers to developing commercial fusion power – largely in materials science – remain huge, I do believe the UK should continue to fund it, for a number of reasons. Firstly, there is a possibility that it might actually work, in which case it would be transformative – it’s a long-odds bet with a big potential payoff. But why should the UK be the country making the bet? My answer would be that, in this field, the UK is genuinely internationally competitive; it hosts the Joint European Torus, and the sponsoring organisation UKAEA retains – rare in the UK – a capacity for very complex engineering at scale. Even if fusion doesn’t deliver commercial power, the technological spillovers may well be substantial.

The situation in nuclear fission is different. The UK dramatically ran down its research capacity in civil nuclear power, and chose instead to develop a new nuclear build programme on the basis of entirely imported technology. This was initially the French EPR currently being built at Hinkley Point, with another type of pressurised water reactor, from Toshiba, to be built in Cumbria, and a third type of reactor, a boiling water reactor from Hitachi, in Anglesey. That hasn’t worked out so well, with only the EPRs now looking likely to be built. The current strategy envisages a reset, with a new programme of light water small modular reactors – that is to say, a technologically conservative PWR designed with an emphasis on driving its capital cost down – followed by work on a next generation fission reactor. These “advanced modular reactors” would be relatively small, high temperature reactors. The logic for the UK to be the country to develop this technology is that it is the only country that has run an extensive programme of gas-cooled reactors, but it still probably needs collaboration with other like-minded countries.

How much emphasis should the UK put into developing electric vehicles, as opposed to simply creating the infrastructure for them and importing the technology? The automotive sector still remains an important source of added value for the UK, having made an impressive recovery from its doldrums in the 90’s and 00’s. Jaguar Land Rover, though owned by the Indian conglomerate Tata, is still essentially a UK based company, and it has an ambitious development programme for electric vehicles. But even with its R&D budget of £1.8 bn a year, it is a relative minnow by world standards (Volkswagen’s R&D budget is €13bn, and Toyota’s only a little less); for this reason it is developing a partnership with BMW. The government should support the UK industry’s drive to electrify, but care will be needed to identify where UK industry can find the most value in global supply chains.

A “green industrial strategy” is often sold on the basis of the new jobs it will create. It will indeed create more jobs, but this is not necessarily a good thing. If it takes more people, more capital, more money to produce the same level of energy services – houses being heated, iron being smelted, miles driven in cars and lorries – then that amounts to a loss of productivity across the economy as a whole. Of course this is justified by the huge costs that burning fossil fuels impose on the world as a whole through climate change, costs which are currently not properly accounted for. But we shouldn’t delude ourselves. We use fossil fuels because they are cheap, convenient, and easy to use, and we will miss them – unless we can develop new technologies that supply the same energy services at a lower cost, and that will take innovation. New low carbon energy technologies need to be developed, and existing technologies made cheaper and more effective.

To sum up, the ten point plan is a useful step forward. The contours of a zero-emissions future are starting to emerge, and it is very welcome that the government has overcome its aversion to industrial strategy. But more commitment and more realism are required.

Talking about industrial strategy, “levelling up” and R&D

I’ve done a number of events over the past week on the themes of industrial strategy, “levelling up” and R&D. Here’s a summary of links to the associated videos, transcripts and podcasts.

1. Foundation for Science and Technology event: “The R&D roadmap and levelling up across the UK”. 7 October 2020.

An online seminar with me, the UK Science Minister, Amanda Solloway MP, and the Welsh Government Minister for the Economy, Transport and North Wales, Ken Skates MS.
Transcripts & YouTube video can be found here.

An associated podcast of an interview with me is here.

2. Oral evidence to House of Commons Science Select Committee on “A New UK Research Agency modelled on ARPA”, 7 October 2020

An evidence session with myself and Mariana Mazzucato (Professor in the Economics of Innovation & Public Value at UCL):
Transcripts;
Video.

3. Seminar for Tony Blair Institute for Global Change, 9 October 2020: “UK Industrial Strategy’s three horsemen: COVID, Brexit and trade wars”

An online seminar featuring myself, the economist Dame Kate Barker, and Anand Menon (Director of UK in a Changing Europe at King’s College London)
YouTube Video

Give the UK’s nations and regions the tools they need to prosper

This piece is based on talks I’ve given to present some of the arguments of the paper Tom Forth and I have just published with NESTA. The full paper is available here: The Missing £4 Billion: Making R&D work for the whole UK.

The UK is two countries, economically. In terms of productivity, “Greater South East England” – London, the South East and some of the East of England – is a country with a level of productivity comparable to the richest parts of Northern Europe. But much of the rest of the UK – including the Midlands, the North, much of the Southwest of England, together with Wales and Northern Ireland – is more comparable to East Germany and Southern Italy in its productivity.

The differences aren’t quite as stark when we look at living standards, because the UK runs an effective transfer union, where money generated in London and the South East is used to run the public services in the rest of the country. In terms of the balance between the tax and other revenues generated, and current government expenditure, only three regions of the UK put in more than they take out – the highly productive regions of London, the South East and the East of England.

The argument about “levelling up” economic performance across the country is often presented in terms of fairness. But we would have a fairer country if the Greater South East could keep more of the money it generates, while the rest of the country was able to pay its own way. A less economically unbalanced country would be both fairer and more prosperous.

But while the current expenditures of the less productive parts of the country are heavily subsidised by the greater South East, the opposite is the case for those types of investments that would enhance the productivity of the economically lagging regions. For investments like research and development, we spend the most money in exactly those regions that are already the most prosperous and productive. In effect, for many decades, we have been operating an anti-regional policy.

Currently, the regions and subregions containing London, Oxford and Cambridge account for 46 per cent of public and charitable R&D in the UK, with just 21 per cent of the population. Strikingly, public spending on R&D is even more concentrated than private sector spending.

By general agreement, the UK invests too little overall in R&D anyway. The nation’s R&D intensity – total spending on R&D, public and private, as a fraction of GDP – is 1.66 per cent, closer to countries like Italy and Spain than Germany or France, let alone innovation leaders like South Korea, whose total R&D spending is 4.55% of GDP. That’s why it’s welcome that the government has committed to increasing public spending on R&D to £22 billion a year by 2025, to get closer to the OECD average R&D intensity of 2.4%.

How much money would it take to increase R&D spending in the nations and regions to the level in greater South East England? To “level up” per capita investment right across the country would take a bit more than £4 billion a year – £1.6 billion would need to go to the North of England, £1.4 billion to the Midlands, £420 million to Wales, £580 million to South West England and £250 million to Northern Ireland, with spending in Scotland largely unchanged.
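
Summing those regional uplifts confirms the headline figure:

```python
# Regional uplifts needed to "level up" per capita R&D spending (in £m/year).
uplift = {
    "North of England":   1600,
    "Midlands":           1400,
    "Wales":               420,
    "South West England":  580,
    "Northern Ireland":    250,
}
total = sum(uplift.values())
print(f"Total: £{total / 1000:.2f} billion a year")
# £4.25bn - "a bit more than £4 billion a year"
```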

These are large numbers. The problem of regional R&D imbalances is a long-standing one, and there’s a tendency among some policy makers to say, “we’ve tried to solve this before and nothing’s worked”. The Regional Development Agencies in England spent about £100 million a year on innovation in the mid-2000’s. This did some useful things but was an order of magnitude too small to make a material difference. We failed in the past because we didn’t really try.

But in the context of a planned increase in R&D spending to £22 billion, and given a current 20/21 budget for UKRI (the UK’s single research and innovation agency) of £8.4 billion (itself a substantial increase on earlier years), the necessary increases in the nations and regions are entirely feasible within the planned funding uplift.

Of course, it’s easy to spend money, but more difficult to do this well in a way that maximises the chances that it will lead to better economic outcomes for the whole of the UK, at the same time contributing to the nation’s wider goals. But there are some general guiding principles.

Firstly, we should follow the signals that the market sector gives us. Regions like the English Midlands and North West are characterised by private sector investment in R&D that is disproportionately large compared to the public sector investment. Here there are innovation systems that are strong already, but they need to be supported by public sector investment in the same way as happens in the more prosperous Greater South East England. There is a more immediate crisis here as well. The impact of Covid-19 on the aerospace and automotive industries is a threat to these innovation systems, and we need to preserve the massive concentrations of know-how in companies like Rolls-Royce and JLR, and their suppliers.

Secondly, where we need to build innovation capacity in those parts of the country which are relatively weak in both public and private sector R&D, we should look to the entirely new industries and clusters we need to build up to meet future challenges. For example, we might want to ask, as we emerge from the current pandemic, whether the life sciences sector we have is the right one to meet this kind of public health crisis.

This short-term pandemic crisis shouldn’t blind us to the fact that we’re immersed in the much longer-term crisis of climate change. The government has signed up to a target of net zero greenhouse gas emissions by 2050. This implies a massive transition for our economy, which needs to be underpinned by innovation to make it affordable and achievable. We could be building a new hydrogen economy on Teesside and the Humber, deep-sea floating offshore wind in the South West, and next-generation small modular reactors in Cumbria, all underpinned by research and innovation.

Thirdly, we need to break out of the trap that many of our towns and urban fringes have found themselves in, where low skills, low innovation and low productivity reinforce each other in a bad equilibrium, leading to low wages and poor health outcomes. To break this cycle, we need both to raise the demand for skills – by attracting inward investment from technologically leading companies and driving up the innovative capacity of the existing business base – and to create the supply of skills, through a much more joined-up approach between further and higher education. The creation of more Advanced Manufacturing Innovation Districts, like the one that’s grown up around the Advanced Manufacturing Research Centre in Rotherham, is one way to do this.

Different places have different problems, so there won’t be a single solution. Our major cities outside the greater South East still underperform compared to second-tier cities in France or Germany – agglomeration effects are important, but in the UK we don’t seem to be able to capture them fully. These cities need more R&D as part of a wider expansion of high-value, knowledge-intensive business services. Meanwhile, some of the most intractable economic and social problems are to be found in the UK’s coastal and rural fringes – but more R&D probably isn’t the right recipe there. R&D is important, but it’s far from the only tool we have.

The UK’s economic imbalances are long-standing and long-recognised problems – and yet little progress has been made towards solving them. The UK’s highly centralised state is part of the problem. At this unique moment, when total R&D investment is planned to increase, we can rebalance R&D across the country without jeopardising the strong innovation systems of the greater South East, which remain a national asset.

A substantial fraction of the planned uplift in R&D spending should be devolved – to the devolved nations, and in England to cities and regions. This isn’t completely straightforward, because of the messy nature of the incomplete English devolution settlement. And it’s a fair comment that many cities and regions don’t yet have the capacity they need to make effective choices about how to spend R&D funds. But these aren’t reasons not to make the changes that are needed; they underline the need to take devolution further and develop that capacity.

To read the whole paper, see: The Missing £4 billion: making R&D work for the whole UK.

The Missing £4 billion: making R&D work for the whole UK

Tom Forth and I have a new policy paper out, published by the innovation foundation Nesta, called The Missing £4 billion: making R&D work for the whole UK.

This was covered by the Financial Times, complete with celebrity endorsement: Academic cited by Cummings wants to redraw map of research spending

Here is the Executive Summary:

The Missing £4 billion: making R&D work for the whole UK

The UK’s regional imbalances in economic performance are exacerbated by regional imbalances in R&D spending

There are two economies in the UK. Much of London, South East England and the East of England has a highly productive, prosperous knowledge-based economy. But in the Midlands and the North of England, in much of South West England and in Wales and Northern Ireland, the economy lags behind our competitors in Northern Europe. Scotland sits in between. In underperforming large cities, in towns that have never recovered from deindustrialisation, in rural and coastal fringes, weak innovation systems are part of the cause of low productivity economies.

The government supports regional innovation systems through its spending on public sector research and development (R&D). This investment is needed now more than ever; we have an immediate economic crisis because of the pandemic, but the long-term problems of the UK economy – a decade of stagnation of productivity growth, which led to stagnant wages and weak government finances, and persistent regional imbalances – remain. Government investment in R&D is highly geographically imbalanced. If the government were to spend at the same intensity in the rest of the country as it does in the wider South East of England, it would spend £4 billion more. This imbalance wastes an opportunity to use public spending to ‘level up’ areas with weaker economies and achieve economic convergence.

The UK’s research base has many strengths, some truly world leading. But three main shortcomings currently inhibit it from playing its full role in economic growth. It is too small for the size of the country, it is relatively weak in translational research and industrial R&D, and it is too geographically concentrated in already prosperous parts of the country, often at a distance from where business conducts R&D.

The UK’s R&D intensity is too low

The UK’s overall R&D intensity is low. Measured as a ratio to (pre-COVID-19 crisis) gross domestic product (GDP), the Organisation for Economic Co-operation and Development (OECD) average is 2.37 per cent. The UK, at 1.66 per cent, is closer to countries like Italy and Spain than Germany or France.

The UK government has committed to matching the current OECD average by 2027, pledging an increase in public spending to £22 billion by 2025. Looking internationally shows us that substantial increases in R&D intensity are possible. Austria, Belgium, Denmark and Korea have all dramatically increased R&D intensity in recent decades. The major part of these increases is funded by the private sector, but public sector increases are almost always required alongside or in advance of this. The ratio of R&D funding from the two sources is typically 2:1, and this is a good rule of thumb for considering how increased R&D might be funded in the UK.
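As a back-of-the-envelope check on what these commitments imply, here is that rule of thumb in a few lines of Python; the GDP figure is a round, illustrative pre-pandemic number rather than an official statistic:

```python
# Rule-of-thumb arithmetic: apply the target R&D intensity to GDP,
# then split the total 2:1 between private and public funders.

gdp_bn = 2_200            # UK GDP, £bn -- a round illustrative figure
target_intensity = 0.024  # OECD average R&D intensity, 2.4% of GDP

total_rd = gdp_bn * target_intensity  # total R&D spending needed, £bn
public_rd = total_rd / 3              # public share under the 2:1 ratio
private_rd = total_rd - public_rd

print(f"Total R&D at 2.4% of GDP: £{total_rd:.0f}bn a year")
print(f"Public share:  £{public_rd:.0f}bn")
print(f"Private share: £{private_rd:.0f}bn")
```

On these round numbers the public share comes out at roughly £18 billion a year – the same ballpark as the government’s £22 billion pledge.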

The UK’s R&D is highly regionally imbalanced

Looking at both the total level of spending on R&D and the ratio of public to private R&D spending is a good way to classify innovation systems within regions (a rough sketch of this classification in code follows the list).
• The South East and East of England are highly research intensive with high investment by the state combined with business investment exceeding what we would expect from a 2:1 ratio.
• London and Scotland receive above-average levels of state investment but have lower-than-average levels of business investment.
• The East Midlands, the West Midlands and North West England are business-led innovation regions with business investment in R&D at or above the UK average but low levels of public investment.
• Wales, Yorkshire and the Humber, and North East England are regional economies with notably low R&D intensities in both the market and non-market-led sectors.
• South West England and Northern Ireland sit between these two groups with similarly low levels of public investment but slightly higher private sector spending on R&D.
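The same two-axis scheme can be written down as a small function. This is a sketch of my own, with crude binary thresholds – the paper works from the measured intensities themselves, not from code like this:

```python
# Classify a regional innovation system by whether its public and
# business R&D intensities sit above or below the UK average.
# Thresholds and labels are a crude caricature of the groupings above.

def classify(public: float, business: float,
             avg_public: float, avg_business: float) -> str:
    high_pub = public >= avg_public
    high_bus = business >= avg_business
    if high_pub and high_bus:
        return "research intensive (South East, East of England)"
    if high_pub:
        return "state-led (London, Scotland)"
    if high_bus:
        return "business-led (Midlands, North West England)"
    return "low intensity in both sectors (Wales, North East England)"
```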

A single sentence can summarise the extent to which the UK’s public R&D spending is centralised in just three cities. The UK regions and subregions containing London, Oxford and Cambridge account for 46 per cent of public and charitable R&D in the UK, but just 31 per cent of business R&D and 21 per cent of the population.

How the current funding system has led to inequality

The current situation is the result of a combination of deliberate policy decisions and a natural dynamic in which small policy preferences, combined with initial advantages, are reinforced over time.

For example, of a series of major capital investments in research infrastructure between 2007 and 2014, 71 per cent was made in London, the East and South East of England, through a process criticised by the National Audit Office. The need for continuing revenue funding to support these investments locks in geographical imbalances in R&D for many years.

Imbalanced investment in R&D is, at most, only part of why the UK’s regional economic divides widened in the past and have failed to close in recent decades. But it is a factor that the government can influence. It has failed to do so. Where attempts have been made to use R&D to balance the UK’s economic strengths, they have been insufficient in scale. For example, in the 2000s the English regional development agencies allocated funding with preference to regions with weaker economies, but their total R&D spend was equivalent to just 1.6 per cent of the national R&D budget. These efforts could never have hoped to succeed. Unsurprisingly, and in contrast to vastly larger schemes in Germany, they failed.

We need to do things differently

The sums needed to rebalance R&D spending across the nation are substantial. A crude calculation shows that to level up per capita public spending on R&D across the nations and regions of the UK to the levels currently achieved in London, the South East and the East of England, additional spending of more than £4 billion would be needed: £1.6 billion would need to go to the North of England, £1.4 billion to the Midlands, £420 million to Wales, £580 million to South West England and £250 million to Northern Ireland. Spending in Scotland would be largely unchanged.

These numbers give a sense of the scale of the problem, but equalising per capita spending is not the only possible criterion for redistributing funding.

We want people to explore other criteria that might guide thinking on where UK public sector and charity spending on R&D is generating the most value possible. The online tool accompanying this paper models different geographical distributions of public R&D spending obtained according to the weight attached to factors such as research excellence, following business R&D spending, targeting economic convergence and investing more where the manufacturing sector is stronger.
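To give a feel for how such a tool can work under the hood, here is a minimal sketch of weighted proportional allocation. The regions, factor scores and weights are invented for illustration – this is not the tool’s actual data or code:

```python
# Weighted reallocation: each region gets a composite score from
# normalised factor values, and the budget is split in proportion.

def allocate(budget, factors, weights):
    """factors: {region: {factor: score in [0, 1]}}; weights: {factor: w}"""
    scores = {region: sum(weights[f] * vals[f] for f in weights)
              for region, vals in factors.items()}
    total = sum(scores.values())
    return {region: budget * s / total for region, s in scores.items()}

# Illustrative call: weight business R&D twice as heavily as excellence.
factors = {"North West": {"excellence": 0.6, "business_rd": 0.8},
           "London":     {"excellence": 0.9, "business_rd": 0.5}}
shares = allocate(4_000, factors, {"excellence": 1.0, "business_rd": 2.0})
print({region: f"£{v:,.0f}m" for region, v in shares.items()})
```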

Importantly, we do not propose that UK R&D funding is assigned purely by algorithm. We have found that the scale of current imbalances in funding and the scale by which current spending fails to meet even its own stated goal of funding excellence are widely underappreciated. Our tool aims to inform and challenge, not replace existing systems.

To spread the economic benefits of innovation across the whole of the UK, changes are needed. These will include a commitment to greater transparency on how funding decisions are made in the government’s existing research funding agencies, an openness to a broader range of views on how this might change and devolution of innovation funding at a sufficient scale to achieve a better fit with local opportunities.

For the full paper, see The Missing £4 billion: making R&D work for the whole UK.

The white heat of technology vs the cronut economy: two views on the productivity slowdown

A review of two books on innovation:

  • Windows of Opportunity: how nations create wealth, by David Sainsbury
  • Fully Grown: why a stagnant economy is a sign of success, by Dietrich Vollrath

As I write, the world economy is in a medically induced coma, as governments struggle to deal with the effects of the Covid-19 pandemic. But not everything was rosy in the developed world’s economies before the pandemic; the long-term picture was one of declining productivity growth leading to stagnating living standards. Even after the pandemic has passed, these problems will remain. These two books highlight the problem of faltering productivity growth, but take diametrically opposing views about what has caused it, and indeed about whether it is a problem at all.

Where does productivity growth come from? An obvious answer is the development of new technologies. The late medieval invention of the blast furnace increased the amount of iron a man could produce a day by about a factor of 10. In the 18th century Richard Arkwright invented the water frame, and a single machine in his factory could do the work of tens or hundreds of spinners of yarn working at home. More recently, we’ve seen the work of scores of clerks, calculators and typists being replaced by inexpensive computers.

But Dietrich Vollrath cautions us against equating productivity growth with technology: “From the perspective of economic growth, the word technology doesn’t mean anything. There is productivity growth, and that’s it.” At the centre of Vollrath’s book is an eloquent exposition of what’s become the mainstream economic theory of growth, originating with the work of Robert Solow, leading to the counterintuitive, but essentially comforting, conclusion that the slowdown in productivity we are living through is a sign of success, not failure.

Vollrath’s book is a pleasure to read. It contains the clearest explanations I’ve ever read of the central concepts of growth accounting, such as what’s meant by “constant returns to scale”, and the significance of the Solow residual. His highlighting of the effect of demographic changes on productivity growth in the USA is illuminating and convincing (though of course this analysis is US-centred, and other countries will have different experiences). Yet I think he is too quick to dismiss the possibility that the slowdown in productivity growth we’ve seen in developed countries across the world is related to a real slowdown in the rate of technological progress.

David Sainsbury, unlike Dietrich Vollrath, is not an academic economist. As a former UK Science Minister, he looks to economic theory as a guide to policy, and he doesn’t like what he sees. To Sainsbury, the Solow theory and its later elaborations are bound to fail, because they don’t appreciate the complexity and heterogeneity of production in the modern world – in these theories, “it doesn’t matter whether a firm is producing potato chips or microchips”. The aim of Sainsbury’s book is to “look more closely at why neoclassical growth theory has proved such a poor guide to policy makers seeking to increase the growth rates of their countries, and why it is of so little use in explaining the growth performance of countries”.

For Sainsbury, the key to economic growth is to be found at the level of firms – “a nation’s standard of living depends on the ability of its firms to attain a high and rising level of value-added per capita in the industries in which they compete”. Firms can do this by innovating to develop process improvements which drive up their productivity compared to their rivals. Or they can identify new market opportunities that open up as a result of technological developments.

These technological opportunities are uneven – at any given time, one industry may be seeing dramatic technological change (for example the ICT industry in the second half of the twentieth century), while other sectors are relatively stagnant. The crucial trick is to identify those sectors where technological capabilities, together with matching market opportunities, open up the “windows of opportunity” of the book’s title.

For Paul Romer and subsequent economists, what’s important for innovation is market power. As Vollrath discusses, market power is required for a firm to be able to innovate, because without market power the firm cannot charge the mark-ups it needs to compensate for the costs of innovation. “Without mark-ups there is no incentive to invest in R&D… Without R&D there are no non-rival innovations. And without non-rival innovations, there is no productivity growth.”

In Vollrath’s account, market power can arise from government intervention, particularly through the assignment of intellectual property rights – the time-limited legal monopoly granted to companies to profit from their inventions. It can also arise through the difficulty of reproducing manufacturing processes, because of the tacit knowledge inherent in them. But too much market power can limit innovation, too. As patent law in the USA has changed, more and more trivial innovations have become patentable, while the existence of “patent troll” firms, whose entire business model consists of suing firms for infringing their patent portfolios, demonstrates that lax standards for granting patents can lead to unproductive rent-seeking as well as innovation. For Vollrath, permissive patenting and a weakening of competition law have probably pushed the USA beyond the point at which market power brings diminishing returns.

What about the role of the government? For Vollrath, the government’s main role is to tax and regulate, and in a rather unexciting chapter he concludes that there’s no real evidence that over-taxation or over-regulation has had a material effect on productivity growth either way. The role of the government in driving innovation is entirely omitted.

But governments have a crucial role here. The US government spent $121 billion on R&D in 2017 – and that wasn’t just academic research in universities; $24 billion worth of R&D carried out in companies was directly paid for by the federal government. I’ve discussed before (in my post “The semiconductor industry and economic growth theory”) how crucial government investment was in creating the semiconductor industry.

Unsurprisingly, Sainsbury, as a former science minister, has a lot more to say about the way government spending on R&D can underpin a wider innovation system, identifying the fall in federal research funding as a share of GDP as one factor underlying the USA’s declining innovation performance. The sections in his book on sectoral, national, regional and city innovation systems carry both the positives and negatives of being written by a policy insider – very well informed, but with an occasional sense of defending the writer’s record in office. Sainsbury’s chapter on skills, though, is outstanding, reflecting the attention he and his foundation have given this important topic since he left his government role.

The neglect of government’s role in R&D in Vollrath’s book is consistent with his wider tendency to downplay technological innovation as a source of productivity growth. Instead, at the centre of his argument is the idea that the productivity slowdown has arisen largely from an economic shift from manufacturing to services, and that this is a good thing. Manufacturing tends to have faster productivity growth than services, so if more of the economy moves towards services, then average productivity growth will necessarily fall. But, to Vollrath, this represents the outcome of rational choices by consumers – the natural and positive outcome of a fully grown economy.

To understand this switch, we need to look to the work of the economist William Baumol. As I discussed in a previous post (“A toy model of Baumol’s cost disease”), Baumol introduced the important (but misleadingly named) concept of “cost disease”. If an economy has two sectors, one with fast productivity growth (for example manufacturing) and another with much slower or non-existent growth (typically services), then the sector with slower productivity growth will become relatively more expensive. It’s plausible that people will respond to this, in the context of the general increase in prosperity resulting from higher productivity in manufacturing, by buying more services despite their greater relative cost. Hence there’s a tendency for the economy to become more weighted (by the value of its output) towards services.
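A few lines of Python make the mechanism concrete. This is a toy simulation under assumed, constant productivity growth rates, not a calibrated model:

```python
# Two-sector Baumol toy: productivity in manufacturing grows fast,
# in services it barely grows. With a common wage and prices set by
# unit labour cost, services become relatively more expensive.

years = 50
g_manuf, g_serv = 0.03, 0.005  # assumed annual productivity growth rates

prod_m = (1 + g_manuf) ** years  # output per worker, manufacturing
prod_s = (1 + g_serv) ** years   # output per worker, services

# Unit cost is wage / productivity, so the price of services relative
# to manufactures is the ratio of the two productivities.
print(f"After {years} years, services cost {prod_m / prod_s:.1f}x "
      f"as much relative to goods as they did at the start")
```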

Of course, this process has been going on for centuries. Huge increases in the productivity with which we can produce food, simple manufactured goods like textiles and homewares, and successively more technologically complex goods like cars and consumer electronics, mean that their prices have collapsed relative to personal services. Vollrath’s argument is that this process reached some kind of critical point in the year 2000: “what changed in 2000 was that the share of economic activity [of services] had reached such a high level that the drag on productivity growth from this shift finally became tangible.” There doesn’t seem to be a lot of evidence to support this particular timing.

But there’s one important feature of Baumol’s argument that doesn’t emerge clearly at all in Vollrath’s book: the way in which Baumol’s mechanism effectively transfers value from sectors with high productivity growth to sectors with low productivity growth. To illustrate this, let’s look at Vollrath’s prime example of an innovation not dependent on high technology that has nonetheless raised productivity – the Cronut. For those of us outside the USA, I need to explain that a Cronut is a new kind of bun invented in New York, consisting of a deep-fried torus of croissant dough (the estimable British bakery chain Greggs trialled a similar confection in the UK, but it didn’t catch on). “I don’t know if Cronuts count as technology, but I do know they raised productivity because they led people to put a higher value on a given set of raw inputs”.

It’s worth thinking through where this higher value comes from. We need to begin by being precise about what we mean by productivity. A non-economist might think of productivity in terms of the number of cronuts a worker might produce a day. This is the kind of productivity that can be increased by automation. Croissant dough consists of a laminate of many layers of yeast-leavened bread dough separated by butter, quite labour-intensive to make by hand, but using a mechanical dough-sheeter would greatly increase a worker’s output. To an economist like Vollrath, it isn’t this kind of output productivity that’s being talked about, though. For an economist, productivity is measured in terms of the money value of the output. If you run a small bakery, and you increase your output tenfold by installing a dough-sheeter, as long as you have a market to sell your increased output at the same price, you have increased both types of productivity – you produce more cronuts, and you make more money.

But in the long term, and over whole economies, output productivity and money productivity don’t behave in the same way, because of Baumol’s cost disease mechanism. One might suspect that our New York artisanal cronut makers resist the lure of industrial dough-sheeters and the like, and rely on the same technologies that their nineteenth century antecedents did. Although the output productivity of their baked and deep fried goods would be unchanged, the real money value of what they produce would be greater, just because of Baumol’s cost disease.

To the extent that patisserie has seen low growth in its output productivity since the 19th century, while there have been order of magnitude increases in the number of motor cars or record players or washing machines produced by a single worker, the artisanal patisserie sector will have been affected by Baumol’s cost disease. This will have raised the relative price of cronuts compared to a basket of other manufactured products, whose sectors have seen much bigger productivity increases. Thus the reason that cronuts cost more in 21st century New York than they would have in 19th century Paris (where the technology to make them certainly existed) is because of the 20th century revolution in productivity in other sectors.

So, one very effective way to increase money productivity in sectors with low output productivity growth is to increase the output productivity growth in some other sector. It’s not so much that a rising tide lifts all boats, but that the leading sectors pull everything else along behind them. For this reason, I think Vollrath underestimates the importance of sectors seeing rapid growth in output productivity – the very sectors that Sainsbury stresses one should support and emphasise in his book.

It is, of course, unfortunate that Vollrath has written an essentially optimistic book about the economy that’s been released precisely at the moment of a historically unprecedented economic downturn. But there is a much more serious omission.

There’s not a single mention in the book of the problem of climate change, or the challenge of transitioning a world economy that depends on fossil fuels to low carbon energy sources. In talking about the inputs to economic growth, Vollrath says “we could also consider the stocks of natural resources, but these are bit players in the story”. This comment is very telling.

Energy is very much cheaper now, relative to other goods and to incomes, than it was a few hundred years ago. The technology of extracting fossil fuels has allowed many more units of energy to be extracted for a given set of inputs – most recently, for example, in the fracking revolution that has transformed the USA’s energy economy. So, following Baumol’s principle, the relative price of energy has fallen.

But this doesn’t mean the relative importance of energy has fallen with its price – as we will find out if we have to do without it. If we don’t find – through large-scale technological innovation – zero carbon sources of energy at lower cost than fossil fuels, we will either have to suffer the loss of living standards – and indeed of life – that will follow from unmitigated climate change, or we will have to accept that economic growth will go into reverse. Energy prices will increase and we will all be worse off.

In fact, Vollrath doesn’t just underestimate the role of technological innovation in growth up to now, he’s actually positively sceptical about whether we need any more: “given our current life expectancy and living standards the risks inherent in any technology … may not be worth pursuing just to add a fraction of a percentage point to the growth rate”. On this, I think he could not be more wrong. We urgently need new technology, not to add a percentage point to the growth rate, but precisely so we can maintain our current life expectancy and living standards – not to mention allow the rest of the world to enjoy what we, in rich countries, take for granted.

A toy model of Baumol’s cost disease

I’ve recently read Dietrich Vollrath’s book “Fully Grown: why a stagnant economy is a sign of success”. It’s interesting and well-written, though I’m not entirely convinced by the conclusion that the sub-title summarises. I’ll write more about that later, but it did prompt me to think more about Baumol’s cost disease, something I’ve written about in an earlier post: How cheaper steel makes nights out more expensive (and why that’s a good thing).

In this well-known phenomenon, a differential in productivity growth rate between goods and services leads both to the cost of services relative to goods increasing, and to services taking a larger share of the economy. It’s this shift of the economy from high-productivity-growth goods manufacturing into low-productivity-growth services that Vollrath ascribes part of our current growth slowdown to, and he thinks this is entirely positive.

Vollrath introduces a simple toy model to think about Baumol’s cost disease. It’s simple enough to express mathematically, but doing so produces some apparently paradoxical results. I think reflecting on these paradoxes can give some insight into the difficulties of measuring growth in an economy in which one sector advances much faster than another. As I’ve written before, this highly uneven technological progress is very characteristic of our economy, where, for example, we’ve seen orders of magnitude increases in computing power in the last century, while in other sectors, like construction, little has changed. For the mathematical details, see these notes (PDF) – here I summarise some of the main results.
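To give a flavour of the mathematics without reproducing the linked notes, the core of such a two-sector model can be stated in a couple of lines of algebra; the notation here is my own, not necessarily that of the notes:

```latex
% Two sectors with labour productivities growing at constant rates,
% a_i(t) = a_i e^{g_i t}, with g_1 > g_2 (say, goods vs services).
% With a common wage w(t) and prices set by unit labour cost:
\[
  p_i(t) = \frac{w(t)}{a_i\, e^{g_i t}}
  \quad\Longrightarrow\quad
  \frac{p_2(t)}{p_1(t)} = \frac{a_1}{a_2}\, e^{(g_1 - g_2)\,t} ,
\]
% so the relative price of the slow-growth sector rises exponentially
% at rate g_1 - g_2: cheaper steel makes nights out more expensive.
```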