With the Commons Science Select Committee on “The role of technology, research and innovation in the COVID-19 recovery”

The House of Commons Select Committee on Science and Technology visited Manchester on 21st September, and I was asked to give oral evidence, with others, to its inquiry on “The role of technology, research and innovation in the COVID-19 recovery”. The full verbatim transcript is available here; here are a few highlights.

My opening statement

Chair: Perhaps I can start with a question to Professor Jones. Everybody around the world associates Manchester with technology over the ages, but if we look at the figures, the level of research and development investment, in the north-west at least, is below the national average. Give us a feeling for why that might be and whether that is inevitable and reflects things that we cannot help or what we should be doing about it, bearing in mind that we will be going into a bit more detail later in the session.

Professor Jones: On the question of the concentration of research, this is something that has happened over quite a long time. The figure that I have in my mind is that 46% of all public and charitable R&D happens in London and the two regions that contain Oxford and Cambridge. There is no doubt—it is not just a question of Manchester—that the distribution of public research money across the country is very uneven.

That has been a consequence partly of deliberate decisions—there has been a time when the idea has been, particularly when funding seemed tight, that it would be better to concentrate money in a few centres—but when it is given out competitively without regard for place, there is a natural tendency for concentration. Good people go to where existing facilities are. That allows you to write stronger bids, and in that case there is a self-reinforcing element. That process has played out over quite a long time. It has got us to the situation of quite extreme imbalance.

I have been talking there about public R&D. It is very important to think about private R&D as well. There is an interesting disparity between where the private sector invests its R&D money and where the public sector does. One finds places like Cambridge, which are remarkable places, where there is a lot of public sector R&D but then the private sector piles in with a great deal of money behind that. Those are great places that the country should be proud of and encourage. Particularly in the north-west, in common with the east midlands and west midlands, too, the private sector is investing quite a lot in R&D, but the public sector is not following those market signals and, in a sense, exploiting what in many ways are innovation economies that could be made much stronger by backing that up with more public funding.

On excellence and places

Graham Stringer: This is my final question on this section. The drift of great scientists to the golden triangle has been going on for a long time. Rutherford discovered the nucleus of the atom a quarter of a mile down the road in what is now a committee room, sadly. Rutherford left Manchester and went to the Cavendish afterwards. Do you think it is possible to stop that drift, because money also follows great scientists as well as institutions? The University of Manchester is a world-class university, but do you think it is possible to stop that drift and get University of Manchester, and some of the other great northern universities, up the pecking order to be in the same region as Imperial, Oxford and Cambridge?

Professor Jones: Yes, there is scope to do that. You mentioned Rutherford. I used to teach in the Cavendish myself, so I have made the reverse journey.

The point that is important, if we talk about excellence, is that people loosely say Cambridge is excellent. Cambridge is not excellent. Cambridge is a place that has lots of excellent people. The thing that defines excellence is people, and people will respond to facilities. If we create excellent facilities and an excellent environment, then excellent people from all over the world will want to come to those places.

It is possible to be too deterministic about this. One can create the environment that will attract excellent people from all over the world. That is what we ought to aim to do if we want to spread out scientific excellence across the country.

Graham Stringer: To simplify: the answer is for investment in absolutely world-class kit in universities away from the golden triangle?

Professor Jones: It is world-class kit, but it is also the wider intellectual climate: excellent colleagues. People like to go where there are excellent colleagues, excellent students. That is the package that you need.

On “levelling-up” and R&D spending

Chair: As you say, clearly it would not be a step towards achieving the status of a science superpower if we were reducing core budget, so the opportunity to have a greater quantity of regional investment comes from an increase in the budget. Is it fair to infer logically from that that, of the increase, you would expect a higher proportion to be regionally distributed than the current snapshot of the budget?

Professor Jones: Yes, absolutely. If we take the Government at their word about saying that there are going to be genuine increases in R&D, this does give us a unique opportunity because we have had quite flat research budgets for a couple of decades. Up to now we have always been faced with that problem: do you really want to take money away from the excellence of Oxford and Cambridge to rebalance? That is a difficult issue because, as I said in my opening remarks, Cambridge is a fantastic asset to the UK’s economy. But if we do have this opportunity to see rising budgets, if we are going from £14.9 billion to £22 billion—that is a £7 billion rise that has been pencilled in—it would be very disappointing if a reasonable fraction of that was not ring-fenced to start to address these imbalances, specifically with the aim of boosting the economy of those places whose productivity is too low and needs to be raised.

I think that tying it very directly to the Government’s goals of levelling up, increasing the productivity of economically lagging regions as well as their other very important goals of net zero, would be entirely reasonable.

Chair: That is literally and specifically what you are describing, is it not—levelling up, in the sense that you have said you do not want to take down the budgets of existing institutions, you want to increase the others? That is levelling up.

Professor Jones: Indeed.

On the £22 billion target for UK government R&D

As the pandemic moves to a new phase, it’s natural to assume that the Prime Minister would want to make progress on the other agendas that he might hope would define his tenure. Prominent in these has been his emphasis on the need to restore the UK’s place as a “Science Superpower” – for example, in his 15 July speech he said: “We are turning this country into a science superpower, doubling public investment in R and D to £22 billion and we want to use that lead to trigger more private sector investment and to level up across the country”, a theme he’d set out in detail in a 21 June article in the Daily Telegraph. The theme of boosting science and innovation is also crucial to the government’s other key priorities, “levelling up” by reducing the gross geographical disparities in economic performance and health outcomes across the UK, delivering on the 2050 “Net Zero” target, and securing a strategic advantage in defence and security through science and technology in an increasingly uncertain world.

And yet the public finances are in disarray following the huge increase in borrowing needed to get through the pandemic, the economy has suffered serious and lasting damage, and the talk is of a very tight settlement in the upcoming comprehensive spending review, given the need for further spending in the NHS and education systems to meet the ongoing costs and lasting aftermath of the pandemic. How robust will the Prime Minister’s commitment to science and innovation be in the face of many other pressing demands on public spending, and a Treasury seeking to bring the public finances back towards more normal levels of borrowing? The government is committed to two R&D numbers. The first is a long term target of increasing the total R&D intensity – public and private – of the UK economy to 2.4% by 2027. The second, shorter term, promise – of increasing government spending on research and development to £22 billion by 2024 – was introduced in the 2020 budget, and has recently been reasserted by the Prime Minister, though without a date attached.

Some people might worry that the government could seek to resolve this tension by creative accounting – finding a way to argue that the letter of the commitments has been met while failing to fulfil their spirit. It would be possible to bend the figures to do this, but it would be a bad idea that would put in jeopardy the government’s larger stated intentions. In particular, anything that involves a reclassification of existing expenditure, rather than an overall increase, will do nothing to help the goal of making the UK’s economy more R&D intensive and achieving the overall 2.4% R&D intensity target.

To begin with, let’s look at the current situation. A breakdown of government spending in real terms shows that we have seen real increases, although it will not have felt that way to university-based scientists depending on research council and block grant funding. The increase the plot shows at the introduction of UKRI partly reflects an accounting artefact, by which spending in InnovateUK was shifted out of the BEIS departmental budget into UKRI, but in addition there was a genuine uplift through programmes such as the “Industrial Strategy Challenge Fund”. These figures also include a notional sum for the UK’s contribution to the EU research programmes. In the future, assuming the question of the UK’s association with the Horizon programme is finally resolved satisfactorily, contributions to the EU research programmes will continue to be included in total R&D investment, as in my opinion they should be. The uplift we’ve seen in the current year includes a contribution for EU programmes, as well as some substantial increases in R&D funding by government departments, especially the Ministry of Defence (departmental spending outside UKRI is dominated by the MoD and the Department of Health and Social Care).

Total government spending on R&D from 2008 to 2019 as recorded by ONS, in real terms. 2021 is announced commitment. UKRI figure excludes Research England, but includes InnovateUK funding, previously recorded as department spending in BEIS (here included in “rest of government”). Research England is included in “HE research block grants”, which also includes university research funding from Devolved Administrations and, pre-2018, HEFCE. Data: Research and development expenditure by the UK government: 2019. ONS April 2021

To summarise, it is true to say that government spending is higher now in real terms than it has been for more than a decade. But this still doesn’t look like a trajectory heading towards a doubling, and £22 billion looks a long way away.

Of course, these figures are corrected for inflation; we will see a more flattering picture if we neglect this. And there might be some flexibility over the timescale for achieving the £22 billion target. My next plot shows the overall trajectory of government spending on R&D in cash terms, without an inflation correction. The red line indicates the path required to achieve £22 billion by the original date, 2024. This would require a substantial increase on the rate of growth we have seen in recent years. But one might argue that the pandemic has forced a slippage of the timescale, which might be aligned with the 2027 date for achieving the overall 2.4% of GDP R&D intensity target. This does look achievable at the current rate of nominal growth – and of course the pulse of inflation we’re likely to see now will make achieving £22 billion even easier (albeit at the cost of less real research output).

Total government spending on R&D from 2008 to 2019 as recorded by ONS, in current money (non-inflation corrected) terms. 2021 is announced commitment. Data: Research and development expenditure by the UK government: 2019. ONS April 2021
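As a rough sketch of what these timescales imply (my own back-of-envelope calculation, using the £14.9 billion 2021 commitment and the £22 billion target quoted earlier, not figures from the plots themselves):

```python
# Nominal growth rates implied by a £22bn target, starting from the
# £14.9bn committed for 2021 (cash terms, compound annual growth).

start, target = 14.9, 22.0

for end_year in (2024, 2027):
    years = end_year - 2021
    cagr = (target / start) ** (1 / years) - 1
    print(f"£{start}bn -> £{target}bn by {end_year}: {cagr:.1%} a year")
```

Hitting £22 billion by 2024 would need roughly 14% a year in cash terms; slipping to 2027 needs under 7% a year, which is why the relaxed timescale looks achievable at recent rates of nominal growth.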

But there is another possibility for modifying the figures that may well have occurred to someone on Horse Guards Road: to include the cost of R&D tax credits. These are subsidies offered by the government for research carried out in business which, depending on circumstances, can be taken as a reduction in corporation tax liability or a direct cash payment from government (the latter being particularly important for early stage start-ups that aren’t yet generating a profit). This, then, is another cost incurred by the government related to R&D – not direct R&D spending, but forgone tax income and cash payments. It’s perhaps not widely enough appreciated how much more generous this scheme has become over the last decade, and there is a separate discussion to be had about how effective these schemes are, and about the value for money of this very substantial cost to government.

My next plot shows the effect of adding the cost of R&D tax credits to direct government spending on R&D. The effect is really substantial – and would seem to put the £22 billion target within reach without the government having to spend any more money on R&D.

Government spending on R&D from 2008 to 2019, as above, together with the cost of R&D tax credits (including an HMRC estimate for the total cost in 2019). The estimate for 2021 combines the committed figure for government R&D with the assumption that the cost of R&D tax credits remains at its 2019 real-terms value. All values corrected for inflation and expressed in 2019 £s. Uncorrected for the mismatch between the fiscal years by which R&D tax credit data are collected and the calendar years used for R&D spending.

We can make the situation look even better by not accounting for inflation. This is illustrated in the next figure.

Government spending on R&D from 2008 to 2019, as above, together with the cost of R&D tax credits (including an HMRC estimate for the total cost in 2019). The estimate for 2021 combines the committed figure for government R&D with the assumption that the cost of R&D tax credits remains at its 2019 real-terms value. All values in nominal cash terms (uncorrected for inflation). Uncorrected for the mismatch between the fiscal years by which R&D tax credit data are collected and the calendar years used for R&D spending.

This not only gets us very close to £22 billion spending ahead of target, but also might allow one to argue that the Prime Minister’s claim of doubling government investment in R&D had been fulfilled, taking the timescale to be a decade of Conservative-led governments.

What would be wrong with this? It would actually lock in a real terms erosion of spending on R&D, and it would not help deliver the more enduring target of raising the R&D intensity of the UK economy to 2.4% of GDP by 2027, most recently reasserted in the HM Treasury document “Build back better: Our Plan for Growth”. That target is for combined business and public sector funding – but redefining government R&D spending to include the government’s subsidy of R&D in business simply moves spending from one heading to another, without increasing the total. It would stretch to breaking point the larger claim that the government is “turning this country into a science superpower”.

Currently, while the UK’s science base has many strengths – and is a potential source of economic and strategic advantage – it is too small for an economy of the UK’s size. If we measure the R&D intensity of the economy in terms of total investment – public and private – as a proportion of GDP, the UK is a long way behind the leaders, as my next plot shows. By R&D intensity, the UK is not a leader, or even an average performer, but in the third rank, between Italy and Czechia.


R&D intensity of selected nations by sector of performance, 2018. Data: Eurostat.

This is widely recognised by policy makers, and is the logic behind the 2.4% R&D intensity target that was in the Conservative manifesto, and has since been frequently reasserted, for example in the Treasury’s “Plan for Growth”. In one sense, this is still not a very demanding target – assuming (unrealistically) that the R&D intensity of other countries remains the same, far from elevating the UK to be one of the leaders, it would leave it just behind Belgium.

How much would the UK’s R&D spending need to increase to meet the 2.4% R&D intensity target? This, of course, depends on what happens to the denominator – GDP – in the meantime. Based on the OBR’s March estimates, we might expect GDP in 2027 to be around £2.4 trillion, up from £2.22 trillion in 2019, before the effects of the pandemic (both in 2019 £s). So to meet the target, total R&D spending – public and private – would need to be £58 billion.

This compares to total R&D spending in 2019 of £38.5 billion, split almost exactly 1/3:2/3 between the public and private sectors (this figure includes expenditure associated with the R&D tax credits, but this appears here on the private sector side). So to achieve the 2.4% target, there needs to be a 50% real terms increase in both public and private sector R&D relative to 2019. If the same split between public and private is maintained, we’d need £19.4 billion public and £38.8 billion business R&D.
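This arithmetic can be checked on the back of an envelope (my own sketch using the rounded figures quoted above; the slightly different £19.4 billion and £38.8 billion in the text presumably reflect rounding at a different stage):

```python
# Back-of-envelope check of the 2.4% R&D intensity target,
# using the figures quoted above (all in £bn, 2019 prices).

gdp_2027 = 2400           # OBR-based projection of 2027 GDP
intensity_target = 0.024  # the government's 2.4% target
total_2019 = 38.5         # total UK R&D spending in 2019

required_total = gdp_2027 * intensity_target   # ~ £58bn
uplift = required_total / total_2019 - 1       # ~ 50% real-terms increase

# keep the 2019 split of roughly 1/3 public : 2/3 private
required_public = required_total / 3
required_private = 2 * required_total / 3

print(f"total: £{required_total:.0f}bn, uplift: {uplift:.0%}")
print(f"public: £{required_public:.1f}bn, private: £{required_private:.1f}bn")
```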

There are two points to make about this. Firstly, we note that the estimate for the required public R&D is actually lower than the £22 billion government promise. This reflects the fact that the 2.4% target is less demanding than it used to be, since we now expect to be poorer in 2027 than we thought in 2019. Remember, though, that this is a figure in 2019 £s, so inflation will push up the corresponding nominal value. In fact, a very rough estimate based on the OBR’s inflation projections suggests that £22 billion nominal by 2027 for the government’s investment in R&D could be about right. On the other hand, the OBR’s March 2021 forecast was quite pessimistic compared to other forecasters. If GDP recovers more fully from the pandemic than the OBR thought, then R&D spending will need to be higher to meet the target – but it will be easier to fund given a larger economy.

Secondly, note that this split between public and private sector is based on where the research is carried out, not who funds it. So the subsidy provided by the government in the form of R&D tax credits appears on the private sector side of the ledger. Unless the split between public and private dramatically changes over the next few years, then the £22 billion the government needs to put in can’t include the cost of R&D tax credits.

Targets are important, but they are a means to an end. The Plan for Growth identifies “Innovation” as one of three pillars of growth. A return to economic growth, after a decade of stagnation of productivity growth and living standards, capped by a devastating pandemic, is crucial in itself. But innovation also directly underpins the government’s three main priorities.

The first of these is the transition to net zero. This requires a wrenching change of the whole material base of our economy; it needs innovation to drive costs down – and to make sure that it is the UK’s economy and the UK’s communities that benefit from the new economic opportunities that this transition will bring.

The second is “levelling up”, which, to be more than a slogan, should involve a sustained attempt to increase the productivity of the UK’s lagging cities and regions through increasing innovation and skills. This won’t be possible without correcting the imbalance in government R&D investment between the prosperous Greater Southeast and the lagging rest of the country. We shouldn’t put at risk the outstanding innovation economies we do have in places like Cambridge, so this needs to involve new money at a scale that will make a material difference.

Finally, we need to rethink the UK’s position in the world. Part of this is about making sure the UK remains an attractive destination for inward investment by companies at the global technological frontier, and that the UK’s industries can produce internationally competitive products and services for export. But the world has also got more dangerous, and the modernisation of the UK’s armed forces and security agencies needs to be underpinned by R&D.

The government will not be able to achieve these goals without a real increase in R&D spending. That does not mean, however, that we should just do more of the same. We need more emphasis on the “development” half of R&D, we need to coordinate the strategic research goals of the government better across different departments, we need to support the development of internationally competitive R&D clusters outside the southeast, all the while sustaining and growing our outstanding discovery science.

So, could the government game its £22 billion R&D promise, by reclassifying the cost of the R&D tax credit as government investment in R&D and ignoring the effect of inflation? Yes, it could.

Should it? No, it should not. To do so would put the 2.4% R&D intensity target out of reach. It would seriously undercut the government’s main priorities – net zero, “levelling up”, and keeping the UK secure in a dangerous world. And it would put paid to any pretensions the UK might have of recovering “science superpower” status.

Instead, I believe there needs to be realism and honesty – both about the difficulty of the post-pandemic fiscal situation, and about the need for genuine increases in government spending on R&D if the government’s long term economic and climate goals are to be met. £22 billion is about the right figure to aim for, but if the extraordinary circumstances of the pandemic make it difficult to achieve this on the original timescale, the government should set out a new one, with a fully developed timetable showing how the increase will be delivered in time to meet the 2.4% R&D intensity target in 2027. Early increases should focus on the areas of highest priority – with, perhaps, the highest priority of all being given to net zero. Climate change will not wait.

Bleach and the industrial revolution in textiles

Sunshine is the best disinfectant, they say – but if you live in Lancashire, you might want to have some bleach as a backup. Sunshine works to bleach clothes and hair too – and before the invention of the family of chlorine based chemicals that are commonly known as bleach, the Lancashire textile industry – like all other textile industries around the world – depended on sunshine to whiten the naturally beige colour of fabrics like cotton and linen. It’s this bright whiteness that has always been prized in fine fabrics, and is a necessary precondition for creating bright colours and patterns through dyeing.

As the introduction of new machinery to automate spinning and weaving – John Kay’s flying shuttle, the water frame, and Crompton’s spinning mule – hugely increased the potential output of the textile industry, the need to rely on Lancashire’s feeble sunshine to bleach fabrics, in complex processes that could take weeks, became a significant bottleneck. The development of chemical bleaches was a response to this: a significant ingredient of the industrial revolution that is perhaps not appreciated widely enough, and an episode that demonstrates the way scientific and industrial developments went hand-in-hand at the beginning of the modern chemical industry.

It’s not obvious now when one looks at the clothes in 17th and 18th century portraits, with their white dresses, formal shirts and collars, that the brilliant white fabrics that were the marker of their rich and aristocratic subjects were the result of a complex and expensive set of processes. Bleaching at the time involved a sequence of repeated steepings in water, boiling in lye, soaping, soaking in buttermilk (and towards the end of this period, dilute sulphuric acid) – together with extensive “grassing” – spreading the fabrics out in the sun in “bleachfields” for periods of weeks. These expensive and time-consuming processes were a huge barrier to the expansion of the textile industry, and it was in response to this barrier that chemical bleaches were developed in the late 18th century.

The story begins with the important French chemist Claude-Louis Berthollet, who in 1785 discovered and characterised the gas we now know as chlorine, synthesising it through the oxidation of hydrochloric acid by manganese dioxide. His discovery of what he called “dephlogisticated muriatic acid” [1] was published in France, but news of it quickly reached England, not least through direct communication by Berthollet to the Royal Society in London. Only a year later, the industrialist Matthew Boulton and his engineer partner James Watt were visiting Paris; they met Berthollet, and were able to see his initial experiments showing the effect chlorine had on colours, either using the gas directly or in solution in water. The potential of the new material to transform the textiles industry was obvious both to Berthollet and his visitors from England.
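In modern notation (my gloss – Berthollet, of course, described it in phlogiston terms, as the footnote explains), the reaction is:

```latex
% Chlorine generation: manganese dioxide oxidises hydrochloric acid
\mathrm{MnO_2 + 4\,HCl \longrightarrow MnCl_2 + Cl_2\uparrow + 2\,H_2O}
```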

James Watt had a particular reason to be interested in the process – his father-in-law, James McGrigor, owned a bleaching works in Glasgow. Watt had soon developed an improvement to the process for making chlorine; instead of using hydrochloric acid, he used sulphuric acid and salt, exploiting the new availability and relative low cost of sulphuric acid since the development of the lead chamber process in 1746 by John Roebuck and Samuel Garbett. In 1787 he sent a bottle of his newly developed bleach to his father-in-law, and arranged for a ton of manganese dioxide [2] to be sent from Bristol to Glasgow to begin large scale experiments. Work was needed to develop a practical regime for bleaching different fabrics, to find methods to assay the bleaching power of the solutions, and to develop the apparatus of this early chemical engineering – what to make the vessels out of, how to handle the fabric. By the end of the year, with the help of Watt, McGrigor had successfully scaled up the process to bleach 1500 yards of linen.
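Watt’s variant can be summarised schematically – again in modern notation that is my own rendering rather than anything in the original sources – as a one-pot reaction in which the salt and sulphuric acid generate the hydrochloric acid in situ:

```latex
% Watt's route: salt and sulphuric acid supply the HCl in situ
\mathrm{2\,NaCl + 2\,H_2SO_4 + MnO_2 \longrightarrow Na_2SO_4 + MnSO_4 + Cl_2\uparrow + 2\,H_2O}
```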

Meanwhile, two Frenchmen – Antoine Bourboulon de Boneuil and Matthew Vallet – had arrived in Lancashire from Paris, where they had developed a proprietary bleaching solution – “Lessive de Javelle” – which built on Berthollet’s work (without his involvement or approval). This probably used the method of dissolving the chlorine in a solution of sodium hydroxide, which absorbs more of the gas than pure water. This produces a solution of sodium hypochlorite, like the everyday “thin bleach” of today’s supermarket shelves. In 1788 Bourboulon petitioned Parliament to grant them an exclusive 28-year licence for the process (a longer period than a regular patent). This caused some controversy and was strongly opposed by the Lancashire bleachers; it also placed James Watt in an awkward position. Naturally he opposed the proposal, but he didn’t want to do so too publicly, as his own, very broad, patent (with Matthew Boulton) for the steam engine had been extended by Act of Parliament in 1775, leading to lengthy litigation. Nonetheless, after the intervention of Berthollet himself and the growing awareness of the new science of chemical bleaching in the industrial community, Bourboulon succeeded in obtaining patents only for relatively restricted aspects of his process, which were easily evaded by other operations.
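In modern terms (my addition), dissolving chlorine in alkali is a disproportionation, yielding the hypochlorite that does the bleaching:

```latex
% Chlorine disproportionates in sodium hydroxide solution
\mathrm{Cl_2 + 2\,NaOH \longrightarrow NaOCl + NaCl + H_2O}
```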

Claude-Louis Berthollet’s position in this was important, as his priority in discovering the basic principles of chlorine bleaching was universally accepted. But Berthollet was an exponent of the principles of what would now be called “open science” and consciously repudiated any opportunities to profit from his inventions – as he wrote to James Watt, “I am very conscious of the interest that you take in a project which could be advantageous to me; but to return to my character, I have entirely renounced involvement in commercial enterprises. When one loves science, one has little need of fortune, and it is so easy to expose one’s happiness by compromising one’s peace of mind and embarrassing oneself”. Watt was clearly frustrated by Berthollet’s tendency to publish the results of his experiments, which often included rediscovering the improvements that Watt himself had made.

But by this stage any secrets were out, and other Manchester industrialists, together with a new breed of what might be called consulting chemists who kept up with the latest scientific developments in France and England, were experimenting and developing the processes further. Their goals included driving down the cost, increasing the scale of operations, and particularly improving reliability – it was all too easy to ruin a batch of cloth by exposing it for too long or using too strong a bleaching agent, or to poison the workmen with a release of chlorine gas. Indeed, one shudders to think about the health and safety record and environmental impact of these early developments. Even by 1795 it still wasn’t always clear that the new methods were cheaper than the old ones, particularly in the case of linen, which was significantly more difficult to bleach than cotton. Despite the early introduction of “Lessive de Javelle”, the stability of bleaching fluids was a problem, and most bleachers preferred to brew up their own as needed, guided by plenty of practical experience and chemical knowledge.

Bleaching probably could never be made entirely routine, but the next big breakthrough was to create a stable bleaching powder which could be traded, stored and transported, and could be incorporated into a standardised process. Some success had already been achieved by absorbing chlorine in lime. The definitive process to make “bleaching powder” by absorbing chlorine gas in damp slaked lime (calcium hydroxide), producing a mixture of calcium hypochlorite and calcium chloride, was probably developed by the Scottish chemist Charles Macintosh (more famous as the inventor of the eponymous raincoat). The benefits of this discovery, though, went to Macintosh’s not wholly trustworthy business partner, Charles Tennant, who patented the material in 1799.
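The bleaching powder reaction, written in modern notation (my addition) to show the mixture of calcium hypochlorite and calcium chloride described above, is:

```latex
% Chlorine absorbed in damp slaked lime gives bleaching powder
\mathrm{2\,Cl_2 + 2\,Ca(OH)_2 \longrightarrow Ca(OCl)_2 + CaCl_2 + 2\,H_2O}
```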

What are the lessons we can learn from this episode? It underlines the importance of industrial chemistry, an aspect of the industrial revolution that is perhaps underplayed. It’s a story in which frontier science was being developed at the same time as its industrial applications, with industrialists understanding the importance of being linked in with international networks of scientists, and organisations like the Manchester Literary and Philosophical Society operating as important institutions for diffusing the latest scientific results. It exposes the tensions we still see between open science and the protection of intellectual property, and the questions of who materially benefits from scientific advances.

Beyond this episode, the textile industry continued to be a major driver of industrial chemistry – the late 18th century saw the introduction of the Leblanc process for making soda-ash, and the nineteenth century the massive impact of artificial dyes. These developments influence the industrial geography of England’s northwest to this day.

[1] When Berthollet discovered chlorine, it was in the heyday of the phlogiston theory, so, not appreciating that what he’d discovered was a new gaseous element, he called it “dephlogisticated muriatic acid” (muriatic acid being an old name for hydrochloric acid). As Lavoisier’s oxygen theory became more widely accepted, the gas became known as “oxymuriatic acid”. It was only in 1810 that Humphry Davy showed that chlorine contains no oxygen, and is in fact an element in its own right. Phlogiston has a bad reputation as a dubious pre-scientific relic, but it was a rational way of beginning to think about oxidation and reduction, and the nature of heat, giving a helpful guide to experiments – including the ones that eventually showed that the concept was unsustainable.

[2] It’s interesting to ask why there was an existing trade in manganese dioxide. This mineral had been used since prehistory as a black pigment, and is unusual as a strong oxidising agent that is widely found in nature. In Derbyshire it occurs as an impure form known to miners as “wad”; when mixed with linseed oil (as you would do to make a paint) it occasionally has the alarming property of spontaneously combusting. This was recorded in a 1783 communication to the Royal Society by the renowned potter Josiah Wedgwood, who ascribed the discovery to a Derby painter called Mr Bassano, and reported seeing experiments showing this property at the house of the President of the Royal Society, Sir Joseph Banks. Spontaneous combustion isn’t a great asset for a paint, but at lower loadings of manganese dioxide a less dramatic acceleration of the oxidation of linseed oil is useful in making varnish harden more quickly, and it was apparently this property that led to its widespread use in paints and varnishes, particularly for ships in the great expansion of the British Navy at the time. Purer deposits of manganese dioxide were found in Devon, and subsequently in North Wales, as the bleach industry increased demand for the mineral further. The material gained even more importance following Robert Mushet’s work on iron-manganese alloys – it was the incorporation of small amounts of manganese that made the Bessemer process for the first truly mass produced steel viable.

[3] Sources: this account relies heavily on “Science and Technology in the Industrial Revolution”, by A. E. Musson and E. Robinson. For wad, “Derbyshire Wad and Umber”, by T.D. Ford, Mining History 14 p39.

Edited 23/8/21 to make clear that Bourboulon’s petition to Parliament was for a longer period of exclusivity than a standard patent. My thanks to Anton Howes for pointing this out.

Reflections on the UK’s new Innovation Strategy

The UK published an Innovation Strategy last week; rather than a complete summary and review, here are a few of my reflections on it. It’s a valuable and helpful document, though I don’t think it’s really a strategy yet, if we expect a strategy to give a clear sense of a destination, a set of plans to get there and some metrics by which to measure progress. Instead, it’s another milestone in a gradual reshaping of the UK’s science landscape, following last year’s R&D Roadmap, and the replacement of the previous administration’s Industrial Strategy – led by the Department of Business, Energy and Industrial Strategy – by a Treasury driven “Plan for Growth”.

The rhetoric of the current government places high hopes on science as a big part of the UK’s future – a recent newspaper article by the Prime Minister promised that “We want the UK to regain its status as a science superpower, and in so doing to level up.” There is a pride in the achievements of UK science, not least in the recent Oxford Covid vaccine. And yet there is a sense of potential not fully delivered. Part of this is down to investment – or the lack of it: as the PM correctly noted: “this country has failed for decades to invest enough in scientific research, and that strategic error has been compounded by the decisions of the UK private sector.”

Last week’s strategy focused, not on fundamental science, but on innovation. As the old saying goes, “Research is the process of turning money into ideas, innovation is turning ideas into money” – and, it should be added, other desirable outcomes for the nation and society – the necessary transition to zero carbon energy, better health outcomes, and the security of the realm in a world that feels less predictable. But the strategy acknowledges that this process hasn’t been working – we’ve seen a decline in productivity growth that’s unprecedented in living memory.

This isn’t just a UK problem – the document refers to an apparent international slowing of innovation in pharmaceuticals and semiconductors. But the problem is worse in the UK than in comparator nations, and the strategy doesn’t shy away from connecting that with the UK’s low R&D intensity, both public and private: “One key marker of this in the UK is our decline in the rate of growth in R&D spending – both public and private. In the UK, R&D investment declined steadily between 1990 and 2004, from 1.7% to 1.5% of GDP, then gradually returned to be 1.7% in 2018. This has been constantly below the 2.2% OECD average over that period.”

One major aspiration that the government is consistent about is the target to increase total UK investment in R&D (public and private) to reach 2.4% of GDP by 2027, from its current value of about 1.7%. As part of this there is a commitment to increase public spending from £14.9 bn this year to £22 bn – by a date that’s not specified in the Innovation Strategy. An increase of this scale should prompt one to ask whether the institutional landscape where research is done is appropriate, and the document announces a new review of that landscape.
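As a rough sanity check of what these commitments imply, here is a back-of-envelope calculation; the GDP figure is my own assumption, not a number from the Innovation Strategy:

```python
# Back-of-envelope: what the 2.4% R&D intensity target implies.
# Assumption (mine, not from the strategy): UK GDP of roughly £2.2 trillion.
gdp_bn = 2_200            # £bn, assumed
target_intensity = 0.024  # 2.4% of GDP by 2027
public_now_bn = 14.9      # £bn, current public R&D spend
public_target_bn = 22.0   # £bn, committed public spend (date unspecified)

total_needed_bn = gdp_bn * target_intensity             # total R&D implied by the target
private_needed_bn = total_needed_bn - public_target_bn  # private share required

print(f"Total R&D implied by the 2.4% target: ~£{total_needed_bn:.0f}bn")
print(f"Private R&D needed alongside £22bn public: ~£{private_needed_bn:.0f}bn")
print(f"Public spending increase: ~{100 * (public_target_bn / public_now_bn - 1):.0f}%")
```

On these assumptions, even the full £22 bn of public money would need to lever in roughly £30 bn a year of private R&D, which underlines why the question of whether public spending is crowding in private investment matters so much.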

Currently the UK’s public research infrastructure is dominated by universities to a degree that is unusual amongst comparator nations. I’m glad to see that the Innovation Strategy doesn’t indulge in what seems to be a widespread urge in other parts of government to denigrate the contribution of HE to the UK’s economy, noting that “in recent years, UK universities have become more effective at attracting investment and bringing ideas to market. Their performance is now, in many respects, competitive with the USA in terms of patents, spinouts, income from IP and proportion of industrial research.” But it is appropriate to ask whether other types of research institution, with different incentive structures and funding arrangements, might be needed in addition to – and to make the most of – the UK’s academic research base.

There are, though, a couple of fundamentally different types of non-university research institution. On the one hand, there are institutions devoted to pure science, where investigators have maximum freedom to pursue their own research agendas. Germany’s Max Planck Institutes offer one model, while the Howard Hughes Medical Institute’s Janelia Research Campus, in the USA, has some high profile admirers in UK policy circles. On the other hand, there are mission-oriented institutes devoted to applied research, like the Fraunhofer Institutes in Germany, the Industrial Technology Research Institute in Taiwan, and IMEC (the Interuniversity Microelectronics Centre) in Belgium. The UK has seen a certain amount of institutional evolution in the last decade already, with the establishment of the Turing Institute, the Crick Institute, the Henry Royce Institute, the Rosalind Franklin Institute, and the network of Catapult Centres, to name a few. It’s certainly timely to look across the landscape as it is now to see the extent to which these institutions’ missions and the way they fit together in a wider system have crystallised, as well as to ask whether the system as a whole is delivering the outcomes we want as a society.

One inescapable factor about the institutional landscape is seriously underplayed: what we have now is a function of the wider political and economic landscape, and of the way that has changed over the decades. For example, there’s a case study in the Innovation Strategy of Bell Laboratories in the USA. This was certainly a hothouse of innovation in its heyday, from the 1940’s to the 1980’s – but that reflected its unique position, as a private sector laboratory that was sustained by the monopoly rents of its parent. But that changed with the break-up of the Bell System in the 1980’s, itself a function of the deregulatory turn in US politics at the time, and the institution is now a shadow of its former self. Likewise, it’s impossible to understand the drastic scaling back of government research laboratories in the UK in the 1990’s without appreciating the dramatic policy shifts of governments in the 80’s and 90’s. A nation’s innovation landscape reflects wider trends in political economy, and that needs to be understood better and the implications made more explicit.

Alongside the Innovation Strategy, an “R&D People and Culture Strategy” was published. This contains lots of aspirations that few would disagree with, but not much in the way of concrete measures to fix things. To connect this with the previous discussion, I would have liked to have seen much more discussion of the connection between the institutional arrangements we have for research, the incentive structure produced by those arrangements, and the culture that emerges. It is reasonable to complain that people don’t move as easily from industry to academia and back as they used to, but it needs to be recognised that this is because the two have drifted apart; with only a few exceptions, the short term focus of industry – and the high pressure to publish on academics – makes this mobility more difficult. From this perspective, one question we should ask about our institutional landscape is whether it is the right one to allow the people in the system to flourish and fulfil their potential.

We shouldn’t just ask in what kind of institutions research is done, but also where those institutions are situated geographically. The document contains a section on “Levelling Up and innovation across the UK”, reasserting as a goal that “we need to ensure more places in the UK host world-leading and globally connected innovation clusters, creating more jobs, growth and productivity in those areas.” In the context of the commitment to increase the R&D intensity of the economy, “we are reviewing how we can increase the proportion of total R&D investment, public and private, outside London, the South East, and East of England.”

The big news here, though, is that the promised “R&D and Place Strategy” has been postponed and rolled into the forthcoming “Levelling Up” White Paper, expected in the autumn. If this does take the opportunity of considering in a holistic way how investments in transport, R&D, skills and business support can be brought together to bring about material changes in the productivity of cities and regions that currently underperform, that is not a bad thing. I was a member of the advisory group for the R&D and Place strategy, so I won’t dwell further on this issue here, beyond saying that I recognise many of the issues and policy proposals which that body has discussed, so I await the final “Levelling Up” White Paper with interest.

A strategy does imply some prioritisation, and there are a number of different ways in which one might define priorities. The Coalition Government defined 8 Great Technologies; the 2017 Industrial Strategy was built around “Grand Challenges” and “Sector Deals” covering industrial sectors such as Automotive and Aerospace. The current Innovation Strategy introduces seven “technology families” and a new “Innovation Missions Programme”.

It’s interesting to compare the new “seven technology families” with the old “eight great technologies”. For some the carry over is fairly direct, albeit with some wording changes reflecting shifting fashions – robotics and autonomous systems becomes robotics and smart machines, energy and its storage becomes energy and environment technologies, advanced materials and nanotechnology becomes advanced materials and manufacturing, synthetic biology becomes engineering biology. At least two of the original 8 Great Technologies always looked more like industry sectors than technologies – satellites and commercial applications of space, and agri-science. Big data and energy-efficient computing has evolved into AI, digital and advanced computing, reflecting a genuine change in the technology landscape. Regenerative medicine looks like it’s out of favour, replaced in the biomedical area by bioinformatics and genomics. Quantum technology became appended to the “8 great” a year or two later, and this is now expanded to electronics, photonics and quantum.

Interesting though the shifts in emphasis may be, the key issue is the degree to which these high level priorities are translated into different outcomes in institutions and funding programmes. How, for example, are these priority technology families reflected in advisory structures at the level of UKRI and the research councils? And, most uncomfortable of all, a decision to emphasise some technology families must imply, if it has any real force, a corresponding decision to de-emphasise some others.

One suspects that organisation through industrial sectors is out of favour in the new world where HM Treasury is in the driving seat; for HMT a focus on sectors is associated with incumbency bias, with newer fast-growing industries systematically under-represented, and producer capture of relevant government departments and agencies, leading to a degree of policy attention that reflects a sector’s lobbying effectiveness rather than its importance to the economy.

Despite this colder new environment, the ever opportunistic biomedical establishment has managed to rebrand their sector deal as a “Life Sciences Vision”. The sector lens remains important, though, because industrial sectors do face their own individual issues, all the more so at a time of rapid change. Successfully negotiating the transition to electric vehicles represents an existential challenge to the automotive sector, while for the persistently undervalued chemicals sector, withdrawal from the EU regulatory framework – REACH – threatens substantial extra costs and frictions. For that energy intensive industry, the transition to net zero presents both a challenge and a huge set of new potential markets, as the supply chain for new clean-tech industries like batteries is developed.

One very salutary clarification has emerged as a side-effect of the pandemic. The vaccination programme can be held up as a successful exemplar of an “innovation mission”. This emphasises that a “mission” shouldn’t just be a vague aspiration, but a specific engineering project with a product at the end of it – with a matching social infrastructure developed to ensure that the technology is implemented to deliver the desired societal outcome. Thought of this way, a mission can’t just be about discovery science – it may need the development of new manufacturing capacity, new ICT systems, repurposing of existing infrastructures. Above all, a mission needs to be executed with speed, decisiveness, and a willingness to spend money in more than homeopathic quantities, characteristics that aren’t strongly associated with recent UK administrations.

What further innovation missions can we expect? It isn’t characterised in these terms, but the project to build a prototype fusion power reactor – the “Spherical Tokamak for Energy Production” – could be thought of as another one. By no means guaranteed to succeed, it would be a significant development if it did work, and in the meantime it will probably support the spinning out of a number of potentially important technologies for other applications, such as new materials for extreme environments, and further developments in robotics.

Who will define future “innovation missions”? The answer seems to be the new National Science and Technology Council, to be chaired by the Prime Minister and run by the government’s Chief Scientific Advisor, Sir Patrick Vallance, given an expanded role and an extra job title – National Technology Adviser. In the words of the Prime Minister, “It will be the job of the new National Science and Technology Council to signal the challenges – perhaps even to specify the breakthroughs required – and we hope that science, both public and commercial, will respond.”

But there’s still a lot to fill in here about the mechanisms of how this will work. How will the NSTC make its decisions – who will be informing those discussions? And how will those decisions be transmitted to the wider innovation ecosystem – government departments and their delivery agencies like UKRI, and its component research councils and innovation agency InnovateUK? There is a new system emerging here, but the way it will be wired is as yet far from clear.

Fighting Climate Change with Food Science

The false claim that US President Biden’s Climate Change Plan would lead to hamburger rationing has provided a predictably useful attack line for his opponents. But underlying this further manifestation of the polarisation of US politics, there is a real issue – producing the food we eat does produce substantial greenhouse gas emissions, and a disproportionate amount of these emissions come from eating the meat of ruminants like cattle and sheep.

According to a recent study, US emissions from the food system amount to 5 kg a person a day, and 47% of this comes from red meat. Halving the consumption of animal products would reduce the USA’s greenhouse gas emissions by about 200 million tonnes of CO2 equivalent, a bit more than 3% of the total value. In the UK, the official Climate Change Committee recommends that red meat consumption should fall by 20%, as part of the trajectory towards net zero greenhouse gas emissions by 2050, with a 50% decrease necessary if progress isn’t fast enough in other areas. At the upper end of the range of possibilities, a complete global adoption of completely animal-free – vegan – diets has been estimated to reduce total global greenhouse gas emissions by 14%.
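These quoted figures can be cross-checked with a rough back-of-envelope calculation; the US population figure below is my own assumption, while the other numbers are those cited above:

```python
# Rough consistency check of the quoted US food-emissions figures.
population = 330e6         # assumed US population
per_capita_daily_kg = 5.0  # kg CO2e per person per day from the food system
red_meat_share = 0.47      # fraction of food emissions from red meat
saving_mt = 200.0          # Mt CO2e saved by halving animal-product consumption
saving_fraction = 0.03     # "a bit more than 3%" of total US emissions

# 1 Mt = 1e9 kg, so divide annual kilograms by 1e9 to get megatonnes.
food_total_mt = population * per_capita_daily_kg * 365 / 1e9
red_meat_mt = food_total_mt * red_meat_share
implied_us_total_mt = saving_mt / saving_fraction

print(f"Annual US food-system emissions: ~{food_total_mt:.0f} Mt CO2e")
print(f"Of which red meat: ~{red_meat_mt:.0f} Mt CO2e")
print(f"Implied total US emissions: ~{implied_us_total_mt:.0f} Mt CO2e")
```

The implied total of roughly 6,700 Mt CO2e is in line with commonly quoted figures for total US greenhouse gas emissions, so the numbers hang together.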

The political reaction to the false story about Biden’s climate change plan illustrates why a global adoption of veganism isn’t likely to happen any time soon, whatever its climate and other advantages might be. But we should be trying to reduce meat consumption, and it’s worth asking whether the development of better meat substitutes might be part of the solution. We are already seeing “plant-based” burgers in the supermarkets and fast food outlets, while more futuristically there is excitement about using tissue culture techniques to produce in vitro, artificial or lab-grown meat. Is it possible that we can use technology to keep the pleasure of eating meat while avoiding its downsides?

I think that simulated meat has huge potential – but that this is more likely to come from the evolution of the currently relatively low-tech meat substitutes rather than the development of complex tissue engineering approaches to cultured meat [1]. As always, economics is going to determine the difference between what’s possible in principle and what is actually likely to happen. But I wonder whether relatively small investments in the food science of making meat substitutes could yield real dividends.

Why is eating meat important to people? It’s worth distinguishing three reasons. Firstly, meat does provide an excellent source of nutrients (though with potential adverse health effects if eaten to excess). Secondly, it’s a source of sensual pleasure, with a huge accumulated store of knowledge and technique about how to process and cook it to produce the most delicious results. Finally, eating meat is freighted with cultural, religious and historical significance. What kind of meat one’s community eats (or indeed, if it eats meat at all), when families eat or don’t eat particular meats, all of these have deep historical roots. In many societies access to abundant meat is a potent signifier of prosperity and success, both at the personal and national level. It’s these factors that make calls for people to change their diets so politically sensitive to this day.

So how realistic is it to imagine replacing meat with a synthetic substitute? The first issue is easy – replacing meat with foods of plant origin of equivalent nutritional quality is straightforward. The third issue is much harder – cultural change is difficult, and some obvious ways of eliminating meat run into cultural problems. A well-known vegetarian cookbook of my youth was called “Not just a load of old lentils” – this was a telling, but not entirely successful attempt to counteract an unhelpful stereotype head-on. So perhaps the focus should be on the second issue. If we can produce convincing simulations of meat that satisfy the sensual aspects and fit into the overall cultural preconceptions of what a “proper” meal looks like – in the USA or the UK, burger and fries, or a roast rib of beef – maybe we can meet the cultural issue halfway.

So what is meat, and how can we reproduce it? Lean meat consists of about 75% water, 20% protein and 3% fat. If it was just a question of reproducing the components, synthetic meat would be easy. An appropriate mixture of, say, wheat protein and pea protein (a mixture is needed to get all the necessary amino acids), some vegetable oil, and some trace minerals and vitamins, dispersed in water would provide all the nutrition that meat does. This would be fairly tasteless, of course – but given the well developed modern science of artificial flavours and aromas, we could fairly easily reproduce a convincing meaty broth.

But this, of course, misses out the vital importance of texture. Meat has a complex, hierarchical structure, and the experience of eating it reflects the way that structure is broken down in the mouth and the time profile of the flavours and textures it releases. Meat is made from animal muscle tissue, which develops to best serve what that particular muscle needs to do for the animal in its life. The cells in muscle are elongated to make fibres; the fibres bundle together to create the grain that’s familiar when we cut meat, but they also need to incorporate the connective tissue that allows the muscle to exert forces on the animal’s bones, and the blood-carrying vascular system that conveys oxygen and nutrients to the working muscle fibres. All of this influences the properties of the tissue when it becomes meat. The connective tissue is dominated by the protein material collagen, which consists of long molecules tightly bound together in triple helices.

Muscles that do a lot of work – like the lower leg muscles that make up the beef cuts known as shin or leg – have a lot of connective tissue. These cuts of meat are very tough, but after long cooking at low temperatures the collagen breaks down; the triple helices come apart, and the separated long molecules give a silky texture to the gravy, enhanced by the partial reformation of the helical junctions as it cools. In muscles that do less work – like the underside of the loin that forms the fillet in beef – there is much less connective tissue, and the meat is very tender even without long cooking.

High temperature grilling creates meaty flavours through a number of complex chemical reactions known as Maillard reactions, which are enhanced in the presence of carbohydrates in the flour and sugar that are used for barbecue marinades. Other flavours are fat soluble, carried in the fat cells characteristic of meat from well-fed animals that develop “marbling” of fat layers in the lean muscle. All of these characteristics are developed in the animal reflecting the life it leads before slaughter, and are developed further after butchering, storage and cooking.

In “cultured” meat, individual precursor cells derived from an animal are grown in a suitable medium, using a “scaffold” to help the cells organise to form something resembling natural muscle tissue. There are a couple of key technical issues with this. The first is the need to provide the right growth medium for the cells, to provide an energy source, other nutrients, and the growth factors that simulate the chemical communications between cells in whole organisms.

In the cell culture methods that have been developed for biomedical applications, the starting point for these growth media has been sera extracted from animal sources like cows. These are expensive – and obviously can’t produce an animal free product. Serum free growth media have been developed but are expensive, and optimising, scaling up and reducing the cost of these represent key barriers to be overcome to make “cultured meat” viable.

The second issue is reproducing the vasculature of real tissue, the network of capillaries that conveys nutrients to the cells. It’s this that makes it much easier to grow a thin layer of cells than to make a thick, steak-like piece. Hence current proofs of principle of cultured meat are more likely to produce mince meat for burgers rather than whole cuts.

I think there is a more fundamental problem in making the transition from cells, to tissue, to meat. One can make a three dimensional array of cells using a “scaffold” – a network of some kind of biopolymer that the cells can attach to and which guides their growth in the way that a surface does in a thin layer. But we know that the growth of cells is influenced strongly by the mechanical stimuli they are exposed to. This is obvious at the macroscopic scale – muscles that do more work, like leg muscles, grow in a different way than ones that do less – hence the difference between shin of beef and fillet steak. I find it difficult to see how, at scale, one could reproduce these effects in cell culture in a way that produces something that looks more like a textured piece of meat rather than a vaguely meaty mush.

I think there is a simpler approach, which builds on the existing plant-based substitutes for meat already available in the supermarket. Start with a careful study of the hierarchical structures of various meats, at scales from the micron to the millimetre, before and after cooking. Isolate the key factors in the structure that produce a particular hedonic response – e.g. the size and dispersion of the fat particles, and their physical state; the arrangement of protein fibres, the disposition of tougher fibres of connective tissue, the viscoelastic properties of the liquid matrix and so on. Simulate these structures using plant derived materials – proteins, fats, gels with different viscoelastic properties to simulate connective tissue, and appropriate liquid matrices, devising processing routes that use physical processes like gelation and phase separation to yield the right hierarchical structure in a scalable way. Incorporate synthetic flavours and aromas in controlled release systems localised in different parts of the structure. All this is a development and refinement of existing food technology.

At the moment, attempting something like this, we have start-ups like Impossible Foods and Beyond Meat, with new ideas and some distinct intellectual property. There are established food multinationals, like Unilever, moving in with their depth of experience in branding, distribution and deep food science expertise. We already have products, many of which are quite acceptable in the limited market niches they are aiming at (typically minced meat for burgers and sauces). We need to move now to higher value and more sophisticated products, closer to whole cuts of meat. To do this we need some more basic food science research, drawing on the wide academic base in the life sciences, and integrating this with the chemical engineering for making soft matter systems with complex heterogenous structures at scale, often by non-equilibrium self-assembly processes.

Food science is currently rather an unfashionable area, with little funding and few institutions focusing on it (for example, the UK’s former national Institute of Food Research in Norwich has pivoted away from classical food science to study the effect of the microbiome on human health). But I think the case for doing this is compelling. The strong recent rise in veganism and vegetarianism creates a large and growing market. But it does need public investment, because I don’t think intellectual property in this area will be very easy to defend. For this reason, large R&D investments by individual companies alone may be difficult to justify. Instead we need consortia bringing together multinationals like Unilever and players further downstream in the supply chain, like the manufacturers of ready meals and suppliers to fast food outlets, together with a relatively modest increase in public sector applied research. Food science may not be as glamorous as a new approach to nuclear fusion, but it may turn out to be just as important in the fight against climate change.

[1]. See also this interesting article by Alex Smith and Saloni Shah – The Government Needs an Innovation Policy for Alternative Meats – which makes the case for an industrial strategy for alternative meats, but is more optimistic about the prospects for cell culture than I am.

The Prime Minister’s office asserts control over UK science policy

The Daily Telegraph published a significant article from the Prime Minister about science and technology this morning, to accompany a government announcement “Prime Minister sets out plans to realise and maximise the opportunities of scientific and technological breakthroughs”.

Here are a few key points I’ve taken away from these pieces.

1. There’s a reassertion in the PM’s article of the ambition to raise government spending on science from its current value of £14.9 billion to a new target of £22 bn (though no date is attached to this target), together with recognition that this needs to lever in substantially more private sector R&D spending to meet the overall target of the goal of total R&D spending – public and private – of 2.4% of GDP. The £22bn spending goal was promised in the March 2020 budget, but had since disappeared from HMT documents.

2. But there’s a strong signal that this spending will be directed to support state priorities: “It is also the moment to abandon any notion that Government can be strategically indifferent”.

3. A new committee, chaired by the Prime Minister, will be set up – the National Science and Technology Council. This will establish those state priorities: “signalling the challenges – perhaps even to specify the breakthroughs required”. This could be something like the ministerial committee recommended in the Nurse Review, which it was proposed would coordinate the government’s response to science and technology challenges right across government.

4. There is an expanded role for the Government Chief Scientific Advisor, Sir Patrick Vallance, as National Technology Advisor, in effect leading the National Science and Technology Council.

5. A new Office for Science and Technology Strategy is established to support the NSTC. This is based in the Cabinet Office – emphasising its whole-of-government remit. Presumably this supersedes, and/or incorporates, the existing Government Office for Science, which is now based in BEIS.

6. There is a welcome recognition of some of the current weaknesses of the UK’s science and innovation – the article talks about “restoring Britain’s status as a science superpower” (my emphasis), after decades of failure to invest, both by the state and by British industry: “this country has failed for decades to invest enough in scientific research, and that strategic error has been compounded by the decisions of the UK private sector”. The article highlights the UK’s loss of capacity in areas like vaccine manufacture and telecoms.

7. The role of the new funding agency ARIA is defined as looking for “Unknown unknowns”, while NSTC sets out priorities supporting missions like net zero, cyber threats and medical issues like dementia. There is no mention of the UK’s current main funder of upstream research – UKRI – but presumably its role is to direct the more upstream science base to support the missions as defined by NSTC.

8. The role of science and technology in creating economic growth remains important, with an emphasis on scientifically led start-ups and scale-ups, and a reference to “Levelling up” by spreading technology led economic growth outside the Golden Triangle to the whole country.

As always, the effectiveness with which a reorganised structure delivers meaningful results will depend on funding decisions made in the Autumn’s spending review – and thus the degree to which HM Treasury is convinced by the arguments of the NSTC, or compelled by the PM to accept them.

What next for UK Industrial Strategy?

The UK’s industrial strategy landscape was overturned again in the March budget, with the previous strategy (as described in the 2017 White Paper from Greg Clark, Business Minister in the May government) superseded by a Treasury document: “Build back better: our plan for growth”. Is this merely a “rebranding”, or a more substantial repudiation of the very idea of industrial strategy?

From what I can deduce, it is neither of these extremes – instead it reflects some unresolved tension inside government between two views of how industrial policy should be framed. In one view – traditionally associated with HM Treasury – the government should restrict itself to general measures that it is thought will promote productivity growth across the whole economy, resisting any measures that selectively favour one sector of the economy over another. This is often called “horizontal” industrial policy, in contrast to so-called “vertical” industrial strategy, in which particular sectors of the economy that are thought to be of particular importance are singled out for special support. The 2017 White Paper did signal some return to “vertical” industrial strategy, though we can see recent precursors for this going back to Mandelson’s return to the Department for Business, Innovation and Skills (as BEIS, the Department for Business, Energy and Industrial Strategy, was then called) in 2008, and in the continuing support for sectors such as aerospace, automotive and life sciences since then. It seems that the Treasury “Plan for Growth” marks a swing of the pendulum back towards a focus on “horizontal” industrial policy, though the signals remain somewhat mixed.

The biggest signal of a change of direction following the March budget was the abolition of the Industrial Strategy Council. This was a non-statutory body set up by BEIS to monitor and provide advice about the implementation of the Industrial Strategy, chaired by the Bank of England’s Chief Economist, Andy Haldane, and featuring a stellar array of economists and business people. The Industrial Strategy Council’s final annual report gives a great outline of what an industrial strategy should be – “a programme of supply-side policies to drive prosperity in and across the economy”, whose key ingredients should be “scale, longevity and policy co-ordination”. The regional dimensions of industrial strategy, they say, should be co-created with businesses and regional actors (as we’ve seen in the development of local industrial strategies). The report calls for the use of clear metrics to judge success by, but also to look “beyond these “traditional” drivers of productivity to measures of social, human, and natural capital, as well as broader welfare impacts”. Naturally, the Industrial Strategy Council thinks it’s a good idea to have an independent – and preferably statutory – body to provide independent monitoring and advice. There’s a good summary in this FT article by Andy Haldane – UK industrial strategy is dead, long may it live.

Leaving aside the signals that winding up the Industrial Strategy Council might be sending, what’s the substance in the new Treasury document “Build back better: our plan for growth”?

I don’t find a lot to argue with in the diagnosis of the problems. The UK’s poor productivity performance since the global financial crisis is placed front and centre. A telling graph highlights the growing gap in productivity between the UK and France, Germany and the USA, while another graph (shown below) makes it clear that this isn’t just an abstract issue of economics – the stagnation of wages and living standards the UK has seen since the financial crisis closely tracks the productivity slow-down. The UK’s persistent regional disparities in productivity, about which I’ve written at length in the past, are highlighted, too, with the problem identified (correctly, in my view) as arising from “cities outside London not fully capturing the benefits of their size”. The level of analysis of the causes of these issues is somewhat more sketchy, with the Treasury ascribing the problem primarily to persistent low investment in physical capital and skills.


A lost decade. UK Labour productivity and real wages since 2000. From HM Treasury’s Build back better: our plan for growth. Open Government License.

The new framework for Treasury industrial strategy is built on three “pillars for growth” – infrastructure, skills and innovation. This is classical “horizontal” industrial strategy, without a focus on any particular sectors. But there are priorities – three goals, each of rather different character.

The first of these is “levelling up” – a (commendable) commitment to “ensure the benefits of growth are spread to all corners of the UK”, tackling regional disparities in health and education outcomes, supporting struggling towns, and ensuring that “every region and nation of the UK [has] at least one globally competitive city, acting as hotbeds of innovation and hubs of high value activity”. The second is the 2050 net zero greenhouse gas target, where the stress is laid on the number of “green jobs” this will produce. The third priority is the post-Brexit one of “taking advantage of the opportunities that come with our new status as a fully sovereign trading nation” – as “Global Britain”.

The plans for building on these three pillars and three priorities remain vague – some existing commitments are reasserted, such as the plan to increase public infrastructure spending, to meet a total R&D spending target of 2.4% of GDP, to deliver the FE White Paper, to introduce the new science funding agency “ARIA”, and to introduce “Freeports”. Further details are promised later, including an Innovation Strategy and the R&D Places Strategy.

The reduction of emphasis in this document on the sectors that were so prominent in the previous industrial strategy – such as aerospace, automotive, and life sciences – has clearly caused some anxiety in business circles. There’s been a response to this, in the shape of a joint letter from BEIS Secretary of State Kwarteng & Chancellor of the Exchequer Sunak. This emphasises continuity with the previous industrial strategy, asserting that the new plan “builds on the best of the Industrial Strategy from 2017 and makes the most of our strengths right across the economy”. They promise that “this government remains committed to its industrial sectors” and that the existing sector deals (e.g. for Aerospace, Automotive and Life Sciences) will be honoured. And there is the promise of more in the future – “we will follow up the plan for growth with an Innovation Strategy, as well as strategies for net zero, hydrogen and space” and “we will also develop a vision for high-growth sectors and technologies.”

There’s another indication that the words “industrial strategy” may not yet be completely unspeakable in the current government – shortly after the budget, the Ministry of Defence published its “Defence and Security Industrial Strategy”. I think this is positive – another way of creating some priorities in industrial strategy without entirely going down the sector route is for the government to focus on strategically important, long-term goals of the state, and systematically to evaluate what innovation is required and what industrial capacity needs to be built to deliver those goals.

What other goals should be pursued besides defence? The obvious two are net zero and healthcare. As a matter of urgency, the government should be developing a long-term Net Zero Industrial Strategy, to accompany a more detailed road-map for the huge job of transforming the UK’s energy economy. And as we recover from the pandemic, there needs to be a refocused Healthcare Industrial Strategy, building on the successes of the old “Life Sciences Strategy” but focusing more on population health, and learning both the positive and negative lessons from the way the UK’s health and life sciences sector responded to the pandemic. The lately departed Industrial Strategy Council produced a very helpful paper on the lessons that industrial strategy should learn from the state’s involvement in the development of the Oxford/AstraZeneca Covid-19 vaccine.

What would worry me most if I were in government is time. “Developing visions” is all well and good, but budgets are now set until 2022, and there are suggestions of funding “pauses” in some parts of the existing industrial strategy, such as the industry research and development supported by the Aerospace Technology Institute. If new programmes are to begin in 2022, they will take time to ramp up. Meanwhile other dates will be creeping up – 2027 is the date for the 2.4% R&D target, which needs the private sector to make decisions to commit substantial extra funds to business R&D in response to any increase in government R&D. And although 2050 seems far away now for the net zero greenhouse gas target, the scale of the transition, the lifetime of the assets, and the need for innovation to bring down the cost of the transition, mean that the next ten years are crucial.

Not least, the latest date the government can hold an election is the end of 2024. Having repealed the Fixed-term Parliaments Act, the government will probably want to use the regained flexibility to hold the election as much as a year early. There are some who say that this is a government that likes to mark its own homework. Ultimately, though, the homework will be marked by the voters. The government has raised high expectations about a return to economic growth and a levelling up of living standards, especially in the so-called “Red Wall” seats of the Midlands and the North. There’s not a lot of time to demonstrate that the country has even started on that journey, let alone made any substantial progress on it. So whatever the government has decided is the future of industrial strategy, it needs to get on with it.

Novavax – another nanoparticle Covid vaccine

The results for the phase III trial of the Novavax Covid vaccine are now out, and the news seems very good – an overall efficacy of about 90% in the UK trial, with complete protection against severe disease and death. The prospects now look very promising for regulatory approval. What’s striking about this is that we now have a third, completely different class of vaccine that has demonstrated efficacy against COVID-19. We have the mRNA vaccines from BioNTech/Pfizer and Moderna, the viral vector vaccine from Oxford/AstraZeneca, and now Novavax, which is described as “recombinant nanoparticle technology”. As I’ve discussed before (in Nanomedicine comes of age with mRNA vaccines), the Moderna and BioNTech/Pfizer vaccines both crucially depend on a rather sophisticated nanoparticle system that wraps up the mRNA and delivers it to the cell. The Novavax vaccine depends on nanoparticles, too, but it turns out that these are rather different in their character and function to those in the mRNA vaccines – and, to be fair, are somewhat less precisely engineered. So what are these “recombinant nanoparticles”?

All three of these vaccine classes – mRNA, viral vector and Novavax – are based around raising an immune response to a particular protein on the surface of the coronavirus – the so-called “spike” protein, which binds to receptors on the surface of target cells at the start of the process through which the virus makes its entrance. The mRNA vaccines and the viral vector vaccines both hijack the mechanisms of our own cells to get them to produce analogues of these spike proteins in situ. The Novavax vaccine is less subtle – the protein itself is used as the vaccine active ingredient. It’s synthesised in bioreactors by using a genetically engineered insect virus, which is used to infect a culture of cells from a moth caterpillar. The infected cells are harvested and the spike proteins collected and formulated. It’s this stage that, in the UK, will be carried out in the Teesside factory of the contract manufacturer Fujifilm Diosynth Biotechnologies.

The protein used in the vaccine is a slightly tweaked version of the molecule in the coronavirus. The optimal alteration was found by Novavax’s team, led by scientist Nita Patel, who quickly tried out 20 different versions before hitting on the variety that is most stable and immunologically active. The protein has two complications compared to the simplest molecules studied by structural biologists – it’s a glycoprotein, which means that it has short polysaccharide chains attached at various points along the molecule, and it’s a membrane protein (this means that its structure has to be determined by cryo-transmission electron microscopy, rather than X-ray diffraction). It has a hydrophobic stalk, which sticks into the middle of the lipid membrane which coats the coronavirus, and an active part, the “spike”, attached to this, sticking out into the water around the virus. For the protein to work as a vaccine, it has to have exactly the same shape as the spike protein has when it’s on the surface of the virus. Moreover, that shape changes when the virus approaches the cell it is going to infect – so for best results the protein in the vaccine needs to look like the spike protein at the moment when it’s armed and ready to invade the cell.

This is where the nanoparticle comes in. The spike protein is formulated with a soap-like molecule called Polysorbate 80 (aka Tween 80). This consists of a hydrocarbon tail – essentially the tail group of oleic acid – attached to a sugar-like molecule – sorbitan – to which are attached short chains of ethylene oxide. The whole thing is what’s known as a non-ionic surfactant. It’s like soap, in that it has a hydrophobic tail group and a hydrophilic head group. But unlike soap or common synthetic detergents, the head group is, although water soluble, uncharged. The net result is that in water Polysorbate 80 self-assembles into nanoscale droplets – micelles – in which the hydrophobic tails are buried in the core and the hydrophilic head groups cover the surface, interacting with the surrounding water. The shape and size of the micelles are set by the length of the tail group and the area of the head group, so for these molecules the optimum shape is a sphere, probably a few tens of nanometers in diameter.
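The geometric argument in that last sentence can be made semi-quantitative with the standard “critical packing parameter” of surfactant science, p = v/(a₀·lc): values below about 1/3 favour spherical micelles. Here is a minimal sketch using the textbook Tanford estimates for the volume and extended length of a saturated hydrocarbon tail; the ~1 nm² head-group area is my assumed, illustrative figure for a bulky ethoxylated head like Polysorbate 80’s, not a measured value.

```python
# Illustrative estimate of the surfactant critical packing parameter
#   p = v / (a0 * lc)
# which predicts micelle geometry:
#   p < 1/3        -> spherical micelles
#   1/3 < p < 1/2  -> cylindrical micelles
#   p ~ 1          -> bilayers (the double-tailed lipid case)

def tanford_tail_volume_nm3(n_carbons: int) -> float:
    """Tail volume in nm^3 (Tanford: v = 0.0274 + 0.0269 * n)."""
    return 0.0274 + 0.0269 * n_carbons

def tanford_tail_length_nm(n_carbons: int) -> float:
    """Fully extended tail length in nm (Tanford: lc = 0.154 + 0.1265 * n)."""
    return 0.154 + 0.1265 * n_carbons

def packing_parameter(n_carbons: int, head_area_nm2: float) -> float:
    """Critical packing parameter p = v / (a0 * lc)."""
    return tanford_tail_volume_nm3(n_carbons) / (
        head_area_nm2 * tanford_tail_length_nm(n_carbons)
    )

# An 18-carbon (oleyl-like) single tail with an assumed ~1 nm^2
# ethoxylated head group:
p = packing_parameter(18, 1.0)
print(f"packing parameter p = {p:.2f}")  # prints "packing parameter p = 0.21"
```

With these (assumed) numbers p comes out around 0.2, comfortably inside the spherical-micelle regime – consistent with the spheres of a few tens of nanometers described above.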

As far as the spike proteins are concerned, these somewhat squishy nanoparticles look a bit like the membrane of the virus, in that they have an oily core that the stalks can be buried in. When the protein, having been harvested from the insect cells and purified, is mixed up with a polysorbate-80 solution, they end up stuck into the sphere like a bunch of whole cloves stuck into a mandarin orange. Typically each nanoparticle will have about 14 spikes. It has to be said that, in contrast to the nanoparticles carrying the mRNA in the BioNTech and Moderna vaccines, neither the component materials nor the process for making the nanoparticles is particularly specialised. Polysorbate-80 is a very widely used, and very cheap, chemical, extensively used as an emulsifier in convenience food and an ingredient in cosmetics, as well as in many other pharmaceutical formulations, and the formation of the nanoparticles probably happens spontaneously on mixing (though I’m sure there are some proprietary twists and tricks to get it to work properly, there usually are).

But the recombinant protein nanoparticles aren’t the only nanoparticles of importance in the Novavax vaccine. It turns out that simply injecting a protein as an antigen doesn’t usually provoke a strong enough immune response to work as a good vaccine. In addition, one needs to use one of the slightly mysterious substances called “adjuvants” – chemicals that, through mechanisms that are probably still not completely understood, prime the body’s immune system and provoke it to make a stronger response. The Novavax vaccine uses as an adjuvant another nanoparticle – a complex of cholesterol and phospholipid (major components of our own cell membranes, widely available commercially) together with molecules called saponins, which are derived from the Chilean soap-bark tree.

Similar systems have been used in other vaccines, both for animal diseases (notably foot and mouth) and human diseases. The Novavax adjuvant technology was developed by a Swedish company, Isconova AB, which was bought by Novavax in 2013, and consists of two separate fractions of Quillaja saponins, separately formulated into 40 nm nanoparticles and mixed together. The Chilean soap-bark tree is commercially cultivated – the raw extract is used, for example, in the making of the traditional US soft drink, root beer – but production will need to be stepped up (and possibly redirected from fizzy drinks to vaccines) if these vaccines turn out to be as successful as it now seems they might.

Sources: This feature article on Novavax in Science is very informative, though I don’t believe the cartoon depicting the nanoparticle is likely to be accurate: it shows the particle as cylindrical, when it is much more likely to be spherical, and as based on double-tailed lipids rather than the single-tailed non-ionic surfactant that is in fact used in the formulation. This is the most detailed scientific article from the Novavax scientists describing the vaccine and its characterisation. The detailed nanostructure of the vaccine protein in its formulation is described in this recent Science article. The “Matrix-M” adjuvant is described here, while the story of the Chilean soap-bark tree and its products is described in this very nice article in The Atlantic Magazine.

Rubber City Rebels

I’m currently teaching a course on the theory of what makes rubber elastic to Materials Science students at Manchester, and this has reminded me of two things. The first is that this is a great topic for introducing a number of the most central concepts of polymer physics – the importance of configurational entropy, the universality of the large scale statistical properties of macromolecules, the role of entanglements. The second is that the city of Manchester has played a recurring role in the history of the development of this bit of science, which, as always, interacts with technological development in interesting and complex ways.

One of the earliest quantitative studies of the mechanical properties of rubber was published by that great Manchester physicist, James Joule, in 1859. As part of his investigations of the relationship between heat and mechanical work, he measured the temperature change that occurs when rubber is stretched. As anyone can find out for themselves with a simple experiment, rubber is an unusual material in this respect. If you take an elastic band (or, better, a rubber balloon folded into a narrow strip), hold it close to your upper lip, suddenly stretch it and then put it to your lip, you can feel that it significantly heats up – and then, if you release the tension again, it cools down again. This is a crucial observation for understanding how it is that the elasticity of rubber arises from the reduction in entropy that occurs when a randomly coiled polymer strand is stretched.

But this wasn’t the first observation of the effect – Joule himself referred to an 1805 article by John Gough, in the Memoirs of the Manchester Literary and Philosophical Society, drawing attention to this property of natural rubber, and the related property that a strand of the material held under tension would contract on being heated. John Gough himself was a fascinating figure – a Quaker from Kendal, a town on the edge of England’s Lake District, blind, as a result of a childhood illness, he made a living as a mathematics tutor, and was a friend of John Dalton, the Manchester based pioneer of the atomic hypothesis. All of this is a reminder of the intellectual vitality of that time in the fast industrialising provinces, truly an “age of improvement”, while the universities of Oxford and Cambridge had slipped into the torpor of qualifying the dim younger offspring of the upper classes to become Anglican clergymen.

Joule’s experiments were remarkably precise, but there was another important difference from Gough’s pioneering observation. Joule was able to use a much improved version of the raw natural rubber (or caoutchouc) that Gough used; the recently invented process of vulcanisation produced a much stronger, stabler material than the rather gooey natural precursor. The original discovery of the process of vulcanisation was made by the self-taught American inventor Charles Goodyear, who found in 1839 that rubber could be transformed by being heated with sulphur. It wasn’t for nearly another century that the chemical basis of this process was understood – the sulphur creates chemical bridges between the long polymer molecules, forming a covalently bound network. Goodyear’s process was rediscovered – or possibly reverse engineered – by the industrialist Thomas Hancock, who obtained the English patents for it in 1843 [2].

Appropriately for Manchester, the market that Hancock was serving was for improved raincoats. The Scottish industrialist Mackintosh had created his eponymous garment from a waterproof fabric consisting of a sandwich of rubber between two textile sheets; Hancock meanwhile had developed a number of machines and technologies for processing natural rubber, so it was natural for the two to enter into partnership with their Manchester factory making waterproof fabric. Their firm prospered; Goodyear, though, failed to make money from his invention and died in poverty (the Goodyear tire company was named after him, but only some years after his death).

At that time, rubber was a product of the Amazonian rain forest, harvested from wild trees by indigenous people. In a well known story of colonial adventurism, 70,000 seeds of the rubber tree were smuggled out of Brazil by the explorer Henry Wickham, successfully cultivated at Kew Gardens, with the plants exported to the British colonies of Malaya and Ceylon to form the basis of a new plantation rubber industry. This expansion and industrialisation of the cultivation of rubber came at an opportune time – the invention of the pneumatic tyre and the development of the automobile industry led to a huge new demand for rubber around the turn of the century, which the new plantations were in a position to meet.

Wild rubber was also being harvested at this time in the Belgian Congo to meet this demand, involving an atrocious level of violent exploitation of the indigenous population by the colonisers. But most of the rubber being produced to meet the new demand came from the British Empire plantations; this cultivation may not have been accompanied by the atrocities committed in the Congo, but the competitive prices at which plantation rubber could be produced reflected not just the capital invested and high productivity achieved, but also the barely subsistence wages paid to the workforce, imported from India and China.

Back in England, in 1892 the Birmingham based chemist William Tilden had demonstrated that rubber could be synthesised from turpentine [3]. But this invention created little practical interest in England. And why would it, given that the natural product is of a very high quality, and the British Empire had successfully secured ample supplies through its colonial plantations? The process was rediscovered by the Russian chemist Kondakov in 1901, and taken up by the German chemical company Bayer in time for the synthetic product to play a role in the First World War, when German access to plantation rubber was blocked by the allies. At this time the quality of the synthetic product was much worse than that of natural rubber; nonetheless German efforts to improve synthetic rubber continued in the 1920’s and 30’s, with important consequences in the Second World War.

It’s sobering[4] to realise that by 1919, the rubber industry constituted a global industry with an estimated value of £250 million (perhaps £12 billion in today’s money), on the cusp of a further massive expansion driven by the mass adoption of the automobile – and yet scientists were completely ignorant, not just of the molecular origins of rubber’s elasticity, but even of the very nature of its constituent molecules. It was the German chemist Hermann Staudinger who, in 1920, suggested that rubber was composed of very long, linear molecules – polymers. Obvious though this may seem now, it was a controversial suggestion at the time, creating bitter disputes in the community of German chemists, disputes that gained a political tinge with the rise of the Nazi regime. Staudinger remained in Germany throughout the Second World War, despite being regarded as deeply ideologically suspect.

Staudinger was right about rubber being made up of long-chain molecules, but he was wrong about the form those molecules would take, believing that they would naturally adopt the form of rigid rods. The Austrian scientist Herman Mark, who was working for the German chemical combine IG Farben on synthetic rubber and other early polymers, realised that these long molecules would be very flexible and take up a random coil conformation. Mark’s father was Jewish, so he left IG Farben, first for Austria, and then after the Anschluss he escaped to Canada. At the University of Vienna in the 1930’s, Mark developed, with Eugene Guth, the statistical theory that explains the elastic behaviour of rubber in terms of the entropy changes in the chains as they are stretched and unstretched. This, at last, provided the basic explanation for the effect Gough discovered more than a century before, and that Joule quantified – the rise of temperature that occurs when rubber is stretched.
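The heart of that entropic argument can be written down in a couple of lines. What follows is the textbook Gaussian-chain result, not a transcription of Mark and Guth’s original papers: for a chain of N freely-jointed segments of length b, the number of conformations with end-to-end vector of magnitude R is Gaussian, so

```latex
\begin{align}
  S(R) &= \mathrm{const} - \frac{3 k_B R^2}{2 N b^2}
       && \text{(entropy of a Gaussian chain)} \\
  f &= -T \frac{\partial S}{\partial R} = \frac{3 k_B T}{N b^2}\, R
       && \text{(entropic spring force)}
\end{align}
```

The retractive force is proportional to the absolute temperature T, which is why a strand held under fixed tension contracts on heating (Gough’s observation), and why rapid – effectively adiabatic – stretching warms the rubber up (Joule’s measurement).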

By the start of the Second World War, both Mark and Guth found themselves in the USA, where the study of rubber was suddenly to become very strategically important indeed. The entry of Japan into the war and the fall of British Malaya cut off allied supplies of natural rubber, leading to a massive scale up of synthetic rubber production. Somewhat ironically, this was based on a pre-war discovery by IG Farben of a version of synthetic rubber with greatly improved properties over earlier versions – styrene-butadiene rubber (Buna-S). Standard Oil of New Jersey had an agreement with IG Farben to codevelop and market Buna-S in the USA.

The creation, almost from scratch, of a massive synthetic rubber industry in the USA was, of course, just one dimension of the USA’s World War 2 production miracle, but its scale is still astonishing [5]. The industry scaled up, under government direction, from producing 231 tons of general purpose rubber in 1941, to a monthly output of 70,000 tons in 1945. 51 new plants were built to produce the massive amounts of rubber needed for aircraft, tanks, trucks and warships. The programme was backed up by an intensive R&D effort, involving Mark, Guth, Paul Flory (later to win the Nobel prize for chemistry for his work on polymer science) and many others.

There was no significant synthetic rubber programme in the UK in the 1920’s and 1930’s. The British Empire was at its widest extent, providing ample supplies of natural rubber, as well as new potential markets for the material. That didn’t mean that there was no interest in improving scientific understanding of the material – on the contrary, the rubber producers in Malaya first sponsored research in Cambridge and Imperial, then collectively created a research laboratory in England, led by a young physical chemist from near Manchester, Geoffrey Gee. Gee, together with Leslie Treloar, applied the new understanding of polymer physics to understand and control the properties of natural rubber. After the war, realising that synthetic rubber was no longer just an inferior substitute, but a major threat to the markets for natural rubber, Gee introduced a programme of standardisation of rubber grades which helped the natural product maintain its market position.

Gee moved to the University of Manchester in 1953, and some time later Treloar moved to the neighbouring institution, UMIST, where he wrote the classic textbook on rubber elasticity. Manchester in the 1950’s and 60’s was a centre of research into rubber and networks of all kinds. Perhaps the most significant new developments were made in theory, by Sam Edwards, who joined Manchester’s physics department in 1958. Edwards was a brilliant theoretical physicist, who had learnt the techniques of quantum field theory with Julian Schwinger in a postdoc at Harvard. Edwards, whose interest in the fundamental problems of polymer physics had been sparked by Gee, realised that there are some deep analogies between the mathematics of polymer chains and the quantum mechanical description of the behaviour of electrons. He was able to rederive, in a much more rigorous way that demonstrated the universality of the results, some of the fundamental predictions of polymer physics that had been postulated by Flory, Mark, Guth and others, before going on to results of his own of great originality and importance.

Edwards’s biggest contribution to the theory of rubber elasticity was to introduce methods for dealing with the topological constraints that occur in dense, cross-linked systems of linear chains. Polymer chains are physical objects that can’t cross each other, something that the classical theories of Guth and Mark completely neglect. But it was by then obvious that the entanglements of polymer molecules could themselves behave as cross-links, even in the absence of the chemical cross linking of vulcanisation (in fact, this is already suggested by Gough’s original 1805 observations, which were made on raw, unvulcanised, rubber). Edwards introduced the idea of a “tube” to represent those topological constraints. Combined with the insight of the French physicist Pierre-Gilles de Gennes, this led not just to improved models for rubber elasticity taking account of entanglements, but a complete molecular theory of the complex viscoelastic behaviour of polymer melts [6].

Another leading physicist who emerged from this Manchester school was Julia Higgins, who learnt about polymers while she was a research fellow in the chemistry department in the 1960’s. Higgins subsequently worked in Paris, where in 1974 she carried out, with Cotton, des Cloizeaux, Benoit and others, what I think might be one of the most important single experiments in polymer science. Using a neutron source to study the scattering from a melt of polymer molecules, some of which were deuterium labelled, they were able to show that even in the dense, entangled environment of a polymer melt, a single polymer chain still behaves as a classical random walk. This is in contrast with the behaviour of polymers in solution, where the chains are expanded by a so-called “excluded volume” interaction – arising from the fact that two segments of a single polymer chain can’t be in the same place at the same time. This result had been anticipated by Flory, in a rather intuitive and non-rigorous way, but it was Edwards who proved it rigorously.
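The contrast that the neutron experiment established can be summarised in the standard scaling forms for the chain size R as a function of the number of segments N. I’m quoting textbook results here; the modern value of the excluded-volume exponent is ν ≈ 0.588, slightly below Flory’s simple estimate of 3/5:

```latex
\begin{align}
  R_{\text{melt}} &\sim b\, N^{1/2}
    && \text{(ideal random walk: Flory's conjecture, Edwards's proof)} \\
  R_{\text{good solvent}} &\sim b\, N^{\nu}, \quad \nu \approx \tfrac{3}{5}
    && \text{(swollen, self-avoiding walk)}
\end{align}
```

The deuterium-labelling trick matters because the labelled chains scatter neutrons differently from their unlabelled neighbours, so the size of a single chain can be picked out of an otherwise indistinguishable melt.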

[1] My apologies for the rather contrived title. No-one calls Manchester “Rubber City” – it is traditionally a city built on cotton. The true Rubber City is, of course, Akron Ohio. Neither can anyone really describe any of the figures I talk about here as “rebels” (with the possible exception of Staudinger, who in his way is rather a heroic figure). But as everyone knows [7], Akron was a centre of music creativity in the mid-to-late 1970s, producing bands such as Devo, Pere Ubu, and the Rubber City Rebels, whose eponymous song has remained a persistent earworm for me since the late 1970’s, and from which I’ve taken my title.
[2] And I do mean “English” here, rather than British or UK – it seems that Scotland had its own patent laws then, which, it turns out, influenced the subsequent development of the rubber boot industry.
[3] It’s usually stated that Tilden succeeded in polymerising isoprene, but a more recent reanalysis of the original sample of synthetic rubber has revealed that it is actually poly(2,3-dimethylbutadiene) (https://www.sciencedirect.com/science/article/pii/S0032386197000840)
[4] At least, it’s sobering for scientists like me, who tend to overestimate the importance of having a scientific understanding to make a technology work.
[5] See “U.S. Synthetic Rubber Program: National Historic Chemical Landmark” – https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/syntheticrubber.html
[6] de Gennes won the 1991 Nobel Prize for Physics for his work on polymers and liquid crystals. Many people, including me, strongly believed that this prize should have been shared with Sam Edwards. It has to be said that both men, who were friends and collaborators, dealt with this situation with great grace.
[7] “Everyone” here meaning those people (like me) born between 1958 and 1962 who spent too much of their teenage years listening to the John Peel show.

How does the UK rank as a knowledge economy?

Now that the UK has withdrawn from the European single market, it will need to rethink its current and potential future position in the world economy. Some helpful context is provided, perhaps, by some statistics summarising the value added from knowledge and technology intensive industries, taken from the latest edition of the USA’s National Science Board’s Science and Engineering Indicators 2020.

The plot shows the changing share of world value added in a set of knowledge & technology intensive industries, as defined by an OECD industry classification based on R&D intensity. This includes five high R&D intensive industries: aircraft; computer, electronic, and optical products; pharmaceuticals; scientific R&D services; and software publishing. It also includes eight medium-high R&D intensive industries: chemicals (excluding pharmaceuticals); electrical equipment; information technology (IT) services; machinery and equipment; medical and dental instruments; motor vehicles; railroad and other transportation; and weapons. It’s worth noting that, in addition to high value manufacturing sectors, it includes some knowledge intensive services. But it does exclude public knowledge intensive services in education and health care, and, in the private sector, financial services and those business services outside R&D and IT services.

From this plot we can see that the UK is a small but not completely negligible part of the world’s advanced economy. This is perhaps a useful perspective from which to view some of the current talk of a world-beating “global Britain”. The big story is the huge rise of China, and in this context it is inevitable that the rest of the world’s share of the advanced economy has fallen. But the UK’s fall is larger than that of its competitors (-46%, cf -19% for the USA and -13% for the rest of the EU).

The absolute share tells us about the UK’s overall relative importance in the world economy, and should be helpful in stressing the need, in developing industrial strategy, for some focus. Another perspective is provided if we normalise the figures by population, which gives us a sense of the knowledge intensity of the economy, and might give a pointer to prospects for future productivity growth. The table shows a rank-ordered list by country of value added in knowledge & technology intensive industries per head of population in 2002 and 2018. The values for Ireland, and possibly Switzerland, may be distorted by transfer pricing effects.
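The normalisation behind such a table is simple, but worth spelling out. Here is a minimal sketch in Python of the calculation: divide each country’s value added in knowledge & technology intensive industries by its population, then rank the results. The country names and figures below are entirely hypothetical placeholders, not the National Science Board data; they only illustrate the arithmetic.

```python
# Sketch of the per-capita normalisation described above.
# All figures are hypothetical placeholders, NOT the NSB Indicators data.

# country -> (value added in KTI industries, $bn; population, millions)
countries = {
    "Country A": (500.0, 60.0),
    "Country B": (1200.0, 330.0),
    "Country C": (90.0, 5.0),
}

# Normalise: value added per head of population, in $ thousands per person
per_capita = {
    name: value_added * 1e9 / (population * 1e6) / 1e3
    for name, (value_added, population) in countries.items()
}

# Rank-ordered list, most knowledge-intensive economy first
ranking = sorted(per_capita.items(), key=lambda kv: kv[1], reverse=True)
for name, value in ranking:
    print(f"{name}: ${value:.1f}k per head")
```

Note that a large absolute share (Country B here) can coexist with a modest per-capita figure, which is exactly the distinction the two views of the data are meant to draw out.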