As times change, the UK’s R&D landscape needs to change too

I took part in a panel discussion last Thursday at the Royal Society, about the UK’s R&D landscape. The other panelists were Anna Dickinson from the think tank Onward, and Ben Johnson, Policy Advisor at the Department of Science, Innovation and Technology, and our chair was Athene Donald. This is a much expanded and tidied version of my opening remarks.

What is the optimum shape of the research and development landscape for the UK? The interesting and important questions here are:

  • What kind of R&D is being done?
  • In what kind of institution is R&D being done?
  • What kind of people do R&D?
  • Who sets the priorities?
  • Who pays for it?

I’m a physicist, but I want to start with lessons from history and geography.

If there’s one lesson we should learn from history, it’s that the way things are now isn’t the way they have always been. And we should be curious about how different countries arrange their R&D landscapes – not just the Anglophone countries and our European partners and neighbours that we are so familiar with, but the East Asian countries that have been so economically successful recently.

The particular form that a nation’s R&D landscape takes arises from a set of political and economic circumstances, influenced by the outcome of ideological arguments that take place both within the science community and in wider society.

I’ve just read Iwan Rhys Morus’s fascinating and engaging book on 19th century Science and Technology: “How the Victorians took us to the Moon”. It’s fitting that the book begins with a discussion of just such an ideological debate – about the future orientation of the Royal Society after the death of Sir Joseph Banks in 1820, at the end of his autocratic – and aristocratic – 41-year rule over the Society. The R&D landscape that emerged from these struggles was the one appropriate for the United Kingdom in the Victorian era – a nation going through an industrial revolution, and acquiring a world empire. That landscape was dominated by men of science (and they were men), who believed, above all, in the idea of progress. They valued self-discipline, self-confidence, precision and systematic thinking, while sharing assumptions about gender, class and race that would no longer be acceptable in today’s world.

Morus argues that many of the attitudes, assumptions and institutions of the science community that led to the great technological advances of the 20th century were laid down in the Victorian period. As someone who received their training in one of those great Victorian institutions – Cambridge’s Cavendish Laboratory – that rings true to me. I vividly remember as a graduate student that the great physicist Sir Sam Edwards had a habit of dismissing some rival theorist with the words “it was all done by Lord Rayleigh”. Lots of it probably was.

But also, learning how to do science in the mid-1980’s, I was just at the tail end of another era – what David Edgerton calls the Warfare State. The UK was a nation in which science had been subservient to the defence needs of two world wars, and a Cold War in which technology was the front line. The state ran a huge defence research establishment, and an associated nuclear complex where the lines between civil nuclear power and the nuclear weapons programme were blurred. This was a corporatist world, in which the boundaries between big, national companies like GEC and ICI and the state were themselves not clear. And there was a lot of R&D being done – in 1980, the UK was one of the most R&D intensive countries in the world.

We live in a very different world now. R&D in the private sector still dominates, but now pretty much half of it is done in the labs of overseas owned multinationals. In a world in which R&D is truly globalised, it doesn’t make a lot of sense to talk about UK plc. There’s much more emphasis on the role of spin-outs and start-ups – venture capital supported companies based on protected intellectual property. This too is globalised – we agonise about how few of these companies, even when they are successful, stay to scale up in the UK rather than moving to Germany or the USA. The big corporate laboratories of the past, where use-inspired basic research co-existed with more applied work, are a shadow of their former selves or gone entirely, eroded by a new focus on shareholder value.

Meanwhile, we have seen UK governments systematically withdraw support from applied research, as Jon Agar’s work has documented. After a couple of decades in which university research had been squeezed, the 2000’s saw a significant increase in support through the research councils, but this came at the cost of continual erosion of public sector research establishments. This has left the research councils in a much more dominant position in the government funding landscape – the fraction of government R&D funding allocated through research councils has increased from about 12% in the mid-1980’s to around 30% now. But the biggest rise in government support for R&D has come through the non-specific subsidy for private sector – the R&D tax credit – whose cost rose from just over £1 billion in 2010 to more than £7 billion in 2019.

These dramatic changes in the R&D landscape that have unfolded between the 1980s and now should be understood in the context of the wider changes in the UK’s political economy over that period, often characterised as the dominance of neoliberalism and globalisation. There has been an insistence on the primacy of market mechanisms, and the full integration of the UK in a global free-trading environment, together with a rejection of any idea of state planning or industrial strategy. The shape of the UK economy changed profoundly, with a dramatic shrinking of the manufacturing sector, the exacerbation of regional economic imbalances, and a persistent trade deficit with the rest of the world. The rise and fall of North Sea Oil and the development of a bubble in financial services have contributed to these trends.

The world looks very different now. The pandemic taught us that global supply chains can be very fragile in a crisis, while the Ukraine war reminded us that state security still, ultimately, depends on high technology and productive capacity. The slower crisis of climate change continues – we face a wrenching economic transition to move our energy economy to a zero carbon basis, while the already emerging effects of climate disruption will be challenging. In the UK, we have a failing economy, where productivity growth has flat-lined since 2008; the consequences are that wages have stagnated to a degree unprecedented in living memory and public services have deteriorated to politically unacceptable levels.

In place of globalisation, we see a retreat to trading blocs. Industrial strategy has returned to the USA at scale: $50 bn from the CHIPS Act to rebuild its semiconductor industry, and $370 bn for a green transition. The EU is responding. Of course, in East Asia and China industrial strategy never went away.

So the question we face now is whether our R&D landscape is the right one for the times we live in. I don’t think so. Of course, our values are different from those both of the Victorians and mid-20th century technocrats, and our circumstances are different too. Many of the assumptions of the post-1980’s political settlement are now in question. So how must the landscape evolve?

The new R&D landscape needs to be more focused on the pressing problems we face: the net zero transition, the productivity slowdown, poor health outcomes, the security of the state. Here in the Royal Society, I don’t need to make the case for the importance of basic science, exploratory research, the unfettered inquiries of our most creative scientists. But in addition, we need more applied R&D, and it needs to be more geographically dispersed and more inclusive. It has to build on the existing strengths of the country – but by itself that is not enough, and we will have to rebuild some of the innovation and manufacturing capacity that we have lost. And I think this manufacturing and innovation capacity is important for basic science too, because it’s this technological capacity that allows us to implement and benefit from the basic science. For example, one can be excited by the opportunities of quantum computing, but making it work will probably rely on manufacturing technologies already developed for semiconductors.

The national R&D landscape we have has evolved as the material conditions and ideological assumptions of the nation have changed, and as those conditions and assumptions change, so must the national R&D landscape change in response.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £900m on an exascale computer. It should specify that the processor design should come from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits supply of computing power from hardware improvements, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. But it’s worth comparing this with the precedent of the post-GFC bank nationalisations, which were at a similar scale.

3. The UK should not… (& almost certainly not possible in any case)

The UK should not attempt to create a UK based manufacturing capability in leading edge logic chips. This would need to be done by one of the 3 international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a multi-$bn subsidy race.

Moreover, decades of neglect of semiconductor manufacturing probably means the UK doesn’t, in any case, have the skills to operate a leading edge fab.

4. The UK should not…

The UK should not attempt to create UK based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.
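The series construction described in the caption can be sketched in a few lines: anchor each country’s GDP at PPP in the base year, chain it forwards and backwards with annual real GDP growth rates, then divide by population. The function below is an illustrative sketch of that method only; its name and any numbers used with it are placeholders, not actual IMF data.

```python
def chained_gdp_per_capita(years, base_year, base_gdp, growth, pop):
    """Real GDP per capita at PPP, anchored at base_year.

    years: consecutive calendar years to cover
    base_gdp: GDP at PPP (international dollars) in base_year
    growth[y]: real GDP growth in year y relative to year y-1, as a fraction
    pop[y]: population in year y
    """
    gdp = {base_year: base_gdp}
    # roll forward from the base year, applying each year's growth...
    for y in sorted(y for y in years if y > base_year):
        gdp[y] = gdp[y - 1] * (1 + growth[y])
    # ...and backward, dividing out each following year's growth
    for y in sorted((y for y in years if y < base_year), reverse=True):
        gdp[y] = gdp[y + 1] / (1 + growth[y + 1])
    # express the chained series per capita
    return {y: gdp[y] / pop[y] for y in years}
```

Because each point is built multiplicatively from the base year, the resulting series is consistent in level (via PPP in the base year) and in trend (via the growth rates), which is what allows the three countries to be compared on one chart.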

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?

What should the UK do about semiconductors? Part 3: towards a UK Semiconductor Strategy

We are currently waiting for the UK government to publish its semiconductor strategy. As context for such a strategy, my previous two blogposts have summarised the global state of the industry:

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry

Here I consider what a realistic and useful UK semiconductor strategy might include.

To summarise the global context, the essential nations in advanced semiconductor manufacturing are Taiwan, Korea and the USA for making the chips themselves. In addition, Japan and the Netherlands are vital for crucial elements of the supply chain, particularly the equipment needed to make chips. China has been devoting significant resource to develop its own semiconductor industry – as a result, it is strong in all but the most advanced technologies for chip manufacture, but is vulnerable to being cut off from crucial elements of the supply chain.

The technology of chip manufacture is approaching maturity; the very rapid rates of increase in computing power we saw in the 1980s and 1990s, associated with a combination of Moore’s law and Dennard scaling, have significantly slowed. At the technology frontier we are seeing diminishing returns from the ever larger investments in capital and R&D that are needed to maintain advances. Further improvements in computer performance are likely to put more premium on custom designs for chips optimised for specific applications.

The UK’s position in semiconductor manufacturing is marginal in a global perspective, and not a relative strength in the context of the overall UK economy. There is actually a slightly stronger position in the wider supply chain than in chip manufacture itself, but the most significant strength is not in manufacture, but design, with ARM having a globally significant position and newcomers like Graphcore showing promise.

The history of the global semiconductor industry is a history of major government interventions coupled with very large private sector R&D spending, the latter driven by dramatically increasing sales. The UK essentially opted out of the race in the 1980’s, since when Korea and Taiwan have established globally leading positions, and China has become a fast expanding new entrant to the industry.

The more difficult geopolitical environment has led to a return of industrial strategy on a huge scale, led by the USA’s CHIPS Act, which appropriates more than $50 billion over 5 years to reestablish its global leadership, including $39 billion on direct subsidies for manufacturing.

How should the UK respond? What I’m talking about here is the core business of manufacturing semiconductor devices and the surrounding supply chain, rather than information and communication technology more widely. First, though, let’s be clear about what the goals of a UK semiconductor strategy could be.

What is a semiconductor strategy for?

A national strategy for semiconductors could have multiple goals. The UK Science and Technology Framework identifies semiconductors as one of five critical technologies, judged against criteria including their foundational character, market potential, as well as their importance for other national priorities, including national security.

It might be helpful to distinguish two slightly different goals for the semiconductor strategy. The first is the question of security, in the broadest sense, prompted by the supply problems that emerged in the pandemic, and heightened by the growing realisation of the importance and vulnerability of Taiwan in the global semiconductor industry. Here the questions to ask are, what industries are at risk from further disruptions? What are the national security issues that would arise from interruptions in supply?

The government’s latest refresh of its integrated foreign and defence strategy promises to “ensure the UK has a clear route to assured access for each [critical technology], a strong voice in influencing their development and use internationally, a managed approach to supply chain risks, and a plan to protect our advantage as we build it.” It reasserts the “own, collaborate, access” framework, a model introduced in the previous Integrated Review.

This framework is a welcome recognition of the fact that the UK is a medium-sized country which can’t do everything; in order to have access to the technology it needs, it must in some cases collaborate with friendly nations, and in others access technology through open global markets. But it’s worth asking what exactly is meant by “own”. This is defined in the Integrated Review thus: “Own: where the UK has leadership and ownership of new developments, from discovery to large-scale manufacture and commercialisation.”

In what sense does the nation ever own a technology? There are still a few cases where wholly state owned organisations retain both a practical and legal monopoly on a particular technology – nuclear weapons remain the most obvious example. But technologies are largely controlled by private sector companies with a complex, and often global, ownership structure. We might think that the technologies of semiconductor integrated circuit design that ARM developed are British, because the company is based in Cambridge. But it’s owned by a Japanese investment company, which has a great deal of latitude in what it does with it.

Perhaps it is more helpful to talk about control than ownership. The UK state retains a certain amount of control of technologies owned by companies with a substantial UK presence – it has been able in effect to block the purchase of the Newport Wafer Fab by the Chinese-owned company Nexperia. But this assertiveness is a very recent phenomenon; previously, UK governments were entirely relaxed about the acquisition of technology companies by overseas buyers. Indeed, in 2016 ARM’s acquisition by SoftBank was welcomed by the then PM, Theresa May, as being in the UK’s national interest, and a vote of confidence in post-Brexit Britain. The government has taken new powers to block acquisitions of companies through the National Security and Investment Act 2021, but this can only be done on grounds of national security.

The second goal of a semiconductor strategy is as part of an effort to overcome the UK’s persistent stagnation of economic productivity, to “generate innovation-led economic growth”, in the words of a recent Government response to a BEIS Select Committee report. As I have written about at length, the UK’s productivity problem is serious and persistent, so there’s certainly a need to identify and support high value sectors with the potential for growth. There is a regional dimension here, recognised in the government’s aspiration for the strategy to create “high paying jobs throughout the UK”. So it would be entirely appropriate for a strategy to support the existing cluster in the Southwest around Bristol and into South Wales, as well as to create new clusters where there are strengths in related industry sectors.

The economies of Taiwan and Korea have been transformed by their very effective deployment of an active industrial strategy to take advantage of an industry at a time of rapid technological progress and expanding markets. There are two questions for the UK now. Has the UK state (and the wider economic consensus in the country) overcome its ideological aversion to active industrial strategy on the East Asian model to intervene at the necessary scale? And, would such an intervention be timely, given where semiconductors are in the technology cycle? Or, to put it more provocatively, has the UK left it too late to capture a significant share of a technology that is approaching maturity?

What, realistically, can the UK do about semiconductors?

What interventions are possible for the UK government in devising a semiconductor strategy that addresses these two goals – of increasing the UK’s economic and military security by reducing its vulnerability to shocks in the global semiconductor supply chain, and of improving the UK’s economic performance by driving innovation-led economic growth? There is a menu of options, and what the government chooses will depend on its appetite for spending money, its willingness to take assets onto its balance sheet, and how much it is prepared to intervene in the market.

Could the UK establish the manufacturing of leading edge silicon chips? This seems implausible. This is the most sophisticated manufacturing process in the world, enormously capital intensive and drawing on a huge amount of proprietary and tacit knowledge. The only way it could happen is if one of the three companies currently at or close to the technology frontier – Samsung, Intel or TSMC – could be enticed to establish a manufacturing plant in the UK. What would be in it for them? The UK doesn’t have a big market, and it has a labour market that is high cost yet lacking in the necessary skills, so its only chance would be to offer large direct subsidies.

In any case, the attention of these companies is elsewhere. TSMC is building a new plant in Arizona, at a cost of $40 billion, while Samsung’s new plant in Texas is costing $25 billion, with the US government using some of the CHIPS Act money to subsidise these investments. Despite Intel’s well-reported difficulties, it is planning significant investment in Europe, supported by inducements from the EU and its member states under the EU Chips Act. Intel has committed €12 billion to expanding its operations in Ireland and €17 billion for a new fab in the existing semiconductor cluster in Saxony, Germany.

From the point of view of security of supply, it’s not just chips from the leading edge that are important; for many applications, in automobiles, defence and industrial machinery, legacy chips produced by processes that are no longer at the leading edge are sufficient. In principle establishing manufacturing facilities for such legacy chips would be less challenging than attempting to establish manufacturing at the leading edge. However, here, the economics of establishing new manufacturing facilities is very difficult. The cost of producing chips is dominated by the need to amortise the very large capital cost of setting up a fab, but a new plant would be in competition with long-established plants whose capital cost is already fully depreciated. These legacy chips are a commodity product.

So in practice, our security of supply can only be assured by reliance on friendly countries. It would have been helpful if the UK had been able to participate in the development of a European strategy to secure semiconductor supply chains, as Hermann Hauser has argued for. But what does the UK have to contribute to the creation of more resilient supply chains, localised in networks of reliably friendly countries?

The UK’s key asset is its position in chip design, with ARM as the anchor firm. But, as a firm based on intellectual property rather than the big capital investments of fabs and factories, ARM is potentially footloose, and as we’ve seen, it isn’t British by ownership. Rather it is owned and controlled by a Japanese conglomerate, which needs to sell it to raise money, and will seek to achieve the highest return from such a sale. After the proposed sale to Nvidia was blocked, the likely outcome now is a flotation on the US stock market, where the typical valuations of tech companies are higher than they are in the UK.

The UK state could seek to maintain control over ARM by the device of a “Golden Share”, as it currently does with Rolls-Royce and BAE Systems. I’m not sure what the mechanism for this would be – I would imagine that the only surefire way of doing this would be for the UK government to buy ARM outright from SoftBank in an agreed sale, and then subsequently float it itself with the golden share in place. I don’t suppose this would be cheap – the agreed price for the thwarted Nvidia takeover was $66 billion. The UK government would then attempt to recoup as much of the purchase price as possible through a subsequent flotation, but the presence of the golden share would presumably reduce the market value of the remaining shares. Still, the UK government did spend £46 billion nationalising a bank.

What other levers does the UK have to consolidate its position in chip design? Intelligent use of government purchasing power is often cited as an ingredient of a successful industrial policy, and here there is an opportunity. The government made the welcome announcement in the Spring Budget that it would commit £900 m to build an exascale computer to create a sovereign capability in artificial intelligence. The procurement process for this facility should be designed to drive innovation in the design, by UK companies, of specialised processing units for AI with lower energy consumption.

A strong public R&D base is a necessary – but not sufficient – condition for an effective industrial strategy in any R&D intensive industry. As a matter of policy, the UK ran down its public sector research effort in mainstream silicon microelectronics, in response to the UK’s overall weak position in the industry. The Engineering and Physical Sciences Research Council announces on its website that: “In 2011, EPSRC decided not to support research aimed at miniaturisation of CMOS devices through gate-length reduction, as large non-UK industrial investment in this field meant such research would have been unlikely to have had significant national impact.” I don’t think this was – or is – an unreasonable policy given the realities of the UK’s global position. The UK maintains academic research strength in areas such as III-V semiconductors for optoelectronics, 2-d materials such as graphene, and organic semiconductors, to give a few examples.

Given the sophistication of state of the art microelectronic manufacturing technology, for R&D to be relevant and translatable into commercial products it is important that open access facilities are available to allow the prototyping of research devices, and with pilot scale equipment to demonstrate manufacturability and facilitate scale-up. The UK doesn’t have research centres on the scale of Belgium’s IMEC, or Taiwan’s ITRI, and the issue is whether, given the shallowness of the UK’s industry base, there would be a customer base for such a facility. There are a number of university facilities focused on supporting academic researchers in various specialisms – at Glasgow, Manchester, Sheffield and Cambridge, to give some examples. Two centres are associated with the Catapult Network – the National Printable Electronics Centre in Sedgefield, and the Compound Semiconductor Applications Catapult in South Wales.

This existing infrastructure is certainly insufficient to support an ambition to expand the UK’s semiconductor sector. But a decision to enhance this research infrastructure will need a careful and realistic evaluation of what niches the UK could realistically hope to build some presence in, building on areas of existing UK strength, and understanding the scale of investment elsewhere in the world.

To summarise, the UK must recognise that, in semiconductors, it is currently in a relatively weak position. For security of supply, the focus must be on staying close to like-minded countries like our European neighbours. For the UK to develop its own semiconductor industry further, the emphasis must be on finding and developing particular niches where the UK does have some existing strength to build on, and there is the prospect of rapidly growing markets. And the UK should look after its one genuine area of strength, in chip design.

Four lessons for industrial strategy

What should the UK do about semiconductors? One tempting, but unhelpful, answer is “I wouldn’t start from here”. The UK’s current position reflects past choices, so to conclude, perhaps it’s worth drawing some more general lessons about industrial strategy from the history of semiconductors in the UK, and globally.

1. Basic research is not enough

The historian David Edgerton has observed that it is a long-running habit of the UK state to use research policy as a substitute for industrial strategy. Basic research is relatively cheap, compared to the expensive and time-consuming process of developing and implementing new products and processes. In the 1980’s, it became conventional wisdom that governments should not get involved in applied research and development, which should be left to private industry, and, as I recently discussed at length, this has profoundly shaped the UK’s research and development landscape. But excellence in basic research has not produced a competitive semiconductor industry.

The last significant act of government support for the semiconductor industry in the UK was the Alvey programme of the 1980s. The programme was not without some technical successes, but it clearly failed in its strategic goal of keeping the UK semiconductor industry globally competitive. As the official evaluation of the programme concluded in 1991 [1]: “Support for pre-competitive R&D is a necessary but insufficient means for enhancing the competitive performance of the IT industry. The programme was not funded or equipped to deal with the different phases of the innovation process capable of being addressed by government technology policies. If enhanced competitiveness is the goal, either the funding or scope of action should be commensurate, or expectations should be lowered accordingly”.

But the right R&D institutions can be useful; the experience of both Japan and the USA shows the value of industry consortia – but this only works if there is already a strong, R&D intensive industry base. The creation of TSMC shows that it is possible to create a global giant from scratch, and this emphasises the role of translational research centres, like Taiwan’s ITRI and Belgium’s IMEC. But to be effective in creating new businesses, such centres need to have a focus on process improvement and manufacturing, as well as discovery science.

2. Big is beautiful in deep tech

The modern semiconductor industry is the epitome of “Deep Tech”: hard innovation, usually in the material or biological domains, demanding long-term R&D efforts and large capital investments. For all the romance of garage-based start-ups, in a business that demands up-front capital investments in the tens of billions of dollars, and annual research budgets on the scale of a medium-sized nation state’s, one needs serious, large-scale organisations to succeed.

The ownership and control of these organisations does matter. From a national point of view, it is important to have large firms anchored to the territory, whether by ownership or by significant capital investment that would be hard to undo, so ensuring the permanence of such firms is the legitimate business of government. Naturally, big firms often start as fast growing small ones, and the UK should make more effort to hang on to companies as they scale up.

3. Getting the timing right in the technology cycle

Technological progress is uneven – at any given time, one industry may be undergoing very dramatic technological change, while other sectors are relatively stagnant. There may be a moment when the state of technology promises a period of rapid development, and there is a matching market with the potential for fast growth. Firms that have the capacity to invest and exploit such “windows of opportunity”, to use David Sainsbury’s phrase, will be able to generate and capture a high and rising level of added value.

The timing of interventions to support such firms is crucial, and undoubtedly not easy, but history shows us that nations that are able to offer significant levels of strategic support at the right stage can see a material impact on their economic performance. The recent rapid economic growth of Korea and Taiwan is a case in point. These countries have gone beyond catch-up economic growth, to equal or surpass the UK, reflecting their reaching the technological frontier in high value sectors such as semiconductors. Of course, in these countries, there has been a much closer entanglement between the state and firms than UK policy makers are comfortable with.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019; a time series was then constructed using IMF real GDP growth data, and expressed per capita.
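The construction described in this caption is straightforward to sketch: anchor the level in a base year, roll it forward with real growth rates, and divide by population. The figures below are illustrative placeholders, not IMF data.

```python
def per_capita_series(base_year, base_gdp_ppp, growth, population):
    """Real GDP per capita, anchored to a base-year GDP level at PPP.

    growth maps each year to real GDP growth (a fraction) over the
    previous year; for simplicity, only years after base_year are used.
    """
    gdp = {base_year: base_gdp_ppp}
    for year in sorted(y for y in growth if y > base_year):
        gdp[year] = gdp[year - 1] * (1 + growth[year])
    return {year: gdp[year] / population[year] for year in gdp}

# Illustrative numbers only: $100 bn at PPP in 2019, 2% then 3% real growth,
# a constant population of 10 million
series = per_capita_series(2019, 100e9, {2020: 0.02, 2021: 0.03},
                           {2019: 10e6, 2020: 10e6, 2021: 10e6})
```

The same procedure, run with the IMF's actual growth and population series, yields the chart above.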

4. If you don’t choose sectors, sectors will choose you

In the UK, so-called “vertical” industrial strategy, in which explicit choices are made to support specific sectors, has long been out of favour. Making choices between sectors is difficult, and being perceived to have made the wrong choices damages the reputation of individuals and institutions. But even in the absence of an explicitly articulated vertical industrial strategy, policy choices will have the effect of favouring one sector over another.

In the 1990s and 2000s, the UK chose oil and gas and financial services over semiconductors, and indeed over advanced manufacturing more generally. Our current economic situation reflects, in part, that choice.

[1] Evaluation of the Alvey Programme for Advanced Information Technology. Ken Guy, Luke Georghiou, et al. HMSO for DTI and SERC (1991)

What should the UK do about semiconductors? Part 2: the past and future of the global semiconductor industry

This is the second post in a series of three, considering the background to the forthcoming UK Government Semiconductor Strategy.

In the first part, The UK’s place in the semiconductor world, I discussed the new global environment, in which a tenser geopolitical situation has revived a policy climate around the world which is much more favourable to large scale government interventions in the industry. I sketched the global state of the semiconductor industry and tried to quantify the UK’s position in the semiconductor world.

Here, I discuss the past and future of semiconductors, mentioning some of the important past interventions by governments around the world that have shaped the current situation, and I speculate on where the industry might be going in the future.

Finally, in the third part, I’ll ask where this leaves the UK, and speculate on what its semiconductor strategy might seek to achieve.

Active industrial policy in the history of semiconductors

The history of the global semiconductor industry involves a dance between governments around the world and private companies. In contrast to the convictions of Silicon Valley’s predominantly libertarian ideology, the industry wouldn’t have come into existence, and developed into the form we now know, without a series of major, and expensive, interventions by governments across the world.

On the other hand, there is an idea – to caricature the claims of some on the left – that it was governments that created the consumer electronic products we all rely on, while private industry has simply collected the profits. This view doesn’t recognise the massive efforts private industry has made, spending huge sums on the research and development needed to perfect manufacturing processes and bring products to market. Taking the USA alone, in 2022 the US government spent $6 billion on semiconductor R&D, compared to private industry’s $50.2 billion.

The semiconductor industry emerged in the 1960s in the USA, and in its early days more than half of its sales were to the US government. This was an early example of what we would now call “mission driven” innovation, motivated by a “moonshot project”. The “moonshot project” of the 1960s was driven by a very concrete goal – to be able to drop a half-tonne payload anywhere on the earth’s surface, with a precision measured in hundreds of meters.

Semiconductors were vital to achieve this goal – the first mass-produced computers based on integrated circuits were developed as the guidance systems of Minuteman intercontinental ballistic missiles. Of course, despite its military driving force, this “moonshot” produced important spin-offs – the development of space travel to the point at which a series of manned missions to the moon were possible, and increasing civilian applications of the much cheaper, more powerful and more reliable computers that solid-state electronics made possible.

The USA is where the semiconductor industry started, but it played a central role in three East Asian development miracles. The first to exploit this new technology was Japan. While the USA was exploiting the military possibilities of semiconductors, Japan focused on their application in consumer goods.

By the early 1980’s, though, Japanese companies were producing memory chips more efficiently than the USA, while Nikon took a leading position in the photolithography equipment used to make integrated circuits. In part the Japanese competitive advantage was driven by their companies’ manufacturing prowess and their attentiveness to customer needs, but the US industry complained, not entirely without justification, that their success was built on the theft of intellectual property, access to unfairly cheap capital, the protection of home markets by trade barriers, and government funded research consortia bringing together leading companies. These are recurring ingredients of industrial policy as executed by East Asian developmental states, first executed successfully in Taiwan and in Korea, and now being applied on a continental scale by China.

An increasingly paranoid USA’s response to this threat from Japan to its technological supremacy in semiconductors was to adopt some industrial strategy measures itself. The USA relaxed its stringent anti-trust laws to allow US companies to collaborate in R&D through a consortium called SEMATECH, half funded by the federal government. Sematech was founded in 1987, and in the first 5 years of its operation was supported by $500 m of Federal funding, leading to some new self-confidence for the US semiconductor industry.

Meanwhile both Korea and Taiwan had identified electronics as a key sector through which to pursue their export-focused development strategies. For Taiwan, a crucial institution was the Industrial Technology Research Institute, in Hsinchu. Since its foundation in 1973, ITRI had been instrumental in supporting Taiwan’s industrial base in moving closer to the technology frontier.

In 1985 the US-based semiconductor executive Morris Chang was persuaded to lead ITRI, using this position to create a national semiconductor industry, in the process spinning out the Taiwan Semiconductor Manufacturing Company. TSMC was founded as a pure-play foundry, contract manufacturing integrated circuits designed by others and focusing on optimising manufacturing processes. This approach has been enormously successful, and has led TSMC to its globally leading position.

Over the last decade, China has been aggressively promoting its own semiconductor industry. The 2015 “Made in China 2025” plan identified semiconductors as a key sector for the development of a high tech manufacturing sector, setting the target of 70% self-sufficiency by 2025, and a dominant position in global markets by 2045.

Cheap capital for developing semiconductor manufacturing was provided through the state-backed National Integrated Circuit Industry Investment Fund, amounting to some $47 bn (though it seems the record of this fund has been marred by corruption allegations). The 2020 directive “Several Policies for Promoting the High-quality Development of the Integrated Circuit Industry and Software Industry in the New Era” reinforced these goals with a package of measures including tax breaks, soft loans, R&D and skills policies.

While the development of the semiconductor industry in Taiwan and Korea was generally welcomed by policy-makers in the West, a changing geopolitical climate has led to much more anxiety about China’s aspirations. The USA has responded by an aggressive programme of bans on the exports of semiconductor manufacturing tools, such as high end lithography equipment, to China, and has persuaded its allies in Japan and the Netherlands to follow suit.

Industrial policy in support of the semiconductor industry hasn’t been restricted to East Asia. In Europe a key element of support has been the development of research institutes bringing together consortia of industries and academia; perhaps the most notable of these is IMEC in Belgium, while the cluster of companies that formed around the electronics company Philips in Eindhoven now includes ASML, the dominant player in equipment for extreme UV lithography.

In Ireland, policies in support of inward investment, including both direct and indirect financial inducements, and the development of institutions to support skills and innovation, persuaded Intel to base its European operations there. This has resulted in this small, formerly rural nation becoming the second largest exporter of integrated circuits in Europe.

In the UK, government support for the semiconductor industry has gone through three stages. In the postwar period, the electronics industry was a central part of the UK’s Cold War “Warfare State”, with government institutions like the Royal Signals and Radar Establishment at Malvern carrying out significant early research in compound semiconductors and optoelectronics.

The second stage saw a more conscious effort to support the industry. In the mid-to-late 1970’s, a realisation of the potential importance of integrated circuits coincided with a more interventionist Labour government. The government, through the National Enterprise Board, took a stake in a start-up making integrated circuits in South Wales, Inmos. The 1979 Conservative government was much less interventionist than its predecessor, but two important interventions were made in the early 1980’s.

The first was the Alvey Programme, a joint government/private sector research programme launched in 1983. This was an ambitious programme of joint industry/government research, worth £350m, covering a number of areas in information and communication technology. The results of this programme were mixed; it played a significant role in the development of mobile telephony, and laid some important foundations for the development of AI and machine learning. In semiconductors, however, the companies it supported, such as GEC and Plessey, were unable to develop a lasting competitive position in semiconductor manufacturing and no longer survive.

The second intervention arose from a public education campaign run by the BBC; a small Cambridge-based microcomputer company, Acorn, won the contract to supply BBC-branded personal computers in support of this programme. The large market created in this way later gave Acorn the headroom to move into the workstation market with reduced instruction set computing architectures, from which the microprocessor design house ARM was spun out.

In the third stage, the UK government adopted a market fundamentalist position. This involved a withdrawal of government support for applied research and the run-down of government laboratories like RSRE, and a position of studied indifference to the acquisition of UK technology firms by overseas rivals. Major UK electronics companies, such as GEC and Plessey, collapsed following some ill-judged corporate misadventures. Inmos was sold, first to Thorn, then to the Franco-Italian group SGS-Thomson. Inmos left a positive legacy, with many who had worked there going on to participate in a Bristol-based cluster of semiconductor design houses. The Inmos manufacturing site survives as Newport Wafer Fab, currently owned by the Netherlands-based, Chinese-owned company Nexperia, though its future is uncertain following a UK government ruling that Nexperia should divest its shareholding on national security grounds.

This focus on the role of interventions by governments across the world at crucial moments in the development of the industry shouldn’t overshadow the huge investments in R&D made by private companies around the world. A sense of the scale of these investments is given by the figure below.

R&D expenditure in the microelectronics industry, showing Intel’s R&D expenditure, and a broader estimate of world microelectronics R&D including semiconductor companies and equipment manufacturers. Data from the “Are Ideas Getting Harder to Find?” dataset on Chad Jones’s website. Inflation corrected using the US GDP deflator.

The exponential increase in R&D spending up to 2000 was driven by a similarly exponential increase in worldwide semiconductor sales. In this period, there was a remarkable virtuous circle of increasing sales, leading to increasing R&D, leading in turn to very rapid technological developments, driving further sales growth. In the last two decades, however, growth in both sales and R&D spending has slowed down.


Global semiconductor sales in billions of dollars. Plot from “Quantum Computing: Progress and Prospects” (2019), National Academies Press, which uses data from the Semiconductor Industry Association.

Possible futures for the semiconductor industry

The rate of technological progress in integrated circuits between 1984 and 2003 was remarkable, and unprecedented in the history of technology. This drove an exponential increase in microprocessor computing power, which grew by more than 50% a year. This growth arose from two factors. As is well known, the number of transistors on a silicon chip grew exponentially, as predicted by Moore’s Law. This was driven by many unsung, but individually remarkable, technological innovations in lithography (to name just a couple of examples, phase-shift lithography and chemically amplified resists), allowing smaller and smaller features to be manufactured.

The second factor is less well known: through a phenomenon known as Dennard scaling, transistors operate faster as they get smaller. Dennard scaling reached its limit around 2004, as the heat generated by microprocessors became a limiting factor. After 2004, microprocessor computing power increased at a slower rate, around 23% a year, driven by increasing the number of cores and parallelising operations. This approach itself ran into diminishing returns after 2011.
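To make these growth rates concrete, a constant annual growth rate r implies a doubling time of log(2)/log(1+r). A quick calculation, using the rates quoted above:

```python
import math

def doubling_time(annual_growth):
    """Years needed for performance to double at a constant growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.50), 1))  # ~1.7 years: the Dennard-scaling era
print(round(doubling_time(0.23), 1))  # ~3.3 years: the multicore era
```

So the post-2004 slowdown roughly doubled the time it takes for computing power to double.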

Currently we are seeing continued reductions in feature sizes, together with new transistor designs, such as finFETs, which in effect allow more transistors to be fitted into a given area by building them side-on. But further increases in computer power are increasingly being driven by optimising processor architectures for specific tasks, for example graphical processing units and specialised chips for AI, and by simply multiplying the number of microprocessors in the server farms that underlie cloud computing.

Slowing growth in computer power. The growth in processor performance since 1988. Data from figure 1.1 in Computer Architecture: A Quantitative Approach (6th edn) by Hennessy & Patterson.

It’s remarkable that, despite the massive increase in microprocessor performance since the 1970’s, and major innovations in manufacturing technology, the underlying mode of operation of microprocessors remains the same. This is known by the shorthand of CMOS, for Complementary Metal Oxide Semiconductor. Logic gates are constructed from complementary pairs of field effect transistors consisting of a channel in heavily doped silicon, whose conductance is modulated by the application of an electric field across an insulating oxide layer from a metal gate electrode.

CMOS isn’t the only way of making a logic gate, and it’s not obvious that it is the best one. One severe limitation on our computing is its energy consumption. This matters at a micro level; the heat generated by a laptop or mobile phone is very obvious, and it was problems of heat dissipation that underlay the slowdown in the growth in microprocessor power around 2004. It’s also significant at a global level, where the energy used by cloud computing is becoming a significant share of total electricity consumption.

There is a physical lower limit to the energy that computing uses – this is the Landauer limit on the energy cost of a single logical operation, a consequence of the second law of thermodynamics. Our current technology consumes more than three orders of magnitude more energy per operation than this theoretical minimum, so there is room for improvement. Somewhere in the universe of technologies that don’t exist, but are physically possible, lies a superior computing technology to CMOS.
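The Landauer limit itself is easy to evaluate: erasing one bit of information costs at least kT ln 2. A quick calculation at room temperature (the per-operation CMOS figure in the comment is an illustrative order of magnitude, not a measured value):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_k):
    """Minimum energy to erase one bit of information: kT ln 2."""
    return K_B * temperature_k * math.log(2)

e_min = landauer_limit(300)  # ~2.9e-21 J at room temperature
# A present-day CMOS logic operation dissipates very roughly 1e-17 J or
# more (an illustrative order of magnitude), i.e. three to four orders
# of magnitude above this thermodynamic floor.
print(f"Landauer limit at 300 K: {e_min:.2e} J per bit")
```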

Many alternative forms of computing have been tried out in the laboratory. Some involve materials other than silicon: compound semiconductors, or new forms of carbon like nanotubes and graphene. In some, the physical embodiment of information is not electric charge, but spin. The idea of using individual molecules as circuit elements – molecular electronics – has a long and somewhat chequered history. None of these approaches has yet made a significant commercial impact; incumbent technologies are always hard to displace. CMOS and its related technologies amount to a deep nanotechnology implemented at a massive scale; the huge investment in this technology has in effect locked us into a particular technology path.

There are alternative, non-semiconductor based, computing paths that are worth mentioning, because they may become important in the future. One is to copy biology; our own brains deliver enormous computing power at remarkably low energy cost, with an architecture that is very different from the von Neumann architecture that human-built computers follow, and a basic unit that is molecular. Various radical approaches to computing take some inspiration from biology, whether that is the new architectures for CMOS that underlie neuromorphic computing, or entirely molecular approaches based on DNA.

Quantum computing, on the other hand, offers the potential for another exponential leap forward in computing power – in principle. Many practical barriers remain before this potential can be turned into practice, however, and this is a topic for another discussion. Suffice it to say that, on a timescale of a decade or so, quantum computers will not replace conventional computers for anything more than some niche applications, and in any case they are likely to be deployed in tandem with conventional high performance computers, as accelerators for specific tasks, rather than as general purpose computers.

Finally, I should return to the point that semiconductors aren’t just valuable for computing; the field of power electronics is likely to become more and more important as we move to a net zero energy system. We will need a much more distributed and flexible energy grid to accommodate decentralised renewable sources of electricity, and this needs solid-state power electronics capable of handling very high voltages and currents – think of replacing house-sized substations with suitcase-sized solid-state transformers. Widespread uptake of electric vehicles and the need for widely available rapid charging infrastructure will place further demands on power electronics. Silicon is not well suited to these applications, which require wide-bandgap semiconductors such as silicon carbide, diamond, and compound semiconductors like gallium nitride.

Sources

Chip War: The Fight for the World’s Most Critical Technology, by Chris Miller, is a great overview of the history of this technology.

Semiconductors in the UK: Searching for a strategy. Geoffrey Owen, Policy Exchange, 2022. Very good on the history of the UK industry.

To Every Thing There is a Season – lessons from the Alvey Programme for Creating an Innovation Ecosystem for Artificial Intelligence, by Luke Georghiou. Reflections on the Alvey Programme by one of the researchers who carried out its official evaluation.

Are Ideas Getting Harder to Find?, Bloom, Jones, Van Reenen and Webb. American Economic Review (2020). An influential paper on diminishing rates of return on R&D, taking the semiconductor industry as a case study.

Quantum Computing: Progress and Prospects (2019), National Academies Press.

Up next: What should the UK do about semiconductors? Part 3: towards a UK semiconductor strategy

What should the UK do about semiconductors? Part 1: the UK’s place in the semiconductor world

The UK government is currently in the process of writing a new strategy for semiconductors. This is the first of a series of three blogposts setting out the context for this strategy.

In this first part, I discuss the new global environment, in which a tenser geopolitical situation has revived a policy climate around the world which is much more favourable to large scale government interventions in the industry. I’ll sketch the global state of the semiconductor industry and try to quantify the UK’s position in the semiconductor world.

In the second part, I’ll discuss the past and future of semiconductors, mentioning some of the important past interventions by governments around the world that have shaped the current situation, and I’ll speculate on where the industry might be going in the future.

Finally, in the third part, I’ll ask where this leaves the UK, and speculate on what its semiconductor strategy might seek to achieve.

As recent events have shown, the semiconductor industry is one of the most strategically important industries in the world, so it’s going to be very important for the UK government to get its strategy right. But there are more general principles at stake. We’re at a moment when a worldwide consensus behind the ideas of free trade and laissez-faire economics is being rapidly replaced in the major economies of the world by much more interventionist, and assertively nationalist, industrial policies. This isn’t comfortable territory for the British state, so how it responds to this test case will be very telling.

War, Semiconductors and the CHIPS act

It’s been reported that Russia has been dismantling washing machines to extract their integrated circuits, for use in missiles. True or not, this story illustrates two important features of the modern world. Integrated circuits – silicon chips – are now ubiquitous and indispensable for modern living – they’re not just to be found in computers and mobile phones; they’re in automobiles, consumer durables, even toys. And modern precision-guided weapon systems depend on them, so with a European war entering its second year, their strategic importance couldn’t be more obvious.

If demand for integrated circuits and other semiconductors is ubiquitous, we’ve also been reminded that their supply isn’t secure. The pandemic led to severe supply chain disruptions, in turn leading to major losses of production in the global automobile industry. The manufacture of the most technically advanced integrated circuits is concentrated in a single company – TSMC – located in the contested territory of Taiwan. This dependence means that, if the People’s Republic of China invades Taiwan, the consequences to the world economy would be disastrous.

This is the context for the USA’s CHIPS and Science Act – a hugely significant, and expensive, government intervention to rebuild the USA’s manufacturing capacity in the most advanced semiconductors. Underlying this is a serious attempt to restore its own technological supremacy – and specifically, to maintain its technological superiority over China.

This is the return, at scale, of industrial strategy. The primary driving force, as it was in the 1950’s and 60’s, is geopolitics, but the economic and political dimensions are important too, with an emphasis on restoring manufacturing – and the good jobs it provides – to communities that have suffered from deindustrialisation. The Act provides for expenditures, over five years, of $39 billion on incentives to return more semiconductor manufacturing to the USA, $13.2 billion for additional research and development, and $10 billion to create regional innovation hubs in economically lagging parts of the country.

It’s worth stressing what an ideological about-turn this represents. An economic advisor to the first President Bush reputedly said “Potato chips, computer chips, what’s the difference? A hundred dollars of one or a hundred dollars of the other is still a hundred dollars”. This is a marvellously succinct expression of the neoliberal argument against sector-based industrial strategy. It’s now clear how naive this view was. Crisps weren’t about to see the most rapid period of technological progress in history, propelling those countries like Taiwan and Korea that took advantage of this opportunity, from middle income economies, into the ranks of rich countries at the technological frontier. And Frito-Lay doesn’t make missiles.

The European Union has responded with its own European Chips Act. This includes an €11 billion “Chips for Europe Initiative”, together with further coordination of R&D and education and skills initiatives. Most significantly, it proposes a relaxation of state aid rules, allowing member states to directly subsidise new manufacturing facilities in Europe.

How should the UK respond to this new environment? The government is preparing a Semiconductor Strategy, but this has been repeatedly delayed.

The global semiconductor industry

What are the products of the global semiconductor industry? The most high profile are enormously complex integrated circuits that power our personal computers, gaming stations and mobile phones, as well as driving the giant server farms that underlie cloud computing. The most important component of modern electronics is the transistor, a solid-state switch. A few transistors can be combined to make a logic gate – the basic unit of a computer; the way this is done is described as “complementary metal oxide semiconductor” – hence CMOS. An integrated circuit combines a number of transistors on a single piece of silicon – a chip. Different designs of integrated circuits produce central processing units (CPUs), graphical processing units (GPUs), and solid state memory.
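The complementary-pair idea can be caricatured in a few lines of Python (a toy truth-table model, not a circuit simulation): in a CMOS NAND gate, p-type transistors in parallel pull the output up to the supply when either input is 0, while n-type transistors in series pull it down to ground only when both inputs are 1.

```python
def cmos_nand(a: int, b: int) -> int:
    """Toy model of a CMOS NAND gate built from a complementary FET pair."""
    pull_down = bool(a) and bool(b)   # two n-FETs in series to ground
    pull_up = (not a) or (not b)      # two p-FETs in parallel to supply
    assert pull_up != pull_down       # exactly one network conducts
    return 1 if pull_up else 0

def cmos_not(a: int) -> int:
    """A CMOS inverter: one p-FET and one n-FET."""
    return 1 - a

# NAND is universal, so any logic function can be composed from it:
def cmos_and(a: int, b: int) -> int:
    return cmos_not(cmos_nand(a, b))
```

The assertion captures what makes CMOS so energy-efficient: because exactly one of the two networks conducts at any time, no current flows from supply to ground in a steady state.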

The more transistors the chip has, the more computing power or the bigger the memory, so the history of microelectronics is a story of miniaturisation, with each generation of chips having more transistors on a single integrated circuit, as expressed by Moore’s law. A modern CPU (such as Apple’s M1, made by TSMC) has 16 billion transistors, each of which has dimensions measured in nanometers. These are made by the most sophisticated and precise manufacturing processes in the world, through the successive deposition of layers of different materials, at each stage etching the layers with patterns that define the components.
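Those figures give a feel for the cumulative effect of Moore's Law. Comparing the M1's 16 billion transistors with the first microprocessor, Intel's 4004 of 1971 with roughly 2,300 transistors (a figure supplied here for comparison, not taken from the text above):

```python
import math

# Transistor counts: Apple's M1 (2020, quoted above) vs Intel's 4004
# (1971, ~2,300 transistors -- an outside figure, used for comparison)
doublings = math.log2(16e9 / 2300)
years_per_doubling = (2020 - 1971) / doublings
print(f"{doublings:.1f} doublings, one roughly every "
      f"{years_per_doubling:.1f} years")
```

The result, a doubling roughly every two years sustained over five decades, is close to the canonical statement of Moore's Law.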

Only three companies in the world have the capability to operate at this technological frontier: the USA’s Intel, Korea’s Samsung, and Taiwan’s TSMC. In recent years, progress at Intel has stumbled, and TSMC has taken a commanding lead in manufacturing the highest-performance integrated circuits. TSMC focuses purely on manufacturing, making integrated circuits to the designs of so-called fabless companies, such as Nvidia. Intel, on the other hand, both designs its own chips and manufactures them.

The scale of capital investment required to make these advanced circuits is breathtaking. TSMC is reported to have invested $60 billion in its facilities to manufacture chips at the 3 nm and 5 nm nodes. TSMC has been incentivised by the US government to establish production in Arizona, at a cost of $40 bn. These huge capital sums reflect the high cost of the ultra-sophisticated, high precision equipment required to pattern these circuits on the nanoscale. The frontier processes rely on the extreme-UV lithography systems made by the Dutch company ASML, a single unit of which may cost $150 million. Other important centres of equipment production include Japan and the USA.

There is still substantial demand for less advanced integrated circuits, for applications in cars, consumer durables, industrial machinery, weapons systems and much else. In addition to the three industry leaders, companies like Global Foundries, STMicro and NXP operate manufacturing plants in the USA, Europe and Singapore. China’s leading semiconductor company, Semiconductor Manufacturing International Corporation, falls into this category, though it has aspirations to reach the technological frontier, and is supported in this goal by China’s government.

Not all semiconductors are silicon. Other materials – compound semiconductors, such as Gallium Arsenide and Gallium Nitride – are particularly important for optoelectronics: the business of converting electricity to light and back again. These are the materials from which solid-state lasers and light-emitting diodes are made; familiar in everyday life as the scanners in supermarkets and low-energy light bulbs, but no less importantly the technologies which make the internet possible, converting electronic signals into the optical pulses that transmit information at huge rates through optical fibres.

The primary driving force for innovation in semiconductors has been information and communication technology – the desire for more powerful computers and the higher rates of data transmission that make possible today’s internet. But information processing isn’t the only important use of semiconductors. In power electronics, the focus is on the switching, amplifying and transformation of the much higher currents needed to drive electric motors. These technologies are rapidly growing in importance; the transition to a net zero greenhouse gas energy economy is going to be driven by the replacement of internal combustion engines by electric motors. The growth of electric vehicles, the growing importance of renewable energy, and the need for energy storage will all drive the need to efficiently handle and transform high-power electricity using light and efficient solid-state devices.

The UK’s place in the semiconductor world

The UK is not a big player in the global semiconductor industry. Its exports of integrated circuits, worth $1.63 bn, represent 0.24% of the world’s trade; insignificant compared to the world’s leaders, Taiwan, China and Korea, whose exports are worth $138 bn, $120 bn, and $89.1 bn respectively. Outside the Far East, the USA exports $44.2 bn; it is this relative weakness compared to the East Asian countries that has prompted the measures of the CHIPS Act. In Europe, the leading exporters are Germany and Ireland, at $12.8 bn and $11.2 bn respectively.

As mentioned above, the manufacture of integrated circuits is hugely capital intensive, so it’s important to look at the suppliers of the equipment used to make chips. The export trade here is dominated by Japan, the Netherlands and the USA, worth $12 bn, $11.7 bn, and $10.7 bn respectively. The UK has 1.06% of the world market, with exports worth $497m.

One other important component of the supply chain for chip manufacture is the chemicals and materials needed. These include the silicon single crystals from which the wafers are made, amongst the purest substances ever made by man; a wide range of industrial gases, solvents and reagents, all supplied at very high purity grades; and highly optimised speciality chemicals – e.g. the materials that make up the photoresists. This sector is dominated by Japan, with exports worth $4.23 bn, representing 29.5% of the world trade. Here the UK exports $212 m, a 1.48% share of the world market.

It’s worth reflecting on these figures in the context of the UK’s overall trade position. The total value of its exports in 2020 was $700 bn, made up of $371 bn in products and $329 bn in services, so these three semiconductor-related sectors amount to about 0.6% of its total product exports. But as these figures emphasise, service sector exports are particularly important for the UK, and this bigger story is mirrored in the semiconductor sector.
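As a quick sanity check, here is a back-of-envelope calculation using only the trade numbers quoted above (all 2020 OEC figures; the variable names are mine, for illustration):

```python
# UK semiconductor-related exports, 2020, in $bn (figures quoted above)
integrated_circuits = 1.63       # integrated circuit exports
chip_making_equipment = 0.497    # chip manufacturing equipment exports
chemicals_materials = 0.212      # specialist chemicals and materials exports

uk_product_exports = 371.0       # total UK goods exports, 2020, in $bn

semiconductor_total = integrated_circuits + chip_making_equipment + chemicals_materials
share = 100 * semiconductor_total / uk_product_exports

print(f"Semiconductor-related exports: ${semiconductor_total:.2f} bn")  # $2.34 bn
print(f"Share of UK product exports:   {share:.2f}%")                   # 0.63%
```

The point is stark: the UK’s semiconductor-related goods trade is small even relative to its own export base.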

The most significant semiconductor company in the UK doesn’t make any semiconductors – ARM designs chips, deriving its income from royalties and licensing fees for its intellectual property. Its revenues of $2.7 bn in 2021 would have made a significant contribution to the UK’s service exports (2020 UK service exports included $21.3 bn in royalties and license fees). Smaller companies, such as Imagination and Graphcore, are similarly focused on design rather than manufacturing.

In recent years, the question of ownership of ARM has achieved prominence. Originally a public company listed on the London Stock Exchange, ARM was acquired by the Japanese finance house SoftBank in 2016. A proposed sale to the US firm Nvidia collapsed last year after concerns from regulators in the UK, the USA and the EU that the acquisition would seriously reduce competition. SoftBank remains keen to sell the company, so the future ownership and control of ARM remains in question.

Sources

All trade figures 2020 numbers, from the Observatory of Economic Complexity.

Up next: What should the UK do about semiconductors? Part 2: the past and future of the global semiconductor industry

“Science Superpower: the UK’s Global Science Strategy beyond Horizon Europe”

Last Wednesday the Science Minister, George Freeman MP, gave a wide ranging speech with this title, on the current state of UK science policy at the think-tank Onward. A video of the speech can be watched on YouTube here. As a response to the speech, there was a panel discussion the following day, featuring Prof Sir John Bell, Lord David Willetts, James Phillips, Tabitha Goldstaub, Priya Guha and myself, chaired by Onward’s Adam Hawksbee. This is also available to watch on YouTube. This, more or less, is what I said in my opening statement.

Hello. I’m Richard Jones, talking to you from Oldham Town Hall – which I think is very on-brand for Onward, and indeed for myself…

I want to start where the Minister finished – what are we talking about, when we talk about being a “Science Superpower”? This is part of that broader question of how the UK finds its place in the world.

The UK represents a little less than 3% of the world’s high tech economy. It’s not the USA, it’s not China. But the UK does have a real potential competitive advantage in the strength of its science base – it is genuinely outperforming, at least (and this qualification is important) when it is judged on purely academic metrics.

The challenge – and this is the “Innovation Nation” aspect that the Minister stresses – is applying that science strength to the critical issues the UK – and the world – faces. These challenges include:

  • The UK’s more than decade-long stagnation in productivity growth;
  • The wrenching economic transition we face to achieve a net zero energy economy;
  • Ensuring good health outcomes for our citizens;
  • National security in an increasingly dangerous world.

To begin with productivity, it can’t be stressed too much how the stagnation of productivity growth after 2008 underlies pretty much all the difficulties the country faces – stagnant wages, the persistent fiscal deficit, the difficulties we’re seeing in funding public services to the standard people expect.

As the Minister said, to get economic growth back we need to be accelerating progress in high tech sectors.

But there’s a paradox here – the economist Diane Coyle, from the Productivity Institute, has analysed the productivity slowdown, and finds the biggest contributors to the slowdown are precisely those high-tech sectors that we think should be our strength. [Source: Coyle & Mei, Diagnosing the UK Productivity Slowdown: Which Sectors Matter and Why?]

In pharmaceuticals, productivity growth was 0.6% a year on average between 1998 and 2008. But between 2009 and 2019, pharma industry productivity actually fell, by 0.2% a year on average.
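To see what that reversal means over a decade, here is a minimal compounding sketch (assuming, purely for illustration, that the quoted average rates held constant in each period):

```python
# Compound the average annual productivity growth rates quoted above
# for UK pharmaceuticals over a ten-year span (illustrative only).
rate_1998_2008 = 0.006    # +0.6% per year on average
rate_2009_2019 = -0.002   # -0.2% per year on average

factor_before = (1 + rate_1998_2008) ** 10
factor_after = (1 + rate_2009_2019) ** 10

print(f"1998-2008: productivity multiplied by {factor_before:.3f}")  # ~1.062
print(f"2009-2019: productivity multiplied by {factor_after:.3f}")   # ~0.980
```

Even these small annual differences compound into a meaningful divergence: roughly a 6% gain over one decade turning into a 2% loss over the next.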

So, we need to do things differently.

Money is important, and the government’s spending uplift is real, significant in scale, and to be welcomed.

I welcome ARIA as a chance to try and experiment with different funding mechanisms.

But from the perspective of Oldham, the biggest and most welcome change the Minister talked about was the new focus on place and clusters across the UK.

The UK is two nations – a high performing Northern European economy in the Greater Southeast; and beyond, in the North, the Midlands and Wales, places with economies comparable to southern Italy or Portugal. Our big cities – like Birmingham, Greater Manchester and Glasgow – have productivity below the UK average. This isn’t normal – in most developed countries, it’s the big cities that drive the national economy. Why can’t Manchester be more like Munich, a similar-sized city that’s one of Germany’s innovation hubs? If it were, it would generate about £40 billion a year more value for the UK.

This is a huge waste of potential. We need to identify nascent clusters, and work with those places to build up their innovation capacity, build industrial R&D, attract in investment from outside, and give people in places like Oldham the opportunity that the Minister talks about to take part in this high tech economy.

But money isn’t everything. For example, we do health research to support the health of our citizens as well as to create economic value. The Oxford vaccine was a brilliant example of this.

But even pre-pandemic, a man born in Oldham in 2016-2018 could expect to live in good health for 58 years. For a man in Oxfordshire, healthy life expectancy was 68.3 years! [Source: Health state life expectancy at birth and at age 65 years by local areas, UK, ONS.]

Ten lost years for Oldhamites! The human cost of those years of ill-health and premature death is huge. But so is the economic cost – this ill-health is a major contributor to the productivity gap in Oldham and places like it, all across the UK.

That’s something R&D should do something about – this truly would be “innovation for the nation”.

We have to do things differently. We need to apply our science to address the big strategic problems the UK faces, and we need that to be an effort that the whole nation takes part in – and benefits from.

None of this should take away from the power of great research centres like Cambridge and Oxford – that really is a supercluster, a massive asset for the nation.

The question is, how can we build on that and spread the benefits across the rest of the country? There are plenty of great spin-outs from Cambridge and Oxford. We need them to scale-up in the UK, and not feel they have to move to Germany, or California, to succeed. So why shouldn’t their first factory be in Rochdale or Rotherham, or Dudley or Stoke-on-Trent?

So yes, let’s aspire to be an innovation nation, but to build that, we need innovation cities and innovation regions all across the UK.

For (much) more on this, see my Productivity Institute paper Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape.

2022 Books roundup

2022 was a thoroughly depressing year; here are some of the books I’ve read that have helped me (I hope) to put last year’s world events in some kind of context.

Helen Thompson could not have been luckier – or, perhaps, more farsighted – in the timing of her book’s release. Disorder: hard times in the 21st century is a survey of the continuing influence of fossil fuel energy on geopolitics, so couldn’t be more timely, given the impact of Russia’s invasion of Ukraine on natural gas and oil supplies to Western Europe and beyond. The importance of securing national energy supplies runs through the history of the 20th century, in both peace and war; we continue to see examples of the deeply grubby political entanglements the need for oil has drawn Western powers into. All this, by the way, provides a strong secondary argument, beyond climate change, for accelerating the transition to low carbon energy sources.

The presence of large reserves of oil in a country isn’t an unmixed blessing – we’re growing more familiar with the idea of a “resource curse”, blighting both the politics and long term economic prospects of countries whose economies depend on exploiting natural resources. Alexander Etkind’s book Nature’s Evil: a cultural history of natural resources is a deep history of how the materials we rely on shape political economies. It has a Eurasian perspective that is very timely, but less familiar to me, and takes the idea of a resource curse much further back in time, covering furs and peat as well as the more familiar story of oil.

With more attention starting to focus on the world’s other potential geopolitical flashpoint – the Taiwan Straits – Chris Miller’s Chip War: the fight for the world’s most critical technology – is a great explanation of why Taiwan, through the semiconductor company TSMC, came to be so central to the world’s economy. This book – which has rightly won glowing reviews – is a history of the ubiquitous chip – the silicon integrated circuits that make up the memory and microprocessor chips at the heart of computers, mobile phones – and, increasingly, all kinds of other durable goods, including cars. The focus of the book is on business history, but it doesn’t shy away from the crucial technical details – the manufacturing processes and the tools that enable them, notably the development of extreme UV lithography and the rise of the Dutch company ASML. Excellent though the book is, its business focus did make me reflect that (as far as I’m aware) there’s a huge gap in the market for a popular science book explaining how these remarkable technologies all work – and perhaps speculating on what might come next.

Slouching Towards Utopia: an economic history of the 20th century, by Brad DeLong, is an elegy for a period of unparalleled technological advance and economic growth that seems, in the last decade, to have come to an end. For DeLong, it was the development of the industrial R&D laboratory towards the end of the 19th century that launched a long century, from 1870-2010, of unparalleled growth in material prosperity. The focus is on political economy, rather than the material and technological basis of growth (for the latter, Vaclav Smil’s pair of books Creating the Twentieth Century and Transforming the Twentieth Century are essential). But there is a welcome focus on the material substrate of information and communication technology rather than the more visible world of software (in contrast, for example, to Robert Gordon’s book The Rise and Fall of American Growth, which I reviewed rather critically here).

Though I am very sympathetic to many of the arguments in the book, ultimately it left me somewhat disappointed. Having rightly stressed the importance of industrial R&D as the driver of the technological change, this theme was not really strongly developed, with little discussion of the changing institutional landscape of innovation around the world. I also wish the book had a more rigorous editor – the prose lapses on occasion into self-indulgence and the book would have been better had it been a third shorter.

In contrast, Vaclav Smil’s latest book – How the World Really Works: A Scientist’s Guide to Our Past, Present and Future – clearly had an excellent editor. It’s a very compelling summary of a couple of decades of Smil’s prolific output. It’s not a boast about my own learning to say that I knew pretty much everything in this book before I read it; simply a consequence of having read so many of Smil’s previous, more academic books. The core of Smil’s argument is to stress, through quantification, how much we depend on fossil fuels, for energy, for food (through the Haber-Bosch process), and for the basic materials that underlie our world – ammonia, plastics, concrete and steel. These chapters are great, forceful, data-heavy and succinct, though the chapter on risk is less convincing.

Despite the editor, Smil’s own voice comes through strongly, sceptical, occasionally curmudgeonly, laying out the facts, but prone to occasional outbreaks of scathing judgement (he really dislikes SUVs!). Perhaps he overdoes the pessimism about the speed with which new technology can be introduced, but his message about the scale and the wrenching impact of the transition we need to go through, to move away from our fossil fuel economy, is a vital one.

Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape

A revised and tidied up version of my blogpost series, An Index of Issues in UK Science and Innovation Policy, has now been published as a Productivity Insights Paper under the auspices of The Productivity Institute. My thanks to Bart van Ark for encouraging me to do this, and to Krystyna Rudzki for editing the draft.

Download the PDF here: Science and innovation policy for hard times: an overview of the UK’s Research and Development landscape

Science and innovation policy for hard times

This is the concluding section of my 8-part survey of the issues facing the UK’s science and innovation system, An Index of Issues in UK Science and Innovation Policy.

The earlier sections were:
1. The Strategic Context
2. Some Overarching Questions
3. The Institutional Landscape
4. Science priorities: who decides?
5. UK Research and Innovation
6. UK Government Departmental Research
7. Horizon Europe (and what might replace it) and ARIA

8.1. A “science superpower”? Understanding the UK’s place in the world.

The idea that the UK is a “science superpower” has been a feature of government rhetoric for some time, most recently repeated in the Autumn Statement speech. What might this mean?

If we measure superpower status by the share of world resources devoted to R&D (both public and private) by single countries, there are only two science superpowers today – the USA and China, with a 30% and 24% share of science spending (OECD MSTI figures for 2019 adjusted for purchasing power parity, including all OECD countries plus China, Taiwan, Russia, Singapore, Argentina and Romania). If we take the EU as a single entity, that might add a third, with a 16% share (2019 figure, but excluding UK). The UK’s share is 2.5% – a respectable medium-sized science power, smaller than Japan (8.2%) and Korea (4.8%), and sitting between France (3.1%) and Canada (1.4%).

It’s often argued, though, that the UK achieves better results from a given amount of science investment than other countries. The primary outputs of academic science are scientific papers, and we can make an estimate of a paper’s significance by asking how often it is cited by other papers. So another measure of the UK’s scientific impact – the most flattering to the UK, it turns out – is to ask what fraction of the world’s most highly cited papers originate from the UK.

By this measure, the two leading scientific superpowers are, once again, the USA and China, with 32% and 24% shares respectively; on this measure the EU collectively, at 29%, does better than China. The UK scores well by this measure, at 13.4%, doing substantially better than higher spending countries like Japan (3.1%) and Korea (2.7%).

A strong science enterprise – however measured – doesn’t necessarily by itself translate into wider kinds of national and state power. Before taking the “science superpower” rhetoric seriously, we need to ask how these measures of scientific activity and impact translate into other measures of power, hard or soft.

Even though measuring the success of our academic enterprise by its impact on other academics may seem somewhat self-referential, it does have some consequences in supporting the global reputation of the UK’s universities. This attracts overseas students, in turn bringing three benefits: a direct and material economic contribution to the balance of payments, worth £17.6 bn in 2019; a substantial subsidy to the research enterprise itself; and, for those students who stay, a source of talented immigrants who subsequently contribute positively to the economy.

The transnational nature of science is also significant here; having a strong national scientific enterprise provides a connection to this wider international network and strengthens the nation’s ability to benefit from insight and discoveries made elsewhere.

But how effective is the UK at converting its science prowess into hard economic power? One measure of this is the share of world economic value added in knowledge and technology intensive businesses. According to the USA’s NSF, the UK’s share of value added in this set of high productivity manufacturing and services industries that rely on science and technology is 2.6%. We can compare this with the USA (25%), China (25%), and the EU (18%). Other comparator countries include Japan (7.9%), Korea (3.7%) and Canada (1.2%).

Does it make sense to call the UK a science superpower? Both on the input measure of the fraction of the world’s science resources devoted to science, and on the size of the industry base this science underpins, the UK is an order of magnitude smaller than the world leaders. In the historian David Edgerton’s very apt formulation, the UK is a large Canada, not a small USA.

Where the UK does outperform is in the academic impact of its scientific output. This does confer some non-negligible soft power benefits of itself. The question to ask now is whether more can be done to deploy this advantage to address the big challenges the nation now faces.

8.2. The UK can’t do everything

The UK’s current problems are multidimensional and its resources are constrained. With less than 3% of the world’s research and development resources, no matter how effectively these resources are deployed, the UK will have to be selective in the strategic choices it makes about research priorities.

In some areas, the UK may have some special advantages, either because the problems/opportunities are specific to the UK, or because history has given the UK a comparative advantage in a particular area. One example of the former might be the development of technologies for exploiting deep-water floating offshore wind power. In the latter category, I believe the UK does retain an absolute advantage in researching nuclear fusion power.

In other areas, the UK will do best by being part of larger transnational research efforts. At the applied end, these can be in effect led by multinational companies with a significant presence in the UK. Formal inter-governmental collaborations are effective in areas of “big science” – which combine fundamental science goals with large scale technology development. For example, in high energy physics the UK has an important presence in CERN, and in radio astronomy the Square Kilometer Array is based in the UK. Horizon Europe offered the opportunity to take part in trans-European public/private collaborations on a number of different scales, and if the UK isn’t able to associate with Horizon Europe other ways of developing international collaborations will have to be built.

But there will remain areas of technology where the UK has lost so much capability that the prospect of catching up with the world frontier is probably unrealistic. Perhaps the hardware side of CMOS silicon technology is in this category (though significant capability in design remains).

8.3. Some pitfalls of strategic and “mission driven” R&D in the UK

One recently influential approach to defining research priorities links them to large-scale “missions”, connected to significant areas of societal need – for example, adapting to climate change, or ensuring food security. This has been a significant new element in the design of the current EU Horizon Programme (see EU Missions in Horizon Europe).

For this approach to succeed, there needs to be a match between the science policy “missions” and a wider, long term, national strategy. In my view, there also needs to be a connection to the specific and concrete engineering outcomes that are needed to make an impact on wider society.

In the UK, there have been some moves in this direction. The research councils in 2011 collectively defined six major cross-council themes (Digital Economy; Energy; Global Food Security; Global Uncertainties; Lifelong Health and Wellbeing; Living with Environmental Change), and steered research resources into (mostly interdisciplinary) projects in these areas. More recently, UKRI’s Industrial Strategy Challenge Fund was funded from a “National Productivity Investment Fund” introduced in the 2016 Autumn Statement and explicitly linked to the Industrial Strategy.

These previous initiatives illustrate three pitfalls of strategic or “mission driven” R&D policy.

  • The areas of focus may be explicitly attached to a national strategy, but that strategy proves to be too short-lived, and the research programmes it inspires outlive the strategy itself. The Industrial Strategy Challenge Fund was linked to the 2017 Industrial Strategy, but this strategy was scrapped in 2021, despite the fact that the government was still controlled by the same political party.
  • Research priorities may be connected to a lasting national priority, but the areas of focus within that priority are not sufficiently specified. This leads to a research effort that risks being too diffuse, lacking a commitment to a few specific technologies and not sufficiently connected to implementation at scale. In my view, this has probably been the case in too much research in support of low-carbon energy.
  • In the absence of a well-articulated strategy from central government, agencies such as Research Councils and Innovate UK guess what they think the national strategy ought to be, and create programmes in support of that guess. This then risks lacking legitimacy, longevity, and wider join-up across government.

In summary, mission driven science and innovation policy needs to be informed by a carefully thought-through national strategy that commands wide support, is applied across government, and is sustained over the long term.

8.4. Getting serious about national strategy

The UK won’t be able to use the strengths of its R&D system to solve its problems unless there is a settled, long-term view about what it wants to achieve. What kind of country does the UK want to be in 2050? How does it see its place in the world? In short, it needs a strategy.

A national strategy needs to cut across a number of areas. There needs to be an industrial strategy, about how the country makes a living in the world, how it ensures the prosperity of its citizens and generates the funds needed to pay for its public services. An energy strategy is needed to navigate the wrenching economic transition that the 2050 Net Zero target implies. As our health and social care system buckles under the short-term aftermath of the pandemic, and faces the long-term challenge of an ageing population, a health and well-being strategy will be needed to define the technological and organisational innovation needed to yield an affordable and humane health and social care system. And, after the lull that followed the end of the cold war, a strategy to ensure national security in an increasingly threatening world must return to prominence.

These strategies need to reflect the real challenges that the UK faces, as outlined in the first part of this series. The goals of industrial strategy must be to restore productivity growth and to address the UK’s regional economic imbalances. Innovation and skills must be a central part of this, and given the condition large parts of the UK find themselves in, there need to be conscious efforts to rebuild innovation and manufacturing capacity in economically lagging regions. There needs to be a focus on increasing the volume of high value exports (both goods and services) that are competitive on world markets. The goal here should be to start to close the balance of payments gap, but in addition international competitive pressure will also bring productivity improvements.

An energy strategy needs to address both the supply and demand side to achieve a net zero system by 2050, and to guarantee security of supply. It needs to take a whole systems view at the outset, and to be discriminating in deciding which aspects of the necessary technologies can be developed in the UK, and which will be sourced externally. Again, the key will be specificity. For example, it is not enough to simply promote hydrogen as a solution to the net zero problem – it’s a question of specifying how it is made, what it is used for, and identifying which technological problems are the ones that the UK is in a good position to focus on and benefit from, whether that might be electrolysis, manufacture of synthetic aviation fuel, or whatever.

A health and well-being strategy needs to clarify the existing conceptual confusion about whether the purpose of a “Life Sciences Strategy” is to create high value products for export, or to improve the delivery of health and social care services to the citizens of the UK. Both are important, and in a well-thought through strategy each can support the other. But they are distinct purposes, and success in one does not necessarily translate to success in the other.

Finally, a security strategy should build on the welcome recognition of the 2021 Integrated Review that UK national security needs to be underpinned by science and technology. The traditional focus of security strategy is on hard power, and this year’s international events remind us that this remains important. But we have also learnt that the resilience of the material base of the economy can’t be taken for granted. We need a better understanding of the vulnerabilities of the supply chains for critical goods (including food and essential commodities).

The structure of government leads to a tendency for strategies in each of these areas to be developed independently of each other. But it’s important to understand the way these strategies interact with each other. We won’t have any industry if we don’t have reliable and affordable low carbon energy sources. Places can’t improve their economic performance if large fractions of their citizens can’t take part in the labour market due to long-term ill-health. Strategic investments in the defence industry can have much wider economic spillover benefits.

For this reason it is not enough for individual strategies to be left to individual government departments. Nor is our highly centralised, London-based government in a position to understand the specific needs and opportunities to be found in different parts of the country – there needs to be more involvement of devolved nation and city-region governments. The strategy needs to be truly national.

8.5. Being prepared for the unexpected

Not all science should be driven by a mission-driven strategy. It is important to maintain the health of the basic disciplines, because this provides resilience in the face of unwelcome surprises. In 2019, we didn’t realise how important it would be to have some epidemiologists to turn to. Continuing support for the core disciplines of physical, biological and medical science, engineering, social science and the humanities should remain a core mission of the research councils; the strength of our universities is something we should preserve and be proud of, and their role in training the researchers of the future will remain central.

Science and innovation policy also needs to be able to create the conditions that produce welcome surprises, and then exploit them. We do need to be able to experiment in funding mechanisms and in institutional forms. We need to support creative and driven individuals, and to recognise the new opportunities that new discoveries anywhere in the world might offer. We do need to be flexible in finding ways to translate new discoveries into implemented engineering solutions, into systems that work in the world. This spirit of experimentation could be at the heart of the new agency ARIA, while the rest of the system should be flexible enough to adapt and scale up any new ways of working that emerge from these experiments.

8.7 Building a national strategy that endures

A national strategy of the kind I called for above isn’t something that can be designed by the research community; it needs a much wider range of perspectives if, as is necessary, it’s going to be supported by a wide consensus across the political system and wider society. But innovation will play a key role in overcoming our difficulties, so there needs to be some structure to make sure insights from the R&D system are central to the formulation and execution of this strategy.

The new National Science and Technology Council, supported by the Office for Science and Technology Strategy, could play an important role here. Its position at the heart of government could give it the necessary weight to coordinate activities across all government departments. It would be a positive step if there was a cross-party commitment to keep this body at the heart of government; it was unfortunate that with the Prime Ministerial changes over the summer and autumn the body was downgraded and subsequently restored. To work effectively, its relationships with the Government Office for Science and the Council for Science and Technology need to be clarified.

UKRI should be able to act as an important two-way conduit between the research and development community and the National Science and Technology Council. It should be a powerful mechanism for conveying the latest insights and results from science and technology to inform the development of national strategy. In turn, its own priorities for the research it supports should be driven by that national strategy. To fulfil this function, UKRI will have to develop the strategic coherence that the Grant Review has found to be currently lacking.

The 2017 Industrial Strategy introduced the Industrial Strategy Council as an advisory body; this was abruptly wound up in 2021. There is a proposal to reconstitute the Industrial Strategy Council as a statutory body, with a similar status – official but independent of government – to the Office for Budget Responsibility or the Climate Change Committee. This would be a positive way of subjecting policy to a degree of independent scrutiny, holding the government of the day to account, and ensuring some of the continuity that has been lacking in recent years.

8.8 A science and innovation system for hard times

Internationally, the last few years have seen a jolting series of shocks to the optimism that had set in after the end of the Cold War. We’ve had a worldwide pandemic, there’s an ongoing war in Europe involving a nuclear-armed state, we’ve seen demonstrations of the fragility of global supply chains, while the effects of climate change are becoming ever more obvious.

The economic statistics show decreasing rates of productivity growth in all developed countries; there’s a sense of the worldwide innovation system beginning to stall. And yet one can’t fail to be excited by rapid progress in many areas of technology; in artificial intelligence, in the rapid development and deployment of mRNA vaccines, in the promise of new quantum technologies, to give just a few examples. The promise of new technology remains, yet the connection to the economic growth and rising living standards that we came to take for granted in the post-war period seems to be broken.

The UK demonstrates this contrast acutely. Despite some real strengths in its R&D system, its economic performance has fallen well behind that of key comparator nations. Shortcomings in its infrastructure and its healthcare system are all too obvious, while its energy security looks more precarious than it has for many years. There are profound disparities in regional economic performance, which hold back the whole country.

If there was ever a time when we could think of science as being an ornament to a prosperous society, those times have passed. Instead, we need to think of science and technology as the means by which our society becomes more prosperous and secure – and adapt our science and technology system so it is best able to achieve that goal.