Should Cambridge double in size?

The UK’s economic geography, outside London, is marked by small, prosperous cities in the south and east, and large, poor cities everywhere else. This leads to a dilemma for policy makers – should we try to make the small, successful cities bigger, or do the work needed to make our big cities more successful? The government’s emphasis seems to have swung back to expanding successful places in the South and East, with a particular focus on Cambridge.

Cambridge is undoubtedly a great success story for the UK, and potentially a huge national asset. Decades of investment by the state in research has resulted in an exemplary knowledge-based economy, where that investment in public R&D attracts in private sector R&D in even greater proportion. Cambridge has expanded recently, developing a substantial life science campus around the south of the city, moving engineering and physical sciences research to the West Cambridge site, and developing a cluster of digital businesses around the station. But its growth is constrained by poor infrastructure (water being a particular problem), aesthetic considerations in a historic city centre (which effectively rule out high rise buildings), and the political barriers posed by wealthy and influential communities who oppose growth.

We need an economic reality check too. How much economic difference would it make, on a national scale, if Cambridge did manage to double in size – and what are the alternatives? Here’s a very rough stab at some numbers.

The gross value added per person in Cambridge was £49,000 in 2018, well above the UK average of £29,000 [1]. In Greater Manchester, by contrast, GVA per person was about £25,000, well below the UK average. This illustrates the UK’s unusual and sub-optimal economic geography – in most countries, it’s the big cities that drive the economy. In the UK, by contrast, big second tier cities, like Manchester, Birmingham, Leeds and Glasgow, underperform economically and in effect drag the economy down.

Let’s do the thought experiment where we imagine Cambridge doubles its population, from 126,000 to 252,000, taking those people from Greater Manchester’s population of 2.8 million, and assuming that they are able to add the same average GVA per person to the Cambridge economy. Since the GVA per head in Cambridge is so much higher than in GM, this would raise national GVA by about £3 billion.

In the overall context of the UK’s economy, with a total GVA of £1,900 billion, £3 billion doesn’t make a material difference. The trouble with small cities is that they are small – so, no matter how economically successful they are, even doubling their size doesn’t make much of an impact at a national scale.

As an alternative to doubling the size of Cambridge, we could raise the productivity of Greater Manchester. To achieve a £3 billion increase in GM’s output, we’d need to raise the GVA per person by just over 4.2%, to a bit more than £26,000 – still below the UK average.
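For transparency, here is the back-of-the-envelope arithmetic behind those two comparisons as a short Python sketch. The inputs are the rounded 2018 figures quoted above, so the outputs differ slightly from the headline numbers in the text.

```python
# Rounded 2018 figures as quoted in the text; everything here is approximate.
gva_per_head_cambridge = 49_000   # £ per person
gva_per_head_gm = 25_000          # £ per person
pop_cambridge = 126_000
pop_gm = 2_800_000

# Thought experiment 1: double Cambridge by moving people from Greater Manchester.
extra_gva = pop_cambridge * (gva_per_head_cambridge - gva_per_head_gm)
print(f"Extra national GVA: £{extra_gva / 1e9:.1f} bn")      # ~£3.0 bn

# Thought experiment 2: the GM productivity uplift that delivers the same ~£3 bn.
uplift = 3e9 / (pop_gm * gva_per_head_gm)
print(f"Required rise in GM GVA per head: {uplift:.1%}")     # ~4.3% on these rounded inputs
```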

That’s the importance of trying to raise the productivity of big cities – they are big. Relatively marginal improvements in productivity in Greater Manchester, Leeds, Birmingham and the West Midlands, Sheffield, Glasgow and Cardiff could cumulatively start to make a material difference to the economy on a national scale. And we know where those improvements need to be made – for example in better public transport, more R&D and support for innovative businesses, provision of the skills that innovative businesses need, and action on poor housing and public health.

I do think Cambridge should be encouraged and supported to expand, to accommodate the private sector businesses that want to take advantage of the public investment in R&D that’s happened there, and to give the people those businesses need somewhere affordable to live.

But, as Tom Forth and I have argued in detail elsewhere, we need more centres of R&D and innovation outside the Greater Southeast, particularly in those places where the private sector already makes big investments in R&D that aren’t supported by the public sector. The government has already made a commitment, in the Levelling Up White Paper, to increase public investment in R&D outside the Greater Southeast by a third by 2025. That commitment needs to be delivered, and built on by the next government.

Finally, we should ask ourselves whether we are fully exploiting the great assets that have been built in Cambridge, not just to support the economy of a small city in East Anglia, but to drive the economy of the whole nation. How could we make sure that if a Cambridge semiconductor spin-out is expanding, it builds its factory in Newport, Gwent, rather than Saxony or Hsinchu? How can we use the huge wealth of experience in the Cambridge venture capital community to support nascent VC sectors in places like Leeds? How could we make sure a Cambridge biotech spin-out does its clinical trials in Greater Manchester [2], and then manufactures its medicine in Cheshire or on Merseyside?

Two things are needed to make this happen. Firstly, we need place-based industrial strategies to build the innovation, skills and manufacturing capacity in relevant sectors in other parts of the UK, so these places have the absorptive capacity to make the most of innovations emerging from Cambridge. Then, we need to build institutional links between the key organisations in Cambridge and those in other emerging regional centres. In this way, we could take full advantage of Cambridge’s position as a unique national asset.

[1]. Data here is taken from the ONS’s Regional Gross Value Added (balanced) dataset and mid-year population estimates, in both cases using 2018 data. The GVA data for local authority areas is on a workplace basis, but the populations are for residents. This probably flatters the productivity number for Cambridge, as it doesn’t take account of people who live in neighbouring areas and commute into the city.

At the other extreme, one could ask what would happen if you doubled the population of the whole county of Cambridgeshire (about 650,000). As the GVA per head at the county level is £31.5k, quite a lot less than the figure for Cambridge city, this makes surprisingly little difference to the overall result – it would increase GVA by £3.15 bn, the same as a 4.2% increase in GM’s productivity.

Of course, this poses another question – why doesn’t the prosperity of Cambridge city spill over very far into the rest of the county? Anyone who regularly uses the train from Cambridge via Ely and March to Peterborough might have a theory about that.

[2]. The recent government report on commercial clinical trials in the UK, by Lord O’Shaughnessy, highlighted a drop in patients enrolled in commercial clinical trials in the UK of 36% over the last six years. This national trend has been bucked in Greater Manchester, where there has been an increase of 19% in patient recruitment, driven by effective partnership between the NIHR Greater Manchester Clinical Research Network, the GM devolved health and social care system, industry and academia.

When did the UK’s productivity slowdown begin?

The UK is now well into a second decade of sluggish productivity growth, with far-reaching consequences for people’s standard of living, for the sustainability of public services, and (arguably) for the wider political environment. It has become usual to date the beginning of this new period of slow productivity growth to the global financial crisis around 2008, but I increasingly suspect that the roots of the malaise were already in place earlier in the 2000s.


UK Labour productivity. Data: ONS, Output per hour worked, chained volume measure, 7 July 2023 release. Fit: non-linear least squares fit to two exponential growth functions, continuous at break point. Best fit break point is 2004.9.

My plot shows the latest release of whole-economy quarterly productivity data from the ONS. I have fitted the data to a function representing two periods of exponential growth, with different time constants, constrained to be continuous at a time of break. There are four fitting parameters in this function – the two time constants, the level at the break point, and the time of break. My best fit shows a break point at 2004.9.
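For anyone wanting to reproduce this, here is a minimal sketch of the model and fitting step using scipy. It assumes arrays t (decimal years) and y (the ONS output per hour series) have already been loaded; the parameter names and starting values are illustrative, not the ones used for the fit reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(t, level, g1, g2, t_break):
    """Two periods of exponential growth, with rates g1 (before) and g2 (after),
    constrained to be continuous at t_break, where productivity equals 'level'."""
    dt = np.asarray(t) - t_break
    return np.where(dt < 0, level * np.exp(g1 * dt), level * np.exp(g2 * dt))

# Assuming t (decimal years) and y (output per hour index) are already loaded:
# popt, pcov = curve_fit(two_exponentials, t, y, p0=[100.0, 0.025, 0.005, 2008.0])
# level, g1, g2, t_break = popt   # the best-fit break point quoted above is ~2004.9
```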


Residuals for the fit to the quarterly productivity data shown above.

The plot of the residuals of the fit is shown above. This shows that the goodness of fit is comparable across the whole time range (with the exception of the spikes representing the effect of the pandemic). There are deviations from the fit corresponding to the effect of booms and recessions, but the deviations around the time of the global financial crisis are comparable with those in earlier boom/bust cycles.

How sensitive is the fit to the timing of the break point? I’ve redone the fits constraining the year of the break point, and calculated at each point the normalised chi-squares (i.e. the sum of the squared differences between data and model, divided by the number of data points). This is shown below.


Normalised chi-squared – i.e. the sum of the squares of the differences between the productivity data and the two-exponential model – for fits where the time of break is constrained.

The goodness of fit varies smoothly around an optimum value of the time of break near 2005. A time of break at 2008 produces a materially worse quality of fit.
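The constrained fits behind that curve can be sketched in the same way, reusing the two_exponentials function from the snippet above; the grid of candidate break years is illustrative.

```python
def normalised_chi2(t, y, t_break):
    """Refit the three remaining parameters with the break year fixed, and return
    the sum of squared residuals divided by the number of data points."""
    model = lambda tt, level, g1, g2: two_exponentials(tt, level, g1, g2, t_break)
    popt, _ = curve_fit(model, t, y, p0=[100.0, 0.025, 0.005])
    residuals = y - model(t, *popt)
    return np.sum(residuals**2) / len(y)

# break_years = np.arange(1998.0, 2012.0, 0.25)
# chi2_profile = [normalised_chi2(t, y, tb) for tb in break_years]
```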

Can we quantify this further and attach a probability distribution to the year of break? I don’t think so, using this approach – we have no reason to suppose that the deviations between model and data are drawn from a Gaussian, which is the assumption underlying traditional approaches to ascribing confidence limits to the fitting parameters. I believe there are Bayesian approaches to addressing this problem, and I will look into those in further work.

But for now, this leaves us with a hypothesis that the character of the UK economy, and the global context in which it operated, had already made the transition to a low productivity growth state by the mid-2000s. In this view, the financial crisis was a symptom, not a cause, of the productivity slowdown.

What the UK should – and should not – do about semiconductors

What should be in the long-delayed UK Semiconductor Strategy? My previous series of three blogposts set out the global context, the UK’s position in the global semiconductor world, some thoughts on the future directions of the industry, and some of the options open to the UK. Here, in summary, is a list of actions I think the UK should – and should not – take.

1. The UK should… (& there’s no excuse not to)

The UK government has committed to spending £700m on an exascale computer. It should specify that the processor design comes from a UK design house. After decades of talking about using government procurement to drive innovation, the UK government should give it a try.

Why?
The UK has real competitive strength in processor design, and this sub-sector will become more and more important. AI demands exponentially more computing power, but the end of Moore’s law limits the extra computing power that hardware improvements can supply, so design optimisation for applications like AI becomes more important than ever.

2. The UK should… (though it probably won’t, as it would be expensive, difficult, & ideologically uncomfortable)

The UK government should buy ARM outright from its current owner, SoftBank, and float it on the London Stock Exchange, while retaining a golden share to prevent a subsequent takeover by an overseas company.

Why?
ARM is the only UK-based company with internationally significant scale & reach into the global semiconductor ecosystem. It’s the sole anchor company for the UK semiconductor industry. Ownership & control matter; ARM’s current overseas ownership makes it vulnerable to takeover & expatriation.

Why not?
It would cost >£50 bn upfront. Most of this money would be recovered in a subsequent sale, and the government might even make a profit, but some money would be at risk. It’s worth comparing this with the precedent of the post-GFC bank nationalisations, which were of a similar scale.

3. The UK should not… (& it almost certainly couldn’t in any case)

The UK should not attempt to create a UK based manufacturing capability in leading edge logic chips. This would need to be done by one of the 3 international companies with the necessary technical expertise – TSMC, Intel or Samsung.

Why not?
A single leading edge fab costs tens of billions of pounds. The UK market isn’t anywhere near big enough to be attractive by itself, and the UK isn’t in a position to compete with the USA & Europe in a multi-billion dollar subsidy race.

Moreover, decades of neglect of semiconductor manufacturing probably means the UK doesn’t, in any case, have the skills to operate a leading edge fab.

4. The UK should not…

The UK should not attempt to create UK based manufacturing capability in legacy logic chips, which are still crucial for industrial, automotive & defence applications. The lesser technical demands of these older technologies mean this would be more feasible than manufacturing leading edge chips.

Why not?
Manufacturing legacy chips is very capital intensive, and new entrants have to compete, in a brutally cyclical world market, with existing plants whose capital costs have already been depreciated. Instead, the UK needs to work with like-minded countries (especially in Europe) to develop secure supply chains.

5. Warrants another look

The UK could secure a position in some niche areas (e.g. compound semiconductors for power electronics, photonics and optoelectronics, printable electronics). Targeted support for R&D, innovation & skills, & seed & scale-up finance could yield regionally significant economic benefits.

6. How did we end up here, and what lessons should we learn?

The UK’s limited options in this strategically important technology should make us reflect on the decisions – implicit and explicit – that led the UK to be in such a weak position.

Korea & Taiwan – with less ideological aversion to industrial strategy than the UK – rode the wave of the world’s fastest developing technology while the UK sat on the sidelines. Their economic performance has now surpassed the UK’s.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.

The UK can’t afford to make the same mistakes with future technology waves. We need a properly resourced industrial strategy, applied consistently over decades, growing & supporting UK owned, controlled & domiciled innovation-intensive firms at scale.

What should the UK do about semiconductors? (PDF version)

In anticipation of the UK government’s promised semiconductor strategy, my last three posts have summarised the global state of the industry, the UK’s position in that industry, and suggested what, realistically, the UK’s options are for a semiconductor strategy.

Here are links to all three parts, and for convenience a PDF version of the whole piece.

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry
Part 3: towards a UK Semiconductor Strategy.

PDF version (1 MB):
What should the UK do about semiconductors?

What should the UK do about semiconductors? Part 3: towards a UK Semiconductor Strategy

We are currently waiting for the UK government to publish its semiconductor strategy. As context for such a strategy, my previous two blogposts have summarised the global state of the industry:

Part 1: the UK’s place in the semiconductor world
Part 2: the past and future of the global semiconductor industry

Here I consider what a realistic and useful UK semiconductor strategy might include.

To summarise the global context, the essential nations in advanced semiconductor manufacturing are Taiwan, Korea and the USA for making the chips themselves. In addition, Japan and the Netherlands are vital for crucial elements of the supply chain, particularly the equipment needed to make chips. China has been devoting significant resource to develop its own semiconductor industry – as a result, it is strong in all but the most advanced technologies for chip manufacture, but is vulnerable to being cut off from crucial elements of the supply chain.

The technology of chip manufacture is approaching maturity; the very rapid rates of increase in computing power we saw in the 1980s and 1990s, associated with a combination of Moore’s law and Dennard scaling, have significantly slowed. At the technology frontier we are seeing diminishing returns from the ever larger investments in capital and R&D that are needed to maintain advances. Further improvements in computer performance are likely to put more premium on custom designs for chips optimised for specific applications.

The UK’s position in semiconductor manufacturing is marginal in a global perspective, and not a relative strength in the context of the overall UK economy. There is actually a slightly stronger position in the wider supply chain than in chip manufacture itself, but the most significant strength is not in manufacture, but design, with ARM having a globally significant position and newcomers like Graphcore showing promise.

The history of the global semiconductor industry is a history of major government interventions coupled with very large private sector R&D spending, the latter driven by dramatically increasing sales. The UK essentially opted out of the race in the 1980s, since when Korea and Taiwan have established globally leading positions, and China has become a fast expanding new entrant to the industry.

The more difficult geopolitical environment has led to a return of industrial strategy on a huge scale, led by the USA’s CHIPS Act, which appropriates more than $50 billion over 5 years to re-establish its global leadership, including $39 billion in direct subsidies for manufacturing.

How should the UK respond? What I’m talking about here is the core business of manufacturing semiconductor devices and the surrounding supply chain, rather than information and communication technology more widely. First, though, let’s be clear about what the goals of a UK semiconductor strategy could be.

What is a semiconductor strategy for?

A national strategy for semiconductors could have multiple goals. The UK Science and Technology Framework identifies semiconductors as one of five critical technologies, judged against criteria including their foundational character and market potential, as well as their importance for other national priorities, including national security.

It might be helpful to distinguish two slightly different goals for the semiconductor strategy. The first is the question of security, in the broadest sense, prompted by the supply problems that emerged in the pandemic, and heightened by the growing realisation of the importance and vulnerability of Taiwan in the global semiconductor industry. Here the questions to ask are, what industries are at risk from further disruptions? What are the national security issues that would arise from interruptions in supply?

The government’s latest refresh of its integrated foreign and defence strategy promises to “ensure the UK has a clear route to assured access for each [critical technology], a strong voice in influencing their development and use internationally, a managed approach to supply chain risks, and a plan to protect our advantage as we build it.” It reasserts the “own, collaborate, access” framework, a model introduced in the previous Integrated Review.

This framework is a welcome recognition of the fact that the UK is a medium-sized country which can’t do everything; in order to have access to the technology it needs, it must in some cases collaborate with friendly nations, and in others access technology through open global markets. But it’s worth asking what exactly is meant by “own”. This is defined in the Integrated Review thus: “Own: where the UK has leadership and ownership of new developments, from discovery to large-scale manufacture and commercialisation.”

In what sense does the nation ever own a technology? There are still a few cases where wholly state owned organisations retain both a practical and legal monopoly on a particular technology – nuclear weapons remain the most obvious example. But technologies are largely controlled by private sector companies with a complex, and often global, ownership structure. We might think that the technologies of semiconductor integrated circuit design that ARM developed are British, because the company is based in Cambridge. But it’s owned by a Japanese investment conglomerate, which has a great deal of latitude in what it does with it.

Perhaps it is more helpful to talk about control than ownership. The UK state retains a certain amount of control of technologies owned by companies with a substantial UK presence – it has been able in effect to block the purchase of the Newport Wafer Fab by the Chinese-owned company Nexperia. But this new assertiveness is a very recent phenomenon; until very recently UK governments were entirely relaxed about the acquisition of technology companies by overseas buyers. Indeed, in 2016 ARM’s acquisition by SoftBank was welcomed by the then PM, Theresa May, as being in the UK’s national interest, and a vote of confidence in post-Brexit Britain. The government has taken new powers to block acquisitions of companies through the National Security and Investment Act 2021, but this can only be done on grounds of national security.

The second goal of a semiconductor strategy is as part of an effort to overcome the UK’s persistent stagnation of economic productivity, to “generate innovation-led economic growth”, in the words of a recent Government response to a BEIS Select Committee report. As I have written about at length, the UK’s productivity problem is serious and persistent, so there’s certainly a need to identify and support high value sectors with the potential for growth. There is a regional dimension here, recognised in the government’s aspiration for the strategy to create “high paying jobs throughout the UK”. So it would be entirely appropriate for a strategy to support the existing cluster in the Southwest around Bristol and into South Wales, as well as to create new clusters where there are strengths in related industry sectors.

The economies of Taiwan and Korea have been transformed by their very effective deployment of an active industrial strategy to take advantage of an industry at a time of rapid technological progress and expanding markets. There are two questions for the UK now. Has the UK state (and the wider economic consensus in the country) overcome its ideological aversion to active industrial strategy on the East Asian model to intervene at the necessary scale? And, would such an intervention be timely, given where semiconductors are in the technology cycle? Or, to put it more provocatively, has the UK left it too late to capture a significant share of a technology that is approaching maturity?

What, realistically, can the UK do about semiconductors?

What interventions are possible for the UK government in devising a semiconductor strategy that addresses these two goals – of increasing the UK’s economic and military security by reducing its vulnerability to shocks in the global semiconductor supply chain, and of improving the UK’s economic performance by driving innovation-led economic growth? There is a menu of options, and what the government chooses will depend on its appetite for spending money, its willingness to take assets onto its balance sheet, and how much it is prepared to intervene in the market.

Could the UK establish the manufacturing of leading edge silicon chips? This seems implausible. This is the most sophisticated manufacturing process in the world, enormously capital intensive and drawing on a huge amount of proprietary and tacit knowledge. The only way it could happen is if one of the three companies currently at or close to the technology frontier – Samsung, Intel or TSMC – could be enticed to establish a manufacturing plant in the UK. What would be in it for them? The UK doesn’t have a big market, and it has a labour market that is high cost yet lacking in the necessary skills, so its only chance would be to offer large direct subsidies.

In any case, the attention of these companies is elsewhere. TSMC is building a new plant in Arizona, at a cost of $40 billion, while Samsung’s new plant in Texas is costing $25 billion, with the US government using some of the CHIPS Act money to subsidise these investments. Despite Intel’s well-reported difficulties, it is planning significant investment in Europe, supported by inducements from the EU and its member states under the EU Chips Act. Intel has committed €12 billion to expanding its operations in Ireland and €17 billion for a new fab at Magdeburg in Saxony-Anhalt, Germany.

From the point of view of security of supply, it’s not just chips from the leading edge that are important; for many applications, in automobiles, defence and industrial machinery, legacy chips produced by processes that are no longer at the leading edge are sufficient. In principle establishing manufacturing facilities for such legacy chips would be less challenging than attempting to establish manufacturing at the leading edge. However, here, the economics of establishing new manufacturing facilities is very difficult. The cost of producing chips is dominated by the need to amortise the very large capital cost of setting up a fab, but a new plant would be in competition with long-established plants whose capital cost is already fully depreciated. These legacy chips are a commodity product.

So in practice, our security of supply can only be assured by reliance on friendly countries. It would have been helpful if the UK had been able to participate in the development of a European strategy to secure semiconductor supply chains, as Hermann Hauser has argued for. But what does the UK have to contribute to the creation of more resilient supply chains, localised within networks of reliably friendly countries?

The UK’s key asset is its position in chip design, with ARM as the anchor firm. But, as a firm based on intellectual property rather than the big capital investments of fabs and factories, ARM is potentially footloose, and as we’ve seen, it isn’t British by ownership. Rather it is owned and controlled by a Japanese conglomerate, which needs to sell it to raise money, and will seek to achieve the highest return from such a sale. After the proposed sale to Nvidia was blocked, the likely outcome now is a flotation on the US stock market, where the typical valuations of tech companies are higher than they are in the UK.

The UK state could seek to maintain control over ARM by the device of a “Golden Share”, as it currently does with Rolls-Royce and BAE Systems. I’m not sure what the mechanism for this would be – I would imagine that the only surefire way of doing it would be for the UK government to buy ARM outright from SoftBank in an agreed sale, and then subsequently float it itself with the golden share in place. I don’t suppose this would be cheap – the agreed price for the thwarted Nvidia takeover was $66 billion. The UK government would then attempt to recoup as much of the purchase price as possible through a subsequent flotation, but the presence of the golden share would presumably reduce the market value of the remaining shares. Still, the UK government did spend £46 billion nationalising a bank.

What other levers does the UK have to consolidate its position in chip design? Intelligent use of government purchasing power is often cited as an ingredient of a successful industrial policy, and here there is an opportunity. The government made the welcome announcement in the Spring Budget that it would commit £900m to build an exascale computer, to create a sovereign capability in artificial intelligence. The procurement process for this facility should be designed to drive innovation in the design, by UK companies, of specialised processing units for AI with lower energy consumption.

A strong public R&D base is a necessary – but not sufficient – condition for an effective industrial strategy in any R&D intensive industry. As a matter of policy, the UK ran down its public sector research effort in mainstream silicon microelectronics, in response to the UK’s overall weak position in the industry. The Engineering and Physical Sciences Research Council states on its website that: “In 2011, EPSRC decided not to support research aimed at miniaturisation of CMOS devices through gate-length reduction, as large non-UK industrial investment in this field meant such research would have been unlikely to have had significant national impact.” I don’t think this was – or is – an unreasonable policy given the realities of the UK’s global position. The UK maintains academic research strength in areas such as III-V semiconductors for optoelectronics, 2D materials such as graphene, and organic semiconductors, to give a few examples.

Given the sophistication of state of the art microelectronic manufacturing technology, for R&D to be relevant and translatable into commercial products it is important that open access facilities are available to allow the prototyping of research devices, and with pilot scale equipment to demonstrate manufacturability and facilitate scale-up. The UK doesn’t have research centres on the scale of Belgium’s IMEC, or Taiwan’s ITRI, and the issue is whether, given the shallowness of the UK’s industry base, there would be a customer base for such a facility. There are a number of university facilities focused on supporting academic researchers in various specialisms – at Glasgow, Manchester, Sheffield and Cambridge, to give some examples. Two centres are associated with the Catapult Network – The National Printable Electronics Centre in Sedgefield, and the Compound Semiconductor Catapult in South Wales.

This existing infrastructure is certainly insufficient to support an ambition to expand the UK’s semiconductor sector. But a decision to enhance this research infrastructure will need a careful and realistic evaluation of what niches the UK could realistically hope to build some presence in, building on areas of existing UK strength, and understanding the scale of investment elsewhere in the world.

To summarise, the UK must recognise that, in semiconductors, it is currently in a relatively weak position. For security of supply, the focus must be on staying close to like-minded countries like our European neighbours. For the UK to develop its own semiconductor industry further, the emphasis must be on finding and developing particular niches where the UK does have some existing strength to build on, and where there is the prospect of rapidly growing markets. And the UK should look after its one genuine area of strength, in chip design.

Four lessons for industrial strategy

What should the UK do about semiconductors? One tempting, but unhelpful, answer is “I wouldn’t start from here”. The UK’s current position reflects past choices, so to conclude, perhaps it’s worth drawing some more general lessons about industrial strategy from the history of semiconductors in the UK, and globally.

1. Basic research is not enough

The historian David Edgerton has observed that it is a long-running habit of the UK state to use research policy as a substitute for industrial strategy. Basic research is relatively cheap, compared to the expensive and time-consuming process of developing and implementing new products and processes. In the 1980s, it became conventional wisdom that governments should not get involved in applied research and development, which should be left to private industry, and, as I recently discussed at length, this has profoundly shaped the UK’s research and development landscape. But excellence in basic research has not produced a competitive semiconductor industry.

The last significant act of government support for the semiconductor industry in the UK was the Alvey programme of the 1980s. The programme was not without some technical successes, but it clearly failed in its strategic goal of keeping the UK semiconductor industry globally competitive. As the official evaluation of the programme concluded in 1991 [1]: “Support for pre-competitive R&D is a necessary but insufficient means for enhancing the competitive performance of the IT industry. The programme was not funded or equipped to deal with the different phases of the innovation process capable of being addressed by government technology policies. If enhanced competitiveness is the goal, either the funding or scope of action should be commensurate, or expectations should be lowered accordingly”.

But the right R&D institutions can be useful; the experience of both Japan and the USA shows the value of industry consortia – but this only works if there is already a strong, R&D intensive industry base. The creation of TSMC shows that it is possible to create a global giant from scratch, and this emphasises the role of translational research centres, like Taiwan’s ITRI and Belgium’s IMEC. But to be effective in creating new businesses, such centres need to have a focus on process improvement and manufacturing, as well as discovery science.

2. Big is beautiful in deep tech

The modern semiconductor industry is the epitome of “Deep Tech”: hard innovation, usually in the material or biological domains, demanding long term R&D efforts and large capital investments. For all the romance of garage-based start-ups, in a business that demands up-front capital investments in the tens of billions of dollars and annual research budgets on the scale of medium-sized nation states, one needs serious, large scale organisations to succeed.

The ownership and control of these organisations does matter. From a national point of view, it is important to have large firms anchored to the territory, whether by ownership or by significant capital investment that would be hard to undo, so ensuring the permanence of such firms is the legitimate business of government. Naturally, big firms often start as fast growing small ones, and the UK should make more effort to hang on to companies as they scale up.

3. Getting the timing right in the technology cycle

Technological progress is uneven – at any given time, one industry may be undergoing very dramatic technological change, while other sectors are relatively stagnant. There may be a moment when the state of technology promises a period of rapid development, and there is a matching market with the potential for fast growth. Firms that have the capacity to invest and exploit such “windows of opportunity”, to use David Sainsbury’s phrase, will be able to generate and capture a high and rising level of added value.

The timing of interventions to support such firms is crucial, and undoubtedly not easy, but history shows us that nations that are able to offer significant levels of strategic support at the right stage can see a material impact on their economic performance. The recent rapid economic growth of Korea and Taiwan is a case in point. These countries have gone beyond catch-up economic growth, to equal or surpass the UK, reflecting their reaching the technological frontier in high value sectors such as semiconductors. Of course, in these countries, there has been a much closer entanglement between the state and firms than UK policy makers are comfortable with.

Real GDP per capita at purchasing power parity for Taiwan, Korea and the UK. Based on data from the IMF. GDP at PPP in international dollars was taken for the base year of 2019, and a time series constructed using IMF real GDP growth data, & then expressed per capita.

4. If you don’t choose sectors, sectors will choose you

In the UK, so-called “vertical” industrial strategy, where explicit choices are made to support specific sectors, has long been out of favour. Making choices between sectors is difficult, and being perceived to have made the wrong choices damages the reputation of individuals and institutions. But even in the absence of an explicitly articulated vertical industrial strategy, policy choices will have the effect of favouring one sector over another.

In the 1990s and 2000s, the UK chose oil and gas and financial services over semiconductors, or indeed advanced manufacturing more generally. Our current economic situation reflects, in part, that choice.

[1] Evaluation of the Alvey Programme for Advanced Information Technology. Ken Guy, Luke Georghiou, et al. HMSO for DTI and SERC (1991)

What should the UK do about semiconductors? Part 2: the past and future of the global semiconductor industry

This is the second post in a series of three, considering the background to the forthcoming UK Government Semiconductor Strategy.

In the first part, The UK’s place in the semiconductor world, I discussed the new global environment, in which a tenser geopolitical situation has revived a policy climate around the world that is much more favourable to large scale government interventions in the industry. I also sketched the global state of the semiconductor industry and tried to quantify the UK’s position in the semiconductor world.

Here, I discuss the past and future of semiconductors, mentioning some of the important past interventions by governments around the world that have shaped the current situation, and I speculate on where the industry might be going in the future.

Finally, in the third part, I’ll ask where this leaves the UK, and speculate on what its semiconductor strategy might seek to achieve.

Active industrial policy in the history of semiconductors

The history of the global semiconductor industry involves a dance between governments around the world and private companies. In contrast to the conviction of the predominantly libertarian ideology of Silicon Valley, the industry wouldn’t have come into existence and developed in the form we now know without a series of major, and expensive, interventions by governments across the world.

But, to caricature the claims of some on the left, there is an idea that it was governments that created the consumer electronic products we all rely on, and that private industry has simply collected the profits. This view doesn’t recognise the massive efforts private industry has made, spending huge sums on the research and development needed to perfect manufacturing processes and bring them to market. Taking the USA alone, in 2022 the US government spent $6 billion on semiconductor R&D, compared to private industry’s $50.2 billion.

The semiconductor industry emerged in the 1960s in the USA, and in its early days more than half of its sales were to the US government. This was an early example of what we would now call “mission driven” innovation, motivated by a “moonshot project”. That project was driven by a very concrete goal – to be able to drop a half-tonne payload anywhere on the earth’s surface, with a precision measured in hundreds of metres.

Semiconductors were vital to achieve this goal – the first mass-produced computers based on integrated circuits were developed as the guidance systems of Minuteman intercontinental ballistic missiles. Of course, despite its military driving force, this “moonshot” produced important spin-offs – the development of space travel to the point at which a series of manned missions to the moon were possible, and increasing civilian applications of the much cheaper, more powerful and more reliable computers that solid-state electronics made possible.

The USA is where the semiconductor industry started, but semiconductors went on to play a central role in three East Asian development miracles. The first to exploit this new technology was Japan. While the USA was exploiting the military possibilities of semiconductors, Japan focused on their application in consumer goods.

By the early 1980s, though, Japanese companies were producing memory chips more efficiently than the USA, while Nikon took a leading position in the photolithography equipment used to make integrated circuits. In part the Japanese competitive advantage was driven by their companies’ manufacturing prowess and their attentiveness to customer needs, but the US industry complained, not entirely without justification, that their success was built on the theft of intellectual property, access to unfairly cheap capital, the protection of home markets by trade barriers, and government funded research consortia bringing together leading companies. These are recurring ingredients of industrial policy as executed by East Asian developmental states – subsequently deployed successfully in Taiwan and Korea, and now being applied on a continental scale by China.

An increasingly paranoid USA responded to this Japanese threat to its technological supremacy in semiconductors by adopting some industrial strategy measures of its own. The USA relaxed its stringent anti-trust laws to allow US companies to collaborate in R&D through a consortium called SEMATECH, half funded by the federal government. SEMATECH was founded in 1987, and in the first 5 years of its operation was supported by $500m of Federal funding, leading to some new self-confidence for the US semiconductor industry.

Meanwhile both Korea and Taiwan had identified electronics as a key sector through which to pursue their export-focused development strategies. For Taiwan, a crucial institution was the Industrial Technology Research Institute, in Hsinchu. Since its foundation in 1973, ITRI had been instrumental in supporting Taiwan’s industrial base in moving closer to the technology frontier.

In 1985 the US-based semiconductor executive Morris Chang was persuaded to lead ITRI, using this position to create a national semiconductor industry, in the process spinning out the Taiwan Semiconductor Manufacturing Company. TSMC was founded as a pure-play foundry, contract manufacturing integrated circuits designed by others and focusing on optimising manufacturing processes. This approach has been enormously successful, and has led TSMC to its globally leading position.

Over the last decade, China has been aggressively promoting its own semiconductor industry. The 2015 “Made in China 2025” plan identified semiconductors as key to the development of a high tech manufacturing sector, setting the target of 70% self-sufficiency by 2025, and a dominant position in global markets by 2045.

Cheap capital for developing semiconductor manufacturing was provided through the state-backed National Integrated Circuit Industry Investment Fund, amounting to some $47 bn (though it seems the record of this fund has been marred by corruption allegations). The 2020 directive “Several Policies for Promoting the High-quality Development of the Integrated Circuit Industry and Software Industry in the New Era” reinforced these goals with a package of measures including tax breaks, soft loans, R&D and skills policies.

While the development of the semiconductor industry in Taiwan and Korea was generally welcomed by policy-makers in the West, a changing geopolitical climate has led to much more anxiety about China’s aspirations. The USA has responded with an aggressive programme of bans on the export of semiconductor manufacturing tools, such as high end lithography equipment, to China, and has persuaded its allies Japan and the Netherlands to follow suit.

Industrial policy in support of the semiconductor industry hasn’t been restricted to East Asia. In Europe a key element of support has been the development of research institutes bringing together consortia of industry and academia; perhaps the most notable of these is IMEC in Belgium, while the cluster of companies that formed around the electronics company Philips in Eindhoven now includes ASML, the dominant player in equipment for extreme UV lithography.

In Ireland, policies in support of inward investment, including both direct and indirect financial inducements, and the development of institutions to support skills and innovation, persuaded Intel to base its European operations there. This has resulted in this small, formerly rural, nation becoming the second largest exporter of integrated circuits in Europe.

In the UK, government support for the semiconductor industry has gone through three stages. In the postwar period, the electronics industry was a central part of the UK’s Cold War “Warfare State”, with government institutions like the Royal Signals and Radar Establishment at Malvern carrying out significant early research in compound semiconductors and optoelectronics.

The second stage saw a more conscious effort to support the industry. In the mid-to-late 1970s, a realisation of the potential importance of integrated circuits coincided with a more interventionist Labour government. The government, through the National Enterprise Board, took a stake in Inmos, a start-up making integrated circuits in South Wales. The 1979 Conservative government was much less interventionist than its predecessor, but two important interventions were made in the early 1980s.

The first was the Alvey Programme, an ambitious joint government/industry research programme launched in 1983, worth £350m and covering a number of areas in information and communication technology. The results of this programme were mixed; it played a significant role in the development of mobile telephony, and laid some important foundations for the development of AI and machine learning. In semiconductors, however, the companies it supported, such as GEC and Plessey, were unable to develop a lasting competitive position in semiconductor manufacturing and no longer survive.

The second intervention arose from a public education campaign run by the BBC; a small Cambridge-based microcomputer company, Acorn, won the contract to supply BBC-branded personal computers in support of this programme. The large market created in this way later gave Acorn the headroom to move into the workstation market with reduced instruction set computing architectures, from which the microprocessor design house ARM was spun out.

In the third stage, the UK government adopted a market fundamentalist position. This involved a withdrawal of government support for applied research and the run-down of government laboratories like RSRE, and a position of studied indifference about the acquisition of UK technology firms by overseas rivals. Major UK electronics companies, such as GEC and Plessey, collapsed following some ill-judged corporate misadventures. Inmos was sold, first to Thorn, then to the Franco-Italian group SGS-Thomson. Inmos left a positive legacy, with many who had worked there going on to participate in a Bristol-based cluster of semiconductor design houses. The Inmos manufacturing site survives as Newport Wafer Fab, currently owned by the Dutch-based, Chinese-owned company Nexperia, though its future is uncertain following a UK government ruling that Nexperia should divest its shareholding on national security grounds.

This focus on the role of interventions by governments across the world at crucial moments in the development of the industry shouldn’t overshadow the huge investments in R&D made by private companies around the world. A sense of the scale of these investments is given by the figure below.

R&D expenditure in the microelectronics industry, showing Intel’s R&D expenditure, and a broader estimate of world microelectronics R&D including semiconductor companies and equipment manufacturers. Data from the “Are Ideas Getting Harder to Find?” dataset on Chad Jones’s website. Inflation corrected using the US GDP deflator.

The exponential increase in R&D spending up to 2000 was driven by a similarly exponential increase in worldwide semiconductor sales. In this period, there was a remarkable virtuous circle of increasing sales, leading to increasing R&D, leading in turn to very rapid technological developments, driving further sales growth. In the last two decades, however, growth in both sales and R&D spending has slowed down.


Global semiconductor sales in billions of dollars. Plot from “Quantum Computing: Progress and Prospects” (2019), National Academies Press, which uses data from the Semiconductor Industry Association.

Possible futures for the semiconductor industry

The rate of technological progress in integrated circuits between 1984 and 2003 was remarkable and unprecedented in the history of technology. This drove an exponential increase in microprocessor computing power, which grew by more than 50% a year. This growth arose from two factors. As is well known, the number of transistors on a silicon chip grew exponentially, as predicted by Moore’s Law. This was driven by many unsung, but individually remarkable, technological innovations in lithography (to name just a couple of examples, phase-shift lithography and chemically amplified resists), allowing smaller and smaller features to be manufactured.

The second factor is less well known: through a phenomenon known as Dennard scaling, transistors operate faster as they get smaller. Dennard scaling reached its limit around 2004, as the heat generated by microprocessors became a limiting factor. After 2004, microprocessor computing power increased at a slower rate, driven by increasing the number of cores and parallelising operations, resulting in rates of increase of around 23% a year. This approach itself ran into diminishing returns after 2011.
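To see what that slowdown means in cumulative terms, here is a trivial compound-growth comparison; the growth rates are those quoted above, and the ten-year horizon is chosen purely for illustration.

```python
# Cumulative effect of the two growth rates quoted above, over a decade.
years = 10
print(f"At ~50% a year: {1.50 ** years:.0f}x over {years} years")   # roughly 58x
print(f"At ~23% a year: {1.23 ** years:.0f}x over {years} years")   # roughly 8x
```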

Currently we are seeing continued reductions in feature sizes, together with new transistor designs, such as finFETs, which in effect allow more transistors to be fitted into a given area by building them side-on. But further increases in computer power are increasingly being driven by optimising processor architectures for specific tasks, for example graphical processing units and specialised chips for AI, and by simply multiplying the number of microprocessors in the server farms that underlie cloud computing.

Slowing growth in computer power. The growth in processor performance since 1988. Data from figure 1.1 in Computer Architecture: A Quantitative Approach (6th edn) by Hennessy & Patterson.

It’s remarkable that, despite the massive increase in microprocessor performance since the 1970’s, and major innovations in manufacturing technology, the underlying mode of operation of microprocessors remains the same. This is known by the shorthand of CMOS, for Complementary Metal Oxide Semiconductor. Logic gates are constructed from complementary pairs of field effect transistors consisting of a channel in heavily doped silicon, whose conductance is modulated by the application of an electric field across an insulating oxide layer from a metal gate electrode.

CMOS isn’t the only way of making a logic gate, and it’s not obvious that it is the best one. One severe limitation on our computing is its energy consumption. This matters at a micro level; the heat generated by a laptop or mobile phone is very obvious, and it was problems of heat dissipation that underlay the slowdown in the growth in microprocessor power around 2004. It’s also significant at a global level, where the energy used by cloud computing is becoming a significant share of total electricity consumption.

There is a physical lower limit to the energy that computing uses – this is the Landauer limit on the energy cost of a single logical operation, a consequence of the second law of thermodynamics. Our current technology consumes more than three orders of magnitude more energy than is theoretically possible, so there is room for improvement. Somewhere in the universe of technologies that don’t exist, but are physically possible, lies a superior computing technology to CMOS.
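For concreteness, the Landauer bound itself is easy to evaluate; the figure used below for a present-day logic operation is an assumed, purely illustrative value, not a measured one.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit of information
print(f"Landauer limit at {T:.0f} K: {landauer:.2e} J per operation")   # ~2.9e-21 J

# Assuming, purely for illustration, ~1e-17 J dissipated per logic operation in
# current hardware gives a gap of a few thousand times the Landauer limit,
# consistent with the "more than three orders of magnitude" quoted above.
print(f"Ratio for an assumed 1e-17 J operation: {1e-17 / landauer:.0f}x")
```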

Many alternative forms of computing have been tried out in the laboratory. Some involve different materials to silicon: compound semiconductors or new forms of carbon like nanotubes and graphene. In some, the physical embodiment of information is not electric charge, but spin. The idea of using individual molecules as circuit elements – molecular electronics – has a long and somewhat chequered history. None of these approaches has yet made a significant commercial impact; incumbent technologies are always hard to displace. CMOS and its related technologies amount to a deep nanotechnology implemented at a massive scale; the huge investment in this technology has in effect locked us into a particular technology path.

There are alternative, non-semiconductor based, computing paths that are worth mentioning, because they may become important in the future. One is to copy biology; our own brains deliver enormous computing power at remarkably low energy cost, with an architecture that is very different from the von Neumann architecture that human-built computers follow, and a basic unit that is molecular. Various radical approaches to computing take some inspiration from biology, whether that is the new architectures for CMOS that underlie neuromorphic computing, or entirely molecular approaches based on DNA.

Quantum computing, on the other hand, offers the potential for another exponential leap forward in computing power – in principle. Many practical barriers remain before this potential can be turned into practice, however, and this is a topic for another discussion. Suffice it to say that, on a timescale of a decade or so, quantum computers will not replace conventional computers for anything more than some niche applications, and in any case they are likely to be deployed in tandem with conventional high performance computers, as accelerators for specific tasks, rather than as general purpose computers.

Finally, I should return to the point that semiconductors aren’t just valuable for computing; the field of power electronics is likely to become more and more important as we move to a net zero energy system. We will need a much more distributed and flexible energy grid to accommodate decentralised renewable sources of electricity, and this needs solid-state power electronics capable of handling very high voltages and currents – think of replacing house-sized substations with suitcase-sized solid-state transformers. Widespread uptake of electric vehicles and the need for widely available rapid charging infrastructure will place further demands on power electronics. Silicon is not suitable for these applications, which require wide-bandgap semiconductors such as diamond, silicon carbide and other compound semiconductors.

Sources

Chip War: The Fight for the World’s Most Critical Technology, by Chris Miller, is a great overview of the history of this technology.

Semiconductors in the UK: Searching for a strategy. Geoffrey Owen, Policy Exchange, 2022. Very good on the history of the UK industry.

To Every Thing There is a Season – lessons from the Alvey Programme for Creating an Innovation Ecosystem for Artificial Intelligence, by Luke Georghiou. Reflections on the Alvey Programme by one of the researchers who carried out its official evaluation.

Are Ideas Getting Harder to Find?, Bloom, Jones, Van Reenen and Webb. American Economic Review (2020). An influential paper on diminishing rates of return on R&D, taking the semiconductor industry as a case study.

Quantum Computing: Progress and Prospects (2019), National Academies Press.

Up next: What should the UK do about semiconductors? Part 3: towards a UK semiconductor strategy

Science and innovation policy for hard times

This is the concluding section of my 8-part survey of the issues facing the UK’s science and innovation system, An Index of Issues in UK Science and Innovation Policy.

The earlier sections were:
1. The Strategic Context
2. Some Overarching Questions
3. The Institutional Landscape
4. Science priorities: who decides?
5. UK Research and Innovation
6. UK Government Departmental Research
7. Horizon Europe (and what might replace it) and ARIA

8.1. A “science superpower”? Understanding the UK’s place in the world.

The idea that the UK is a “science superpower” has been a feature of government rhetoric for some time, most recently repeated in the Autumn Statement speech. What might this mean?

If we measure superpower status by the share of world resources devoted to R&D (both public and private) by single countries, there are only two science superpowers today – the USA and China, with 30% and 24% shares of science spending respectively (OECD MSTI figures for 2019 adjusted for purchasing power parity, including all OECD countries plus China, Taiwan, Russia, Singapore, Argentina and Romania). If we take the EU as a single entity, that might add a third, with a 16% share (2019 figure, but excluding the UK). The UK’s share is 2.5% – a respectable medium-sized science power, less than Japan (8.2%) and Korea (4.8%), and sitting between France (3.1%) and Canada (1.4%).

It’s often argued, though, that the UK achieves better results from a given amount of science investment than other countries. The primary outputs of academic science are scientific papers, and we can make an estimate of a paper’s significance by asking how often it is cited by other papers. So another measure of the UK’s scientific impact – the most flattering to the UK, it turns out – is to ask what fraction of the world’s most highly cited papers originate from the UK.

By this measure, the two leading scientific superpowers are, once again, the USA and China, with 32% and 24% shares respectively; the EU collectively, at 29%, does better than China. The UK scores well here, at 13.4%, doing substantially better than higher-spending countries like Japan (3.1%) and Korea (2.7%).
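To make the “better results for a given spend” point concrete, here is a minimal sketch in Python that simply collates the shares quoted above and computes, purely for illustration, the ratio of each country’s share of highly cited papers to its share of R&D spending. The ratio is my own crude construction, not an official metric, and the only inputs are the figures quoted in the two preceding paragraphs.

    # Shares (%) of world R&D spending (OECD MSTI, 2019, PPP-adjusted) and of
    # the world's most highly cited papers, as quoted in the text.
    spending_share = {"USA": 30, "China": 24, "EU": 16, "UK": 2.5,
                      "Japan": 8.2, "Korea": 4.8}
    cited_share = {"USA": 32, "China": 24, "EU": 29, "UK": 13.4,
                   "Japan": 3.1, "Korea": 2.7}

    # Crude ratio of citation share to spending share, for illustration only.
    for country in spending_share:
        ratio = cited_share[country] / spending_share[country]
        print(f"{country}: {ratio:.1f}")

    # The UK's ratio (about 5.4) is far higher than the USA's (about 1.1) or
    # Japan's (about 0.4), which is the sense in which the UK gets unusually
    # good academic results for its level of spending.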

A strong science enterprise – however measured – doesn’t necessarily by itself translate into wider kinds of national and state power. Before taking the “science superpower” rhetoric seriously, we need to ask how these measures of scientific activity and scientific impact translate into other measures of power, hard or soft.

Even though measuring the success of our academic enterprise by its impact on other academics may seem somewhat self-referential, it does have some consequences in supporting the global reputation of the UK’s universities. This attracts overseas students, in turn bringing three benefits: a direct and material economic contribution to the balance of payments, worth £17.6 bn in 2019; a substantial subsidy to the research enterprise itself; and, for those students who stay, a source of talented immigrants who subsequently contribute positively to the economy.

The transnational nature of science is also significant here; having a strong national scientific enterprise provides a connection to this wider international network and strengthens the nation’s ability to benefit from insight and discoveries made elsewhere.

But how effective is the UK at converting its science prowess into hard economic power? One measure of this is the share of world economic value added in knowledge and technology intensive businesses. According to the USA’s NSF, the UK’s share of value added in this set of high productivity manufacturing and services industries that rely on science and technology is 2.6%. We can compare this with the USA (25%), China (25%), and the EU (18%). Other comparator countries include Japan (7.9%), Korea (3.7%) and Canada (1.2%).

Does it make sense to call the UK a science superpower? On both the input measure – the UK’s share of the world’s R&D resources – and the size of the industrial base that this science underpins, the UK is an order of magnitude smaller than the world leaders. In the historian David Edgerton’s very apt formulation, the UK is a large Canada, not a small USA.

Where the UK does outperform is in the academic impact of its scientific output. This does, in itself, confer some non-negligible soft power benefits. The question to ask now is whether more can be done to deploy this advantage to address the big challenges the nation now faces.

8.2. The UK can’t do everything

The UK’s current problems are multidimensional and its resources are constrained. With less than 3% of the world’s research and development resources, no matter how effectively these resources are deployed, the UK will have to be selective in the strategic choices it makes about research priorities.

In some areas, the UK may have some special advantages, either because the problems/opportunities are specific to the UK, or because history has given the UK a comparative advantage in a particular area. One example of the former might be the development of technologies for exploiting deep-water floating offshore wind power. In the latter category, I believe the UK does retain an absolute advantage in researching nuclear fusion power.

In other areas, the UK will do best by being part of larger transnational research efforts. At the applied end, these can in effect be led by multinational companies with a significant presence in the UK. Formal inter-governmental collaborations are effective in areas of “big science” – which combine fundamental science goals with large scale technology development. For example, in high energy physics the UK has an important presence in CERN, and in radio astronomy the UK hosts the headquarters of the Square Kilometre Array Observatory. Horizon Europe offered the opportunity to take part in trans-European public/private collaborations on a number of different scales, and if the UK isn’t able to associate with Horizon Europe, other ways of developing international collaborations will have to be built.

But there will remain areas of technology where the UK has lost so much capability that the prospect of catching up with the world frontier is probably unrealistic. Perhaps the hardware side of CMOS silicon technology is in this category (though significant capability in design remains).

8.3. Some pitfalls of strategic and “mission driven” R&D in the UK

One recently influential approach to defining research priorities links them to large-scale “missions”, connected to significant areas of societal need – for example, adapting to climate change, or ensuring food security. This has been a significant new element in the design of the current EU Horizon Programme (see EU Missions in Horizon Europe).

For this approach to succeed, there needs to be a match between the science policy “missions” and a wider, long term, national strategy. In my view, there also needs to be a connection to the specific and concrete engineering outcomes that are needed to make an impact on wider society.

In the UK, there have been some moves in this direction. The research councils in 2011 collectively defined six major cross-council themes (Digital Economy; Energy; Global Food Security; Global Uncertainties; Lifelong Health and Wellbeing; Living with Environmental Change), and steered research resources into (mostly interdisciplinary) projects in these areas. More recently, UKRI’s Industrial Strategy Challenge Fund was funded from a “National Productivity Investment Fund” introduced in the 2016 Autumn Statement and explicitly linked to the Industrial Strategy.

These previous initiatives illustrate three pitfalls of strategic or “mission driven” R&D policy.

  • The areas of focus may be explicitly attached to a national strategy, but that strategy proves to be too short-lived, and the research programmes it inspires outlive the strategy itself. The Industrial Strategy Challenge Fund was linked to the 2017 Industrial Strategy, but this strategy was scrapped in 2021, despite the fact that the government was still controlled by the same political party.
  • Research priorities may be connected to a lasting national priority, but the areas of focus within that priority are not sufficiently specified. This leads to a research effort that risks being too diffuse, lacking a commitment to a few specific technologies and not sufficiently connected to implementation at scale. In my view, this has probably been the case in too much research in support of low-carbon energy.
  • In the absence of a well-articulated strategy from central government, agencies such as Research Councils and Innovate UK guess what they think the national strategy ought to be, and create programmes in support of that guess. This then risks lacking legitimacy, longevity, and wider join-up across government.

In summary, mission driven science and innovation policy needs to be informed by a carefully thought-through national strategy that commands wide support, is applied across government, and is sustained over the long term.

8.4. Getting serious about national strategy

The UK won’t be able to use the strengths of its R&D system to solve its problems unless there is a settled, long-term view about what it wants to achieve. What kind of country does the UK want to be in 2050? How does it see its place in the world? In short, it needs a strategy.

A national strategy needs to cut across a number of areas. There needs to be an industrial strategy, about how the country makes a living in the world, how it ensures the prosperity of its citizens and generates the funds needed to pay for its public services. An energy strategy is needed to navigate the wrenching economic transition that the 2050 Net Zero target implies. As our health and social care system buckles under the short-term aftermath of the pandemic, and faces the long-term challenge of an ageing population, a health and well-being strategy will be needed to define the technological and organisational innovation needed to yield an affordable and humane health and social care system. And, after the lull that followed the end of the cold war, a strategy to ensure national security in an increasingly threatening world must return to prominence.

These strategies need to reflect the real challenges that the UK faces, as outlined in the first part of this series. The goals of industrial strategy must be to restore productivity growth and to address the UK’s regional economic imbalances. Innovation and skills must be a central part of this, and given the condition large parts of the UK find themselves in, there need to be conscious efforts to rebuild innovation and manufacturing capacity in economically lagging regions. There needs to be a focus on increasing the volume of high value exports (both goods and services) that are competitive on world markets. The goal here should be to start to close the balance of payments gap; in addition, exposure to international competitive pressure will bring productivity improvements.

An energy strategy needs to address both the supply and demand side to achieve a net zero system by 2050, and to guarantee security of supply. It needs to take a whole systems view at the outset, and to be discriminating in deciding which aspects of the necessary technologies can be developed in the UK, and which will be sourced externally. Again, the key will be specificity. For example, it is not enough to simply promote hydrogen as a solution to the net zero problem – it’s a question of specifying how it is made, what it is used for, and identifying which technological problems are the ones that the UK is in a good position to focus on and benefit from, whether that might be electrolysis, manufacture of synthetic aviation fuel, or whatever.

A health and well-being strategy needs to clarify the existing conceptual confusion about whether the purpose of a “Life Sciences Strategy” is to create high value products for export, or to improve the delivery of health and social care services to the citizens of the UK. Both are important, and in a well-thought through strategy each can support the other. But they are distinct purposes, and success in one does not necessarily translate to success in the other.

Finally, a security strategy should build on the welcome recognition of the 2021 Integrated Review that UK national security needs to be underpinned by science and technology. The traditional focus of security strategy is on hard power, and this year’s international events remind us that this remains important. But we have also learnt that the resilience of the material base of the economy can’t be taken for granted. We need a better understanding of the vulnerabilities of the supply chains for critical goods (including food and essential commodities).

The structure of government leads to a tendency for strategies in each of these areas to be developed independently of each other. But it’s important to understand the way these strategies interact with each other. We won’t have any industry if we don’t have reliable and affordable low carbon energy sources. Places can’t improve their economic performance if large fractions of their citizens can’t take part in the labour market due to long-term ill-health. Strategic investments in the defence industry can have much wider economic spillover benefits.

For this reason it is not enough for individual strategies to be left to individual government departments. Nor is our highly centralised, London-based government in a position to understand the specific needs and opportunities to be found in different parts of the country – there needs to be more involvement of devolved nation and city-region governments. The strategy needs to be truly national.

8.5. Being prepared for the unexpected

Not all science should be driven by a mission-driven strategy. It is important to maintain the health of the basic disciplines, because this provides resilience in the face of unwelcome surprises. In 2019, we didn’t realise how important it would be to have some epidemiologists to turn to. Continuing support for the core disciplines of physical, biological and medical science, engineering, social science and the humanities should remain a core mission of the research councils; the strength of our universities is something we should preserve and be proud of, and their role in training the researchers of the future will remain central.

Science and innovation policy also needs to be able to create the conditions that produce welcome surprises, and then exploit them. We do need to be able to experiment in funding mechanisms and in institutional forms. We need to support creative and driven individuals, and to recognise the new opportunities that new discoveries anywhere in the world might offer. We do need to be flexible in finding ways to translate new discoveries into implemented engineering solutions, into systems that work in the world. This spirit of experimentation could be at the heart of the new agency ARIA, while the rest of the system should be flexible enough to adapt and scale up any new ways of working that emerge from these experiments.

8.7. Building a national strategy that endures

A national strategy of the kind I called for above isn’t something that can be designed by the research community; it needs a much wider range of perspectives if, as is necessary, it’s going to be supported by a wide consensus across the political system and wider society. But innovation will play a key role in overcoming our difficulties, so there needs to be some structure to make sure insights from the R&D system are central to the formulation and execution of this strategy.

The new National Science and Technology Council, supported by the Office for Science and Technology Strategy, could play an important role here. Its position at the heart of government could give it the necessary weight to coordinate activities across all government departments. It would be a positive step if there was a cross-party commitment to keep this body at the heart of government; it was unfortunate that, with the Prime Ministerial changes over the summer and autumn, the body was downgraded and subsequently restored. To work effectively, its relationships with the Government Office for Science and the Council for Science and Technology need to be clarified.

UKRI should be able to act as an important two-way conduit between the research and development community and the National Science and Technology Council. It should be a powerful mechanism for conveying the latest insights and results from science and technology to inform the development of national strategy. In turn, its own priorities for the research it supports should be driven by that national strategy. To fulfil this function, UKRI will have to develop the strategic coherence that the Grant Review has found to be currently lacking.

The 2017 Industrial Strategy introduced the Industrial Strategy Council as an advisory body; this was abruptly wound up in 2021. There is a proposal to reconstitute the Industrial Strategy Council as a statutory body, with a similar status – official but independent of government – to the Office for Budget Responsibility or the Climate Change Committee. This would be a positive way of subjecting policy to a degree of independent scrutiny, holding the government of the day to account, and ensuring some of the continuity that has been lacking in recent years.

8.8. A science and innovation system for hard times

Internationally, the last few years have seen a jolting series of shocks to the optimism that had set in after the end of the cold war. We’ve had a worldwide pandemic, there’s an ongoing war in Europe involving a nuclear-armed state, we’ve seen demonstrations of the fragility of global supply chains, and the effects of climate change are becoming ever more obvious.

The economic statistics show decreasing rates of productivity growth in all developed countries; there’s a sense of the worldwide innovation system beginning to stall. And yet one can’t fail to be excited by rapid progress in many areas of technology; in artificial intelligence, in the rapid development and deployment of mRNA vaccines, in the promise of new quantum technologies, to give just a few examples. The promise of new technology remains, yet the connection to the economic growth and rising living standards that we came to take for granted in the post-war period seems to be broken.

The UK demonstrates this contrast acutely. Despite some real strengths in its R&D system, its economic performance has fallen well behind key comparator nations. Shortcomings in its infrastructure and its healthcare system are all too obvious, while its energy security looks more precarious than for many years. There are profound disparities in regional economic performance, which hold back the whole country.

If there was ever a time when we could think of science as being an ornament to a prosperous society, those times have passed. Instead, we need to think of science and technology as the means by which our society becomes more prosperous and secure – and adapt our science and technology system so it is best able to achieve that goal.

From self-stratifying films to levelling up: A random walk through polymer physics and science policy

After more than two and a half years at the University of Manchester, last week I finally got round to giving an in-person inaugural lecture, which is now available to watch on YouTube. The abstract follows:

How could you make a paint-on solar cell? How could you propel a nanobot? Should the public worry about the world being consumed by “grey goo”, as portrayed by the most futuristic visions of nanotechnology? Is the highly unbalanced regional economy of the UK connected to the very uneven distribution of government R&D funding?

In this lecture I will attempt to draw together some themes both from my career as an experimental polymer physicist, and from my attempts to influence national science and innovation policy. From polymer physics, I’ll discuss the way phase separation in thin polymer films is affected by the presence of surfaces and interfaces, and how in some circumstances this can result in films that “self-stratify” – spontaneously separating into two layers, a favourable morphology for an organic solar cell. I’ll recall the public controversies around nanotechnology in the 2000s. There were some interesting scientific misconceptions underlying these debates, and addressing these suggested some new scientific directions, such as the discovery of new mechanisms for self-propelling nano- and micro-scale particles in fluids. Finally, I will cover some issues around the economics of innovation and the UK’s current problems of stagnant productivity and regional inequality, reflecting on my experience as a scientist attempting to influence national political debates.

Is the UK economy more R&D intensive than we’ve thought?

1. On the discrepancy between ONS and HMRC estimates of business R&D.

In the UK, there are two ways in which the total amount of business R&D (BERD) is measured. The Office for National Statistics conducts an annual survey of businesses, in which a sample of firms is asked to report how much R&D they have carried out. Meanwhile, firms can report what R&D they have carried out to the taxman – HMRC – in order to claim R&D tax credits, which, depending on circumstances, can take the form of a reduction in their liability for corporation tax or an actual cash payment. In recent years, the two measures of business R&D have increasingly diverged, with substantially more R&D expenditure being claimed for tax credits than is reported in the BERD survey.

The divergence between HM Revenue and Customs (HMRC) and Business enterprise research and development (BERD) estimates of research and development (R&D) expenditure. Source: ONS.

The ONS has been looking into this divergence, and has recently published a note which concludes that the primary reason for the discrepancy is an undersampling of the small business population. On this basis, it has adjusted its previous estimate for business R&D substantially upwards – in 2020, the revision is from £26.9 bn to £43 bn. In future years, ONS will introduce improved, more robust, methodologies that will include a wider range of SMEs in the sample they survey.

In principle, there could be two possible causes for the growing divergence between the total business R&D recorded by the ONS BERD survey and the amounts underlying claims to HMRC for R&D tax credits:

a. The incentives of R&D tax credits have caused businesses to stretch the definition of R&D so they can get money for activities that are part of normal business (e.g. market research, working out how to use new equipment). This is exacerbated by the growth of an industry of consultants offering their services to firms to help them claim this money (in return for a percentage of the sums claimed).

b. The ONS survey of firms (the BERD survey) has systematically undersampled a population of small and medium enterprises (SMEs), which turn out to have more R&D activity than previously believed.

In favour of (a) – the discrepancy between the two measures hasn’t been static, as you’d expect if it were simply a question of missing a population of firms who had always been doing R&D at a constant rate, but who have only just been discovered. The gap has risen from £7.3 bn in 2014 to £16.6 bn in 2018. So for explanation (b) to hold on its own, we need to believe not only that there is an existing population of SMEs carrying out R&D that has previously been undetected, but that this population has been substantially growing. Is R&D growth in the SME sector at a rate of £2.3 bn a year plausible? I’m not sure.
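The arithmetic behind that £2.3 bn a year figure is worth setting out explicitly; a minimal sketch in Python, using only the gap figures quoted above:

    # Gap between R&D claimed for tax credits and R&D recorded in the BERD
    # survey, in £ bn, as quoted in the text.
    gap_2014 = 7.3
    gap_2018 = 16.6

    annual_growth = (gap_2018 - gap_2014) / (2018 - 2014)
    print(f"Average growth of the gap: about £{annual_growth:.1f} bn per year")

    # Roughly £2.3 bn a year: for explanation (b) alone to account for the
    # divergence, the previously unsampled SME population would need to have
    # been growing its R&D at about this rate.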

Moreover, the incentives for stretching the definition of R&D to claim free money are obvious. HMRC accept that some claims are outright fraudulent, estimating that 4.9% of the cost of the scheme is attributable to error and fraud. But there’s a big grey area between outright fraud and creative interpretation of the “Frascati” definitions of R&D.

ONS argues in favour of (b), backing this up with a detailed comparison of the microdata from the ONS survey and HMRC’s returns. To add some anecdotal support, work in Greater Manchester in collaboration with a data science consultancy does seem to have identified a population of innovative SMEs in GM which has previously remained invisible, in the sense that these are firms who don’t engage with universities or with Innovate UK.

In truth, the real answer is probably some mixture of the two. We’ll learn more once the new methodology has produced a complete data set identifying the sectors and geographical locations of R&D performing firms.

2. Policy implications

Figures for total R&D spending (including both business and public sector R&D) as a proportion of GDP provide a useful measure of the overall research intensity of the UK economy and form the basis for international comparisons. The previous figure for R&D intensity – about 1.7% – put the UK between the Czech Republic and Italy. The new estimates suggest a revised figure of 2.4%, which would put the UK roughly on a par with Belgium, slightly above France, but behind the USA and Germany, and still a long way behind leaders like Korea and Israel. Of course, when making these international comparisons, a natural question is how accurate the R&D statistics are in these other countries. This is a good question that could be investigated by the OECD, which collates international R&D statistics.
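As a rough cross-check on that revised figure, the arithmetic runs something like the sketch below. The roughly £2.1 trillion GDP figure for 2020 is my own approximate assumption; the BERD numbers and the 1.7% starting point are those quoted above, so this is only an illustration of why a £16 bn uplift in business R&D moves the headline intensity by the best part of a percentage point.

    # Back-of-envelope check on how the BERD revision moves UK R&D intensity.
    gdp_2020 = 2100.0                 # £ bn, assumed approximate 2020 nominal GDP
    berd_old, berd_new = 26.9, 43.0   # £ bn, ONS figures quoted above
    old_intensity = 1.7               # % of GDP, pre-revision figure quoted above

    uplift_points = 100 * (berd_new - berd_old) / gdp_2020
    print(f"BERD uplift adds about {uplift_points:.1f} percentage points")
    print(f"Implied new intensity: about {old_intensity + uplift_points:.1f}% of GDP")

    # About 0.8 percentage points, taking intensity from roughly 1.7% to
    # roughly 2.5%, broadly consistent with the revised 2.4% figure.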

The international comparison drove a target for R&D intensity that the government committed to – that the UK would achieve an R&D intensity equal to the OECD average. At the time the target was formulated, this average was indeed 2.4%. However, the OECD average is a moving target, since other countries are increasing their own R&D – it’s now above 2.5%. One can also ask whether a target to achieve international mediocrity is stretching enough.

There are more fundamental issues with the idea of having an R&D intensity target at all. One quirk of expressing the target as a % of GDP is that one can achieve it by driving down the denominator; certainly GDP growth in the UK has been disappointing for the last 12 years, as the Prime Minister has reminded us. One could argue that a numerical target for R&D is arbitrary and one should concentrate more on the instrumental outcomes one wants to achieve from the research – higher growth, more rapid and cost effective progress towards net zero, better population health outcomes etc. As I wrote myself recently in my survey of the UK R&D landscape:

“An R&D target should be thought of not as an end in itself, but as a means to an end. We should start by asking what kind of economy do we need, if we are to meet the big strategic goals that I discussed in the first part of this series. Given a clearer view about that, we’ll have a better understanding of the necessary fraction of national resources that we should devote to research and development. I don’t know if that would produce the exact figure of 2.4%, but I wouldn’t be surprised if it was significantly higher.”

Perhaps the most problematic implication of the BERD upgrade concerns the enduring puzzle of why productivity growth remains so slow. This extra, previously unrecorded R&D doesn’t seem to have translated into productivity growth in the way we would expect.

This raises the broader question of why we think the government should support business R&D at all, whether through R&D tax credits or through other means. The classical argument is that private sector R&D produces wider benefits for the economy that aren’t captured by the firms that make the investments, so in the absence of government support firms will invest less in R&D than would be socially optimal. This leads to the question of whether all kinds of R&D, in all kinds of company (e.g. large and small), lead to equal degrees of wider spillover effects (and the same question can be asked of intangible investments more generally). If the kinds of R&D that are now being revealed by the new methodology do have smaller spillovers than other types, one might ask what kind of interventions could increase those spillovers.

3. Political implications

As others have observed, the chief danger of the revision is that in times of fiscal retrenchment, the government could declare “mission accomplished” and delay or cancel increases in public R&D. This danger seems very real given the direction of the current government. The opposition, on the other hand, has called for an R&D target of 3% of GDP, so there is plenty of room there.

There is an argument that the revision suggests that public R&D is even more effective than we thought in generating private sector R&D – that the leverage effect is stronger than previously estimated. For this argument to be convincing, we’d need to understand the degree to which the companies doing this R&D are connected to the wider innovation system. But it doesn’t then support the wider argument for R&D as a driver of productivity growth – we have the R&D intensity we aspired to, so why aren’t we seeing the benefits in the productivity figures?

There are possible arguments that our focus in business R&D has been too much on the big incumbents – the GSKs and Rolls-Royces – whose R&D is very visible. On the other hand, this connects to the long-running question of why we don’t have more of those big incumbents. At this point, we should recall that there are only two UK companies in the world top 100 of R&D performers – AstraZeneca and GSK. So why aren’t some of these previously unseen R&D intensive companies scaling up to become the new big players?

There is much yet to understand here.

An index of issues in UK science and innovation policy – part 7: Horizon Europe (and what might replace it) and ARIA

In the first part of this series attempting to sum up the issues facing UK science and innovation policy, I tried to set the context by laying out the wider challenges the UK government faces, asking what problems we need our science and innovation system to contribute to solving.

In the second part of the series, I posed some of the big questions about how the UK’s science and innovation system works, considering how R&D intensive the UK economy should be, the balance between basic and applied research, and the geographical distribution of R&D.

In the third part, I discussed the institutional landscape of R&D in the UK, looking at where R&D gets done in the UK.

In the fourth part, looking at the funding system, I considered who pays for R&D, and how decisions are made about what R&D to do.

In the fifth part, I looked in more detail at UK Research and Innovation, the government’s main agency for funding academic science.

In the sixth part, I looked at the other routes by which the UK government funds R&D, particularly through government departments.

In this, the final section of my survey of the routes by which the UK government funds R&D, I turn to two areas with the most uncertainty. The first of these is the future of the UK’s participation in the EU Horizon programme. I’ll discuss the distinctive roles of EU funding, and what might replace it in the increasingly likely scenario that the UK is not able to associate. The second is the new Advanced Research and Invention Agency, set up by Act of Parliament in early 2022 and currently just establishing itself; here I’ll offer some early thoughts about the role it might play in the overall system.

7.1. Horizon Europe – past participation and future prospects of association

In the past, the UK government has funded R&D indirectly through the EU Horizon programme, which provided research grants to UK researchers in HE and to UK businesses, often as part of larger collaborative programmes with researchers and businesses from elsewhere in Europe. EU research funding to UK universities and businesses has been on a very material scale; of course ultimately this money came from the UK’s contributions to the overall budget. In the UK’s national accounts, this was accounted for by a notional cost that reached a high point of £1.46 billion in 2019.

Because EU research money was allocated competitively, there wasn’t a direct relationship between the money the UK put into the budget and the research money the UK received. In fact, because of the UK’s relative research strength, the UK got back significantly more money than it put in. According to an analysis of the 2007-2013 cycle, the UK’s indicative contribution to the budget was €5.4 bn, but it received €8.8 bn of funding for research, development and innovation.
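To put that in proportion, here is a quick sketch using just the two figures quoted above:

    # UK contribution to, and receipts from, the EU R&D budget over the
    # 2007-2013 cycle, in € bn, as quoted in the text.
    contribution = 5.4
    receipts = 8.8

    print(f"Net gain: about €{receipts - contribution:.1f} bn")
    print(f"Return per euro contributed: about €{receipts / contribution:.2f}")

    # Roughly €1.63 back for every €1 contributed over that cycle.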

After the UK decided to leave the EU, a consensus developed that the UK should seek to stay associated with the EU’s R&D programmes, an option already taken up by other non-member states such as Switzerland, Norway and Israel. The Trade and Cooperation Agreement between the EU and the UK contained a draft protocol establishing the UK’s association with Horizon Europe (with the exception of the European Innovation Council). “The Parties affirm that the draft protocols set out below have been agreed in principle and will be submitted to the Specialised Committee on Participation in Union Programmes for discussion and adoption. The United Kingdom and European Union reserve their right to reconsider participation in the programmes, activities and services listed in Protocols [I and II] before they are adopted since the legal instruments governing the Union programmes and activities may be subject to change. The draft protocols may also need to be amended to ensure their compliance with these instruments as adopted.”

If the UK does associate, it will need to contribute financially to the Horizon Europe programme. In contrast to the situation when the UK was a member state, when it received more back from EU R&D programmes than it notionally contributed, as an associated country it would need to cover not only the full cost of R&D activities funded in the UK through Horizon Europe, but also a substantial additional overhead. The money for this was set aside in the 2021 Comprehensive Spending Review; it amounted to £1.3 bn in 21/22, rising to £2.1 bn in 24/25.

As I write, the draft protocol has not yet been finalised by the EU side, and given the wider political situation, it seems increasingly unlikely that it will be finalised any time soon. The UK government made a commitment at the time of the 2021 CSR that, in the event of the UK not associating, the money set aside would be retained in the science budget, redeployed in a set of programmes that reproduced the benefits of EU association – the so-called “Plan B”.

On July 20th, the government released more details of “Plan B”, restating the commitment to use the Horizon money for alternative science programmes. “In the event we are unable to associate, we will use the funding allocated to Horizon Europe at the 2021 Spending Review to build on our existing R&D programmes with flagship new domestic and international research and innovation investments to support top talent, drive end-to-end innovation and foster international collaboration with EU and global partners.”

7.2. The Three Pillars of Horizon Europe

The EU’s R&D programmes are agreed for seven year cycles; the current cycle – Horizon Europe – assigns €95.5 billion for the period from 2021-27. The overall goals of the programme are specified in terms of the strategic goals of the European Union – tackling climate change, meeting the UN’s Sustainable Development Goals, and boosting the EU’s competitiveness and economic growth.

To support these broad goals, Horizon Europe supports three “Pillars”. The first of these is “Excellent Science”. This includes the European Research Council, together with schemes supporting early career researchers and collaborative research and training for PhD students. The European Research Council supports investigator-led basic science and humanities research; it has a very high reputation in the scientific community, for reasons I’ll discuss below. However, it is important to remember that it is a relatively small part of the overall Horizon programme – it has been allocated €16 bn in the current cycle.

The second pillar is for “Global Challenges and European Industrial Competitiveness”, which supports research collaborations built around sectors, challenges and missions. These typically involve both academic and industrial researchers in multinational collaborations.

The third pillar is new to the current cycle – “Innovative Europe” is focused on developing more high tech start-up companies, with a new “European Innovation Council”, a “European Institute of Innovation and Technology”, and support for regional innovation ecosystems. In the event of association, the UK will opt out of the “EIC accelerator” – that part of pillar 3 which provides investment funding to companies.

Underpinning the whole programme is an aspiration to create a “European Research Area”, with free and easy movement of people and research groups across the continent, lubricated by exchange schemes for scientists (particularly at early career stages) and cross-border transferability of grants. In the past the UK has benefitted from this, with a scientific and institutional infrastructure that has made the country an attractive destination for scientists from other European countries.

7.3. Why scientists love the European Research Council

Amongst elite scientists in the UK, the main driving force behind the enthusiasm for associating with Horizon Europe is the wish to continue participating in the European Research Council. This, in part, simply reflects how successful the UK has been in winning competitive funding through this route. For example, in the competition for the most established researchers – the Advanced Grants, which provide €2.5 million over 5 years for a single investigator and their team – UK-based researchers won 22% of all grants between 2008 and 2020, compared with 16% and 12% for the next two most successful nations, Germany and France respectively (source).

But beyond the self-interest of UK scientists, why is the European Research Council so highly thought of? It has a clarity of purpose, with a single-minded focus on investigator-driven basic research, with no predetermined priorities, but with an emphasis on supporting high risk/high gain proposals. It is correctly perceived as highly competitive, attracting proposals from the most outstanding researchers across Europe – currently its grantees have won nine Nobel prizes. Its decisions are made by a peer review process which is widely considered to be fair, rigorous and well executed.

Peer review isn’t easy to do well. In section 2 of this series, in discussing a possible world-wide slow-down in scientific productivity, I mentioned the suggestion that peer review can lead to conservatism and can suppress radical new ideas. In section 5, I suggested that there was a lack of confidence in the scientific community in the credibility of the peer review systems that the UK Research Councils run. In the light of these concerns, it’s worth asking what the European Research Council gets right about peer review (while recognising that even the ERC’s process is probably not perfect, for example in tricky areas like handling highly interdisciplinary proposals).

In my opinion, there’s nothing magic about the ERC’s approach to peer review. The process involves committees of experts (and, to declare a personal interest, I recently served on the expert panel for Advanced Grants in my own field of Condensed Matter Physics). Those panels invite written comments on proposals from specialists around the world, chosen for their suitability to judge each individual proposal. In a final meeting, the panels consider the referees’ reports, with interviews with the proposers to give them the chance to respond to criticisms, and come to a collective judgement about which proposals to give highest priority for funding.

What makes this work? The starting point must be high quality panels, with a good range of expertise, the ability to take a broad view, and an effective chair. At its best, the ERC has developed a virtuous circle, in which the high quality of the proposals means that outstanding scientists are prepared to put the time in to serve on panels, while in turn it is the credibility of the process that attracts applications from the best scientists across a whole continent. It is the researchers on the panels who select the remote referees, using their knowledge of the field to choose the most appropriate ones, and then applying their own critical scientific judgement to resolve any discrepancies and differences of opinion between referees. Sufficient time is set aside for in-depth decisions – a single proposal round will involve two panel meetings, each of which can take up to a week.

Meanwhile, administrative support is provided by high quality subject specialists working full-time for the ERC as programme managers. In the UK, the research councils were forced to make serious cuts to their office staff in the early 2010s, because it was mistakenly believed that these subject specialists represented an administrative overhead, rather than being a precondition for the most effective allocation of R&D funding. This mistake should not be repeated (and, indeed, should be corrected).

7.4. “Plan B” for non-association

The “Plan B” document published this July (Supporting UK R&D and collaborative research beyond European programmes) usefully sets out some principles for how the money set aside for association with Horizon Europe will be used in the event that association doesn’t materialise. But details of implementation remain sketchy, and delivery may prove challenging for the existing agencies and bodies that will be charged with executing these schemes.

These agencies are mostly within UKRI, with a particularly important role for Innovate UK; the National Academies may also play a part in the “talent” schemes. These are largely fellowships at various career stages, which will in part fill the role of the European Research Council, though without the benefits of the institutional strength that the ERC has developed, as outlined in the last section.

The emphasis of measures taken so far has been on stabilising the system, in particular keeping in the UK outstanding scientists who have been awarded ERC grants, but who can’t take them up without moving to an EU member state. The commitment has been made to guarantee the funding of any Horizon Europe grant awarded to UK-based researchers for the lifetime of the grant. It is going to be important to ensure that this happens without bureaucratic hurdles, in perception or reality, as HE institutions in the EU will be making energetic efforts to recruit these researchers.

The last point emphasises the importance of making sure the UK remains an attractive destination for overseas scientists, and of promoting researcher mobility so that the UK stays centrally integrated in international networks of expertise. The plan here remains vague, but states the intention to fund “bottom-up collaborations with researchers in partner countries around the globe; multilateral and bilateral collaborations; and Third Country Participation in Horizon Europe”.

Measures for supporting business R&D will be funnelled through Innovate UK; it seems these will largely build on existing schemes. The aim is to support both domestic and international collaborations. The international dimension will be particularly important in supporting high technology SMEs to participate in trans-national supply chains and innovation systems, many of which, of course, involve EU member states.

The local and regional dimension of support for innovation systems is also important. EU funding – including structural funding as well as direct R&D funding – has been important in developing clusters in economically lagging parts of the UK, such as Northern England, Wales and Northern Ireland. The Shared Prosperity Fund is likely to offer only a partial substitute for EU structural funds, so it is encouraging to see a commitment to drive “the development of emerging clusters throughout the UK”, and the statement that the “Plan B” portfolio “will support our mission of levelling up the UK and build on our commitment to increase domestic R&D investment outside of the Greater Southeast by at least a third over the spending review period and at least 40% by 2030”.

Moving forward with the association of the UK with Horizon Europe would seem to require a breakthrough in wider EU/UK relations that currently doesn’t seem very likely. In the absence of such a breakthrough, the priority needs to be for the new administration to confirm the funding of “Plan B”, and to move very quickly to turn what are currently rather high-level plans into deliverable programmes.

7.5. The Advanced Research and Invention Agency (ARIA)

The most recent addition to the UK’s R&D funding landscape is the new funding agency, the Advanced Research and Invention Agency. This was established by an Act of Parliament, finalised in early 2022. It was a personal priority of the Prime Minister’s former chief advisor, Dominic Cummings, who emphasised the need to have a funding agency with the freedom to take big risks, modelled loosely on the US agency ARPA. ARPA was set up in the late 1950s to ensure technological supremacy for the US armed forces, and research it supported has underpinned world-changing technological innovations such as the internet, the satellite location system that GPS evolved from, and stealth aircraft.

The Act of Parliament establishing ARIA does indeed give it a huge amount of latitude in defining its goals and modes of operation; much is left to the discretion of the CEO and the board. The major lever the government retains is the level of funding allocated; the initial commitment is to spend £800m by 24/25. This is a relatively small amount seen in the context of the £20 billion total R&D budget planned for 24/25. Nonetheless, given that we’re already halfway through 22/23, that leaves only two years to get some entirely new programmes off the ground.

The Act does give the Secretary of State powers of intervention on grounds of national security, and it is easy to imagine that these could be used quite widely. Nonetheless, there is some irony in the way the independence from government that was taken away from the Research Councils has been given to this new agency.

Given that the appointments of the Chief Executive and Chair have only relatively recently been announced, there is not yet clarity about what the new agency will do. I outlined my own views about how such an agency should operate in a piece from January 2020, UK ARPA: an experiment in science policy.

As I wrote then, “If we want to support visionary research, whose applications may be 10-20 years away, we should be prepared to be innovative – even experimental – in the way we fund research. And just as we need to be prepared for research not to work out as planned, we should be prepared to take some risks in the way we support it, especially if the result is less bureaucracy. There are some lessons to take from the long (and, it needs to be stressed, not always successful) history of ARPA/DARPA. To start with its operating philosophy, an agency inspired by ARPA should be built around the vision of the programme managers. But the operating philosophy needs to be underpinned by an enduring mission and clarity about who the primary beneficiaries of the research should be. And finally, there needs to be a deep understanding of how the agency fits into a wider innovation landscape.”

My starting point would be to recognise that pluralism and diversity in funding agencies is a good in itself, and that we need to innovate in the way we support innovation. ARPA at its best represented an approach to funding where the focus was on the programme manager – or better, programme leader – as the creative force. These leaders should be tasked with assembling and orchestrating teams of talented people to achieve ambitious programmes with concrete goals.

The archetype of the visionary leader is perhaps J.C.R. Licklider, who accepted a position with ARPA in 1962 because it offered an opportunity to realise his vision of computer networking. The research he funded at ARPA laid many of the foundations of modern computing, including the principles of networking that led to the internet, and the principles of human/computer interaction that were further developed at the Xerox PARC laboratory to give us the graphical interfaces we all now take for granted.

ARPA benefited from a complete clarity of mission – its role was to ensure that the US armed forces enjoyed technological supremacy over any potential rival. That makes clear who its beneficiaries should be – the US Armed Forces.

What should ARIA’s mission be, and who are its beneficiaries? This remains to be decided, but from my perspective it is important to make clear that its primary beneficiaries should be neither the academic community nor industry. Both communities will be crucial in delivering the mission, but it should not be primarily for their benefit. Instead, I believe that ARIA should focus on one, or a subset of one, of the important strategic goals that the UK state currently faces, as I outlined in the first part of this series.

For me, the most obvious candidate is the challenge of driving down the cost of achieving net zero greenhouse gas emissions to a point where the global transition can be driven by economics, rather than politics.

Up next…

In the next and final part of this series, I will attempt to sum up, with some key priorities for the UK R&D system.

Edited 20 Sept to make clear that the proposed opt-out from Pillar 3 of Horizon Europe only covers the European Innovation Council Fund. My thanks to Martin Smith for pointing this out.