Another Modern Industrial Strategy

This is a slightly expanded version of an article published last week in Research Professional – “The latest industrial strategy has made choices”

Last week’s Industrial Strategy Policy Paper is the latest chapter in the chequered history of UK Industrial Strategies. For nearly three decades after Thatcher’s ascent to power, the UK’s strategy was not to have an industrial strategy – a concept then associated with money-losing supersonic airliners and cars with square steering wheels. But that conventional wisdom has been challenged by the global financial crisis and nearly two decades of economic stagnation, so after a number of stops and starts over the last decade, a fully developed Industrial Strategy has now arrived.

This strategy is not entirely new – there are strong echoes of the 2017 strategy from Theresa May’s government, driven by the then Business Secretary Greg Clark. The five “critical technologies” of the 2023 Science and Technology Framework are preserved, and other interventions of the previous government, like Investment Zones and Freeports, survive with only minor rebranding. This continuity is to be welcomed.

The first test of any strategy is whether it makes choices. Here I think the signals are positive. The Green Paper identified eight key sectors; the strategy now goes further, singling out 30 detailed sub-sectors, the “frontier industries”. This welcome degree of granularity is underpinned by a detailed methodology, evaluating both the economic potential of the frontier industries and the degree to which policy can have an impact.

So, under the banner of Advanced Manufacturing, aerospace and automobiles appear as expected, but perhaps the inclusion of agri-tech is less anticipated. Clean energy industries include wind (onshore, offshore and floating), fission and fusion, but not solar. There’s more focus on non-STEM-based sectors than in previous strategies, including creative, financial services and professional and business services. Unsurprisingly, given the current geopolitical situation, defence is prominent, but here the final list of frontier industries awaits the publication of a Defence Industrial Strategy.

The geographical aspect of industrial strategy is much more prominent in this strategy than in predecessors. An analysis of industrial clusters leans heavily on the previous government’s cluster map, as well as accepting the importance of increasing the productivity of the UK’s underperforming second tier cities. But in identifying those places with existing strengths in the strategy’s priority frontier industries, or with the potential for better exploiting agglomeration economies, some places won’t make the cut. There will be gaps in the Industrial Strategy map.

Discussions of industrial strategy distinguish between “horizontal” interventions, that aim to improve the business environment without discriminating between sectors, and “vertical” policies focusing on particular types of industry. Here, the signals are that even the more “horizontal” measures will be tilted in favour of the (roughly) 30 frontier industries.

Addressing the cost of energy is prominent here; the effect of the UK’s very high cost of energy, relative to overseas competitors, on the viability of many industries has been much discussed recently. Perhaps just as important as the cost of energy are the delays in making new grid connections for industrial electricity. Here there will be difficult choices too; if industrial energy prices are lowered, someone else will have to pay more, whether that is the domestic consumer or the tax-payer. We’re paying for policy choices made a decade (and longer) ago, which resulted in an over-reliance on gas and an underinvested network.

Other “horizontal” interventions include trade policy, access to finance, addressing regulation and removing planning barriers. Skills and innovation are two areas where a choice needs to be made about how far to direct policy towards the selected sectors.

The skills section seems slightly underpowered; it highlights skills shortages in the priority sectors, recognises that we have too many underqualified young people, and acknowledges that adult and continuing education has declined. But solving these problems is laid at the door of Skills England (and counterparts in devolved nations). An infographic makes a welcome assertion of the central role of universities in delivering an industrial strategy, but without recognising the pressure the sector is under.

UK Government research and development spending since 2012. Outturn: ONS, Research and development (R&D) expenditure by the UK government, 2023 datasets. Allocations from CSR 2021 and CSR 2025, corrected for inflation using GDP deflators (OBR March 2025 predictions for future years).

For innovation, it’s appropriate to recognise that government spending on R&D is now at a high level – the last government did make significant increases in R&D spending, and the recent spending review locked those increases in place in real terms. The question will be to what extent that R&D spending is directed at the industrial strategy priority areas.

Here the document is unequivocal on the principle – research funding, including that delivered through UKRI, will be pivoted towards the industrial strategy priorities. Again, the need to make choices is accepted. As widely trailed, R&D funding will be more rigorously sorted into three categories: basic curiosity-driven research, delivery of government priorities, and supporting private sector innovation-led growth. What remains to be decided is the relative size of these three buckets.

Will this industrial strategy deliver the economic growth that has been so lacking for nearly two decades? I think the two key requirements for success are genuine commitment across the whole of government, and consistency of purpose over time, with a mechanism for tracking progress and holding the government to account.

On the former, there are encouraging signs that the strategy has the support of the Treasury; in addition to those of its sponsor department, the Department for Business and Trade, the priorities of DSIT and DESNZ are clearly reflected. But can the Home Office be persuaded that some skilled immigration might be needed? Does DfE recognise that universities are actually quite an important part of the skills system?

Most importantly, this strategy probably needs a decade to make a significant impact. It’s good that it will be supported and monitored by an Industrial Strategy Council with statutory powers and supporting infrastructure. It would be even better if it could generate the kind of consensus that would allow it to survive the inevitable political changes to come.

The civic university in hard times

Universities in the UK at the moment are broke and unloved. In these circumstances, the temptation is going to be to withdraw to “core business” – teaching students, and for research intensives, doing the kind of research that pushes the institution up the international league tables, to attract the overseas students whose fees prop the whole system up. In a period of retrenchment, it might be tempting for managements to see supporting the role of universities in their communities as a dispensable luxury. I think this would be a profound mistake.

This isn’t to understate the difficulty UK universities find themselves in. Around three quarters of them are expected to be in deficit next year, and about a hundred are now actively restructuring or making staff redundant. This follows a 40% real terms erosion in fees for home students, and a business model, reliant on growing overseas student numbers, which has become both politically unpopular and exposed to geopolitical risk. The latest proposal – of a levy on international student fees – is both another financial blow, and a symbol of the way universities find themselves on the wrong side of culture war discourse. What’s quite clear is that, whatever recognition there might be in government of the university sector’s troubles, the sector is simply not a high priority for a government facing difficult issues on all sides.

Meanwhile the UK’s slow-burning economic crisis continues. At the root of this is the UK’s weak productivity growth since the global financial crisis (GFC), or before; productivity is now about 25% down on what it would have been if the pre-GFC trend had continued. This is the direct cause, not just of the very weak wage growth seen in the last 15 years, but also of the fiscal difficulties recent governments have faced, and the resulting pressure on public services. Much of what’s gone wrong with politics and economics in the UK can be put down to this productivity problem.

There’s a big regional dimension to the productivity problem. In contrast to most developed countries, the UK’s big second tier cities aren’t drivers of national productivity; instead their productivity is lower than the UK average. And beyond the big cities in the North and Midlands, there are very weak peripheral economies in deindustrialised areas and coastal towns. In these places, local government services are under huge pressure, having received the brunt of the 2010s austerity, while remaining legally obliged to maintain increasingly costly services like social care and support for children with special educational needs and disabilities.

Thinking about these two factors together, it should be clear that no good will come of universities just asking the government to hand over more money, to carry on doing the things they’ve always been doing. However bad the crisis in universities is, the larger economic problems, and the constellation of social problems that follow from those, mean that universities are just too far down the list of priorities for the government.

Instead, universities need to lean into a national effort to deal with the country’s economic and social problems, with a particular emphasis on the problems of the places where they are located and the communities that live in those places. Many universities are already doing good work in this area, but upping the intensity of these activities probably does mean that universities will have to do some things differently. Already, a letter from the Secretary of State to university leaders, sent last November, has identified five priorities, two of which are highly relevant here: “Make a stronger contribution to economic growth” and “Play a greater civic role in their communities”.

The two are closely linked. UK universities are institutions with international reach, but they’re still located in specific places, and that’s where their influence is greatest. What can they do for the towns and cities where they are located? For most citizens, perhaps the most tangible impact of universities on their towns and cities is in the professionals they train – the teachers who teach their children, the nurses, doctors and health workers who treat them and their families. University research can provide an evidence base to inform policy for local and regional governments. Student and staff volunteering can support charities and community groups. The cultural institutions associated with universities, and outreach activities in schools and communities, can support the cultural vitality of places.

We do, though, need to face the fact that, for many of the communities which neighbour our universities, the fundamental problem is that they are in materially poor places. If we are to be serious about asserting the economic value of our universities and the research they do, we need to demonstrate that, by supporting efforts to make communities more prosperous. This means supporting places in attracting and growing the kinds of businesses that produce tradable goods and services; this in turn will bring money into those places, and support higher wages for people with all levels of skills. In short, universities need to support inclusive economic growth in their towns and cities.

This will require universities to think differently. We’ve focused a lot on a relatively narrow, linear model of knowledge exchange, in which university academics do research leading to protectable intellectual property, which is then used to produce spin-out companies. This remains important – though we do need to think about how such spin-out companies can be encouraged to scale up in the UK, to the point at which they produce material economic impacts on communities, rather than just making a few founders and venture capitalists wealthy (important though that is for creating the right incentives). But we need to take a much broader view of the innovation economies of our places.

Taking this wider view of our innovation ecosystems, increasing the number of spin-outs – both from students and staff – is important, as is the need for them to scale. But universities need to think hard about how they support the existing business base – not just big, technologically sophisticated firms that are accustomed to working with universities, but also the small and medium-sized enterprises (SMEs) which often find the labyrinthine structures of universities impossible to penetrate, and find it difficult to access financial support from government agencies. SMEs are important, but we do need to recognise the power of multinationals operating at the technology frontier in building manufacturing and innovation capacity in those parts of the country where productivity is lagging. Cities and regions are always anxious to attract inward investment, but this needs to be targeted on those firms whose presence can most effectively drive up a region’s wider innovation capabilities.

A key problem for many places with low productivity is a lack of skilled people. This is often framed as a supply problem – so, it is argued, simply increasing the supply of skilled people will help solve the productivity issues in weak regional economies. But the problem could equally be one of demand – in a town with few opportunities for high-skilled work, for people who don’t want to leave, there isn’t the incentive to put time and money into increasing their level of skills. Towns with weak economies are often locked into a low skills, low productivity, low innovation bad equilibrium – and to break out of that equilibrium, both halves of the problem need to be addressed. Existing businesses need to be supported in their innovation ambitions and new and innovative businesses need to be attracted, while those businesses need to be assured that the higher skills they need will be available to them. We need to link up the skills and innovation agendas.

This leads us to think about geographies. The excellent research facilities that our big civic universities have may seem a long way away from businesses in a town or suburb on the other side of the city centre – particularly given the poor state of transport in our cities outside London. So universities need to think hard about how to maximise their impact, not just on their campus and its immediate environs, but across whole city regions, and beyond.

This means working in partnership. The different HE institutions in a city region should work together, bound by civic university agreements that commit them to concrete actions in support of their place. For example, the expertise of technology transfer offices can be shared across a city, business schools can collaborate to provide management training to local SMEs, and there can be a single approach to widening participation and lifelong learning. Institutions could collaborate to develop initiatives for translational R&D and innovation diffusion focused on regional business.

These collaborations should go beyond HE institutions. Further education has a huge role to play in a joined-up approach to innovation and skills; in many left-behind towns, FE colleges are the anchor institutions, with strong employer networks. Partnership with businesses is crucial, and here business sector groups can be invaluable in articulating the collective needs of businesses in a place. Strong relationships between universities and their local and city-regional governments are equally important. Universities should be working together with local and city-region authorities in developing and implementing local industrial strategies and growth plans.

My own work at the University of Manchester, as VP for Regional Innovation and Civic Engagement, has been guided by these principles. We’ve established a Civic University Agreement, supported by GM’s five universities and endorsed by the Mayor, and developed a collaborative body, Innovation GM, which brings together the HE sector with business sector groups and the GM Mayoral Combined Authority. We were awarded one of the pilots in the Innovation Accelerator programme, which we’ve used to support innovation diffusion in key sectors like digital and AI, and specifically to support the materials and manufacturing sector in Northeast Greater Manchester (Bury, Rochdale and Oldham), which remains one of the most economically and socially disadvantaged parts of the conurbation, and forms a priority growth location through the “Atom Valley” Mayoral Development Zone. Collaborations with the FE sector have been growing, helped by the increasing coordination of GM’s nine FE colleges through the GM Colleges Group; Innovate UK’s very welcome Further Education Innovation Fund has allowed us to connect the FE colleges directly with GM’s innovation ambitions. Within the University, we’ve established a new unit, Unit M, with a specific mandate to own and drive our regional innovation agenda. I’m delighted that the government announced, in the June Comprehensive Spending Review, a Local Innovation Partnership Fund, which will build on the Innovation Accelerator pilots, extending them to 10 city-regions in the UK.

Finally, it’s worth coming back to money – and the current financial difficulties of the HE sector. It would, I think, be wrong to think of these civic activities as big sources of new funding, establishing new profit centres. Universities do need to be resourced to do new things, but this isn’t going to be an immediate boost to their bottom lines. What these civic activities will do, though, is give us a social licence to operate – and, in many cases, reconnect with the original purposes of our institutions.

Moore’s Law, past and future

Moore’s Law – and the technology it describes, the integrated circuit – has been one of the defining features of the past half century. The idea of Moore’s law has been invoked in three related senses. In its original form, it was a rather precise prediction about the rate of increase of the number of transistors that could be fitted on a single integrated circuit. It’s never been a law – it’s been more of an organising principle for an industry and its supply chain, and thus a self-fulfilling prophecy. In this sense, it’s been roughly true for fifty years – but is now bumping up against physical limits.

In the second sense, Moore’s law is used more loosely as a statement about the increase in computing power, and the reduction of its cost, over time. The assertion is that computing power grows exponentially. This also was true, for a while. From the mid-1980s to the mid-2000s, computer power grew at a compound rate of around 50% a year, doubling roughly every two years. In this extraordinary period, there was more than a thousandfold cumulative increase over a couple of decades.
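
To make the arithmetic concrete, here is a minimal Python sketch of compound growth; the growth rate and twenty-year window come from the text, but the exact correspondence between annual rate and doubling time is worth spelling out.

```python
import math

# Doubling time for compound growth at rate r: t = ln(2) / ln(1 + r).
# Strictly, 50%/yr implies doubling every ~1.7 years, and doubling every
# two years corresponds to ~41%/yr -- both are loose descriptions of the
# same regime.
rate = 0.50
doubling_time = math.log(2) / math.log(1 + rate)
print(f"Doubling time at 50%/yr: {doubling_time:.2f} years")  # ~1.71

# Cumulative growth over two decades of doubling every two years:
years = 20
cumulative = 2 ** (years / 2)
print(f"Growth over {years} years: {cumulative:.0f}x")  # 1024x, ~a thousandfold
```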

The rate of increase in raw computer power has slowed substantially over the last two decades, following the end of Dennard scaling and the limitations of heat dissipation, but this has been counteracted to some extent by software improvements and the development of architectures specialised for particular applications. For example, the Graphics Processing Units – GPUs – that have emerged as so important for AI are highly optimised for multiplying large matrices.

In the third sense, Moore’s Law is used as a synecdoche for the more general idea of accelerating change – that the pace of change in technology in general is exponential, or even super-exponential, in character. This of course is a commonplace in airport business books. It underpins the idea of a forthcoming singularity, which has become received wisdom in Silicon Valley. The idea of the singularity has been given more salience by the recent rapid progress in artificial intelligence, and the widespread view that superhuman artificial general intelligence will soon be upon us.

In this post, I want to go back to the fundamentals – how much the basic components of computing can be shrunk in size, and what the prospects for future miniaturisation are. This bears directly on the prospects for further increases in computer power, a question which has taken on new importance as the material basis of the AI boom. AI has brought us to a new situation: in the classical period of fastest growth of computer power (the 80s and 90s), the supply of computing power was growing exponentially, and the opportunity was to find ways of using that power. Now, with AI, it’s the demand for computing power that is growing exponentially, and the issue is whether supply can match that demand.

Moore’s Law. From Max Roser, Hannah Ritchie, and Edouard Mathieu (2023), “What is Moore’s Law?”, published online at OurWorldinData.org: https://ourworldindata.org/moores-law. Licensed under CC-BY.

A classical depiction of Moore’s law is shown in this plot from Our World in Data – with a logarithmic y-axis, a straight line indicates an exponential growth in the number of transistors in successive generations of microprocessor. The seemingly inexorable upward progress of the line conceals a huge amount of innovation; each upward step was facilitated by research and development of new materials and new processes. It also conceals some significant discontinuities.

For example, the earlier relationship between computer power and number of transistors was broken in the mid-2000s. Before then, miniaturisation brought a double benefit – it gave you more transistors on each chip, and in addition each transistor worked faster, because it was smaller. The latter relation – Dennard scaling – broke down because heat dissipation became a limiting factor.
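
The textbook scaling relations make clear why heat became the constraint; the sketch below is a minimal illustration, and the scale factor is an assumption for the example, not a figure from the post.

```python
# Dennard scaling: if linear dimensions and supply voltage both shrink by
# a factor k, delay falls by k (so frequency rises by k), power per
# transistor falls by k^2, and power density stays constant.
k = 1.4  # illustrative scale factor between successive nodes (assumed)

frequency_gain = k                 # each transistor switches faster
power_per_transistor = 1 / k**2    # and dissipates less
transistors_per_area = k**2        # while more fit in the same area
power_density = power_per_transistor * transistors_per_area
print(f"Power density under Dennard scaling: {power_density:.2f}x (constant)")

# Once supply voltage could no longer be reduced (because of leakage
# currents), power density instead rises by ~k^2 per node -- the
# heat-dissipation wall described above.
print(f"Power density if voltage cannot scale: {k**2:.2f}x per node")
```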

Another fundamental change happened in 2012. The fundamental unit of the modern integrated circuit is the metal oxide semiconductor field effect transistor – the mosFET. This consists of a channel of doped silicon, with contacts at either end. The channel is coated with a thin, insulating layer of oxide, on top of which is a metal electrode – the gate. It is the gate which controls the flow of electrical current through the channel. When physical limits meant that the planar mosFET couldn’t be shrunk any more, a new design flipped the channel into the vertical plane, so the transistors took the form of fins standing up from the plane of the silicon chip. Each side of the doped silicon fin is coated with insulating oxide and a metal gate, to form the finFET.

The patterns that define the circuits in integrated circuits are made by lithography – light is shone through a patterned mask onto a photoresist, which is subsequently developed to make the pattern physical. The lower limit on the size of the features that can be patterned in this way is ultimately set by the wavelength of light used. Through the 2010s, lithography was based on deep ultraviolet light created by excimer lasers, with a 193 nm wavelength. By 2020, this technique had been squeezed as far as it would go, and the 5 nm process node uses extreme UV (EUV), with a wavelength of 13.5 nm. The Dutch company ASML has a monopoly on the tools to produce EUV for lithography, each of which costs more than $100 million; the radiation is created in a metal plasma, and has to be focused entirely by mirrors.
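
The wavelength limit can be made concrete with the standard Rayleigh resolution criterion, CD = k₁λ/NA; the k₁ factors and numerical apertures below are typical textbook values, assumed here purely for illustration.

```python
# Rayleigh criterion for the smallest printable feature (critical dimension):
#   CD = k1 * wavelength / NA
# where k1 is a process-dependent factor and NA is the numerical aperture
# of the projection optics.
def critical_dimension(wavelength_nm: float, k1: float, na: float) -> float:
    return k1 * wavelength_nm / na

# Illustrative, assumed parameters for the two generations of tool:
duv = critical_dimension(wavelength_nm=193.0, k1=0.3, na=1.35)  # immersion DUV
euv = critical_dimension(wavelength_nm=13.5, k1=0.4, na=0.33)   # early EUV

print(f"DUV (193 nm) minimum feature: ~{duv:.0f} nm")   # ~43 nm
print(f"EUV (13.5 nm) minimum feature: ~{euv:.0f} nm")  # ~16 nm
```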

I’ve referred to the 2020 iteration of fabrication technology as the “5 nm” process, following a long-standing industry convention of characterising successive technology generations by a single length. In the days of the planar mosFET, a single parameter characterised the size of each transistor – the gate length. There was a stable relationship between the gate length and the node number, and a roughly biennial decrease in that number, from the 1.5 µm process of 1982 that drove the explosion of personal computers, to the 90 nm process of 2002 and the Pentium 4. But with the replacement of the mosFET by the finFET, circuit geometry changed, and the relationship between the node number and the actual dimensions of the circuit broke down. In fact, the node number is now best thought of as entirely a marketing device, on the principle that the smaller the number, the better.

A better way to describe progress in miniaturisation uses an estimate of the minimum possible area of a transistor: the product of the metal pitch – the minimum distance between horizontal interconnects – and the contacted gate pitch – the distance from one transistor’s gate to the next.
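
As a minimal sketch, the footprint estimate is just the product of the two pitches; the pitch values below are illustrative placeholders, not figures taken from the plot that follows.

```python
# Minimum transistor footprint ~ metal pitch (MP) x contacted gate pitch (CGP).
def transistor_footprint_nm2(metal_pitch_nm: float,
                             contacted_gate_pitch_nm: float) -> float:
    return metal_pitch_nm * contacted_gate_pitch_nm

# Two hypothetical nodes, with assumed pitches:
older = transistor_footprint_nm2(metal_pitch_nm=64, contacted_gate_pitch_nm=90)
newer = transistor_footprint_nm2(metal_pitch_nm=32, contacted_gate_pitch_nm=48)

print(f"Older node footprint: {older:.0f} nm^2")      # 5760 nm^2
print(f"Newer node footprint: {newer:.0f} nm^2")      # 1536 nm^2
print(f"Implied density gain: {older / newer:.1f}x")  # ~3.8x
```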

Minimum transistor footprint (product of metal pitch and contacted gate pitch) for successive semiconductor process nodes. Data: 1994–2014 inclusive, Stanford Nanoelectronics Lab; 2017 onwards and projections, successive editions of the IEEE International Roadmap for Devices and Systems.

My plot shows the minimum transistor footprint, calculated in this way, for each process node since 1994 (the 350 nm node). The first five nodes – until 2002 – track the exponential increase in density expected from Moore’s law – the fit represents a transistor density that doubles every 2.2 years. The last three generations of planar mosFET technology – until 2009 – show a slight easing of the pace. The switch to the finFET prolonged the trend for another decade or so. But it’s clear now that the “2 nm” node, being introduced by TSMC this year, confirms a marked levelling off of the pace of miniaturisation. For this node, there has been another change of geometry – finFETs have been replaced by vertical rows of nanowires, each completely surrounded by the metal of the gate electrode – GAA, for “gate all around”.
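
One way to extract a doubling time like the 2.2 years quoted above is a least-squares fit of log footprint against year; the data points in this sketch are made-up placeholders standing in for the plotted values, chosen only to reproduce a similar slope.

```python
import numpy as np

# Hypothetical (year, minimum footprint in nm^2) pairs -- placeholders,
# not the actual values behind the plot.
years = np.array([1994, 1997, 2000, 2003, 2006])
footprints = np.array([600_000, 230_000, 90_000, 35_000, 14_000])

# Fit log2(footprint) = a*year + b. Density doubles when footprint halves,
# so the doubling time is -1/a years.
a, b = np.polyfit(years, np.log2(footprints), 1)
print(f"Fitted density doubling time: {-1.0 / a:.1f} years")  # ~2.2
```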

It has to be stressed that miniaturisation of transistors is far from the only way in which computer power can be increased. A good illustration of this comes from progress in making the ultra-powerful chips that have driven the current AI boom, such as Nvidia’s H100. The H100 itself was fabricated by TSMC on the “5 nm” node, the first to use ASML’s EUV light source for lithography. But, as this article explains, only a fraction of the performance improvement of the H100 over previous generations can be attributed to Moore’s law. Much of the improvement comes from more efficient ways of representing numbers and carrying out the arithmetic operations that underlie artificial intelligence.
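
The point about number representation can be illustrated with NumPy’s reduced-precision floating-point types; this is a generic illustration of the precision/width trade-off, not a description of the H100’s actual arithmetic units.

```python
import numpy as np

# Narrower floating-point formats trade precision and range for the ability
# to pack more arithmetic units into the same silicon area, and to move more
# numbers per unit of memory bandwidth.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{info.dtype}: {info.bits} bits, ~{info.precision} decimal digits, "
          f"max ~{info.max:.3g}")

# The cost is rounding error, which AI workloads tolerate remarkably well:
err = float(np.float16(0.1)) - 0.1
print(f"Rounding error of 0.1 stored as float16: {err:.1e}")  # ~ -2.4e-05
```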

Another factor of growing importance is the way individual silicon chips are packaged. Many modern integrated circuits, including the H100, are not a single chip. Instead, several individual chips, including both logic and memory, are mounted together on a silicon substrate, with fast interconnects to join them all up. The H100 relies on a TSMC advanced packaging technology known as “Chip on Wafer on Substrate” (CoWoS), and is an example of a “System in Package”.

What does the future hold? The latest (2023) iteration of the IEEE’s International Roadmap for Devices and Systems foresees one more iteration of the Gate All Around architecture. The 2031 node is a refinement which stacks two mosFETs on top of each other, one with a p-doped channel, one with an n-doped channel (this combination of p- and n-doped FETs is the fundamental unit of logic gates in CMOS – “complementary metal oxide semiconductor” – technology, hence this is referred to as CFET). This essentially doubles the transistor density. After this, no further shrinking in dimensions is envisaged, so further increases in transistor density are to be obtained by stacking multiple tiers of circuits vertically on the wafer.

So what’s the status of Moore’s law now? I return to the three senses in which people talk about Moore’s law – as a technical prediction about the growth in the number of transistors on an integrated circuit, as a more general statement about increasing computer power, and as a shorthand for accelerating technical change in general.

In the first, and strictest, sense, we can be definitive – Moore’s law has run its course. The rate of increase in transistor density has significantly slowed since 2020, and exponential growth with an increasing time constant isn’t exponential any more. The technology in its current form has now begun to hit limits, both physical and economic.

For the second, looser, sense, things are more arguable. Available computing power is still increasing, and we see the outcomes of that in advances such as the development of large language models. But this increased power is coming, less from miniaturisation, more from software, specialised architectures optimised for particular tasks, and advanced packaging of chips in “Systems in Package”. It’s this transition that underlies the fact that Nvidia is worth more as a company than TSMC, even though it’s TSMC that actually manufactures (and packages) the chips.

But I wonder whether these approaches will be subject to diminishing returns, in contrast with the classical period of Moore’s law, when constant, large, fractional gains were repeated year after year for decades, producing orders of magnitude of cumulative improvement. A further major source of increasing computer power is now the brute-force approach of simply buying more and more chips, housed in huge, energy-consuming data centres. These kinds of increases in computer power are fundamentally linear, rather than exponential, in character, and yet they are trying to meet a demand – largely from AI – which is growing exponentially.

It’s very tempting to take Moore’s law as an emblem of the idea that technological change in general is accelerating exponentially, but I think this is unhelpful. Technology isn’t a single thing that improves at a given rate; there are many technologies, and at a given time some will be accelerating, some will be stagnating, some may even be regressing. As we have seen before, the exponential improvement of a single technology never continues forever; physical or economic limits show up, and growth saturates. Continuous progress needs the continuous introduction of new technologies which can take up the baton of growth from those older technologies, whose growth is stalling.

It should be stressed here that when we talk about the end of Moore’s law, the technology that we are talking about isn’t computing in general – it is this particular way of implementing machine logic, CMOS (complementary metal oxide semiconductor). There are many ways in which we can imagine doing computing – the paradox here is that CMOS has been so successful that it has crowded out alternative approaches, some of which might have significant advantages. For example, we know that CMOS logic uses several orders of magnitude more energy per operation than the theoretical minimum (the Landauer limit).
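
A back-of-envelope comparison with the Landauer limit, E = k_B T ln 2, makes the gap concrete; the figure assumed below for CMOS switching energy is an illustrative order of magnitude, not a value from the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer limit: minimum energy to erase one bit of information.
landauer_j = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j:.1e} J per bit")  # ~2.9e-21 J

# Illustrative (assumed) energy for a CMOS logic operation: ~1 femtojoule.
cmos_j = 1e-15
print(f"Ratio: ~{cmos_j / landauer_j:.0e} above the Landauer limit")  # ~3e5
```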

Finally, it does bear repeating what an extraordinary period the heyday of Moore’s law and Dennard scaling was, with computer power doubling every two years, sustained over a couple of decades to produce a cumulative thousand-fold increase. For those who have lived through that period, it will be difficult to resist the belief that this rate of technological progress is part of the natural order of things.