The Missing £4 billion: making R&D work for the whole UK

Tom Forth and I have a new policy paper out, published by the Innovation Foundation NESTA, called The Missing £4 billion: making R&D work for the whole UK

This was covered by the Financial Times, complete with celebrity endorsement: Academic cited by Cummings wants to redraw map of research spending

Here is the Executive Summary:

The Missing £4 billion: making R&D work for the whole UK

The UK’s regional imbalances in economic performance are exacerbated by regional imbalances in R&D spending

There are two economies in the UK. Much of London, South East England and the East of England has a highly productive, prosperous knowledge-based economy. But in the Midlands and the North of England, in much of South West England and in Wales and Northern Ireland, the economy lags behind our competitors in Northern Europe. Scotland sits in between. In underperforming large cities, in towns that have never recovered from deindustrialisation, in rural and coastal fringes, weak innovation systems are part of the cause of low productivity economies.

The government supports regional innovation systems through its spending on public sector research and development (R&D). This investment is needed now more than ever; we have an immediate economic crisis because of the pandemic, but the long-term problems of the UK economy – a decade of stagnation of productivity growth, which led to stagnant wages and weak government finances, and persistent regional imbalances – remain. Government investment in R&D is highly geographically imbalanced. If the government were to spend at the same intensity in the rest of the country as it does in the wider South East of England, it would spend £4 billion more. This imbalance wastes an opportunity to use public spending to ‘level up’ areas with weaker economies and achieve economic convergence.

The UK’s research base has many strengths, some truly world leading. But three main shortcomings currently inhibit it from playing its full role in economic growth. It is too small for the size of the country, it is relatively weak in translational research and industrial R&D, and it is too geographically concentrated in already prosperous parts of the country, often at a distance from where business conducts R&D.

The UK’s R&D intensity is too low

The UK’s overall R&D intensity is low. Measured as a ratio to (pre-COVID-19 crisis) gross domestic product (GDP), the Organisation for Economic Co-operation and Development (OECD) average is 2.37 per cent. The UK, at 1.66 per cent, is closer to countries like Italy and Spain than Germany or France.

The UK government has committed to matching the current OECD average by 2027, pledging an increase in public spending to £22 billion by 2025. Looking internationally shows us that substantial increases in R&D intensity are possible. Austria, Belgium, Denmark and Korea have all dramatically increased R&D intensity in recent decades. The major part of these increases is funded by the private sector, but public sector increases are almost always required alongside or in advance of this. The ratio of R&D funding from the two sources is typically 2:1, and this is a good rule of thumb for considering how increased R&D might be funded in the UK.
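As a rough worked example of that rule of thumb, taking pre-crisis UK GDP to be about £2.2 trillion (a round figure assumed here purely for illustration):

\[
\begin{aligned}
\text{R\&D at 2.4\% of GDP} &\approx 0.024 \times \pounds 2.2\,\text{tn} \approx \pounds 53\,\text{bn a year},\\
\text{R\&D at 1.7\% of GDP} &\approx 0.017 \times \pounds 2.2\,\text{tn} \approx \pounds 37\,\text{bn a year},\\
\text{increase needed} &\approx \pounds 16\,\text{bn a year} \approx \underbrace{\pounds 11\,\text{bn}}_{\text{private}} + \underbrace{\pounds 5\,\text{bn}}_{\text{public}} \quad \text{(2:1 split)}.
\end{aligned}
\]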

The UK’s R&D is highly regionally imbalanced

Looking at both the total level of spending on R&D and the ratio of public to private R&D spending is a good way to classify innovation systems within regions.
• The South East and East of England are highly research intensive with high investment by the state combined with business investment exceeding what we would expect from a 2:1 ratio.
• London and Scotland receive above-average levels of state investment but have lower-than-average levels of business investment.
• The East Midlands, the West Midlands and North West England are business-led innovation regions with business investment in R&D at or above the UK average but low levels of public investment.
• Wales, Yorkshire and the Humber, and North East England are regional economies with notably low R&D intensities in both the market and non-market-led sectors.
• South West England and Northern Ireland sit between these two groups with similarly low levels of public investment but slightly higher private sector spending on R&D.

A single sentence can summarise the extent to which the UK’s public R&D spending is centralised in just three cities. The UK regions and subregions containing London, Oxford and Cambridge account for 46 per cent of public and charitable R&D in the UK, but just 31 per cent of business R&D and 21 per cent of the population.

How the current funding system has led to inequality

The current situation is the result of a combination of deliberate policy decisions and a natural dynamic in which small preferences, combined with initial advantages, are reinforced over time.

For example, of a series of major capital investments in research infrastructure between 2007 and 2014, 71 per cent was made in London, the East and South East of England, through a process criticised by the National Audit Office. The need for continuing revenue funding to support these investments locks in geographical imbalances in R&D for many years.

Imbalanced investment in R&D is, at most, only part of why the UK’s regional economic divides widened in the past and have failed to close in recent decades. But it is a factor that the government can influence. It has failed to do so. Where attempts have been made to use R&D to balance the UK’s economic strengths, they have been insufficient in scale. For example, in the 2000s the English regional development agencies allocated funding with preference to regions with weaker economies, but their total R&D spend was equivalent to just 1.6 per cent of the national R&D budget. These efforts could never have hoped to succeed. Unsurprisingly, and in contrast to vastly larger schemes in Germany, they failed.

We need to do things differently

The sums needed to rebalance R&D spending across the nation are substantial. A crude calculation shows that to level up per capita public spending on R&D across the nations and regions of the UK to the levels currently achieved in London, the South East and the East of England, additional spending of more than £4 billion would be needed: £1.6 billion would need to go to the North of England, £1.4 billion to the Midlands, £420 million to Wales, £580 million to South West England and £250 million to Northern Ireland. Spending in Scotland would be largely unchanged.
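A minimal sketch of this crude calculation is below; the benchmark spend, populations and per-head figures are illustrative placeholders, not the data used in the paper.

```python
# Sketch of the "level up to the Greater South East benchmark" arithmetic.
# All figures below are illustrative placeholders, not the paper's data.

benchmark_per_head = 160.0  # assumed public R&D spend (£ per person per year) in London/SE/East

regions = {
    # region: (population in millions, current public R&D spend in £ per head) -- placeholders
    "North of England": (15.0, 55.0),
    "Midlands": (10.5, 30.0),
    "Wales": (3.1, 25.0),
}

for region, (pop_millions, per_head) in regions.items():
    shortfall_per_head = max(benchmark_per_head - per_head, 0.0)
    extra_spend_millions = shortfall_per_head * pop_millions  # £m, since population is in millions
    print(f"{region}: extra spending of roughly £{extra_spend_millions:,.0f}m a year")
```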

These numbers give a sense of the scale of the problem, but equalising per capita spending is not the only possible criterion for redistributing funding.

We want people to explore other criteria that might guide thinking on where UK public sector and charity spending on R&D generates the most value. The online tool accompanying this paper models different geographical distributions of public R&D spending, obtained according to the weight attached to factors such as research excellence, following business R&D spending, targeting economic convergence and investing more where the manufacturing sector is stronger.
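The tool itself is not reproduced here, but the kind of weighted reallocation it describes can be sketched roughly as follows; the regions, scores and weights are placeholders for illustration only.

```python
import numpy as np

# Placeholder scores for each region on each criterion, scaled 0-1 (not real data).
regions = ["North East", "Yorkshire and the Humber", "East Midlands"]
criteria = {
    "research_excellence":    np.array([0.3, 0.5, 0.4]),
    "business_rd":            np.array([0.2, 0.4, 0.6]),
    "economic_convergence":   np.array([0.9, 0.7, 0.6]),
    "manufacturing_strength": np.array([0.5, 0.6, 0.8]),
}

# User-chosen weights on each criterion; varying these is the point of the tool.
weights = {name: 0.25 for name in criteria}

budget = 1000.0  # £m to distribute, illustrative

# Composite score per region, then share the budget in proportion to it.
composite = sum(weight * criteria[name] for name, weight in weights.items())
allocation = budget * composite / composite.sum()

for region, amount in zip(regions, allocation):
    print(f"{region}: £{amount:.0f}m")
```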

Importantly, we do not propose that UK R&D funding should be assigned purely by algorithm. We have found that the scale of current imbalances in funding, and the extent to which current spending fails to meet even its own stated goal of funding excellence, are widely underappreciated. Our tool aims to inform and challenge, not to replace existing systems.

To spread the economic benefits of innovation across the whole of the UK, changes are needed. These will include a commitment to greater transparency on how funding decisions are made in the government’s existing research funding agencies, an openness to a broader range of views on how this might change and devolution of innovation funding at a sufficient scale to achieve a better fit with local opportunities.

For the full paper, see The Missing £4 billion: making R&D work for the whole UK.

UK ARPA: An experiment in science policy?

This essay was published yesterday as part of a collection called “Visions of ARPA”, by the think-tank Policy Exchange, in response to the commitment of the UK government to introduce a new science funding agency devoted to high risk, high return projects, modelled on the US agency DARPA (originally ARPA). All the essays are well worth reading; the other authors are William Bonvillian, Julia King (Baroness Brown), two former science ministers, David Willetts and Jo Johnson, Nancy Rothwell and Luke Georghiou, and Tim Bradshaw. My thanks to Iain Sinclair for editing.

The UK’s research and innovation funding agency – UKRI – currently spends £7 billion a year supporting R&D in universities, public sector research establishments and private industry [1]. The Queen’s Speech in December set out an intention to increase substantially public funding for R&D, with the goal of raising the R&D intensity of the UK economy – including public and private spending – from its current level of 1.7% of GDP to a target of 2.4%. It’s in this context that we should judge the Government’s intention to introduce a new approach, providing “long term funding to support visionary high-risk, high-payoff scientific, engineering, and technology ideas”. What might this new approach – inevitably described as a British version of the legendary US funding agency DARPA – look like?

If we want to support visionary research, whose applications may be 10-20 years away, we should be prepared to be innovative – even experimental – in the way we fund research. And just as we need to be prepared for research not to work out as planned, we should be prepared to take some risks in the way we support it, especially if the result is less bureaucracy. There are some lessons to take from the long (and, it needs to be stressed, not always successful) history of ARPA/DARPA. To start with its operating philosophy, an agency inspired by ARPA should be built around the vision of the programme managers. But the operating philosophy needs to be underpinned by an enduring mission and by clarity about who the primary beneficiaries of the research should be. And finally, there needs to be a deep understanding of how the agency fits into a wider innovation landscape.

More reactions to “Resurgence of the Regions”

The celebrity endorsement of my “Resurgence of the regions” paper has led to a certain amount of press interest, which I summarise here.

The Times Higher naturally focuses on the research policy issues. I’m interviewed in the piece “Tory election victory sets scene for UK research funding battle”, which focuses on a perceived tension between a continuing emphasis on supporting “excellence” and disruptive innovation based on existing centres, and my agenda of boosting R&D in the regions to redress productivity imbalances.

Peter Franklin asks, in UnHerd, “Is this the Tories’ real manifesto?”

“Alas, no”, I expect is the answer to that question, but this article does a really great job of summarising the content of my paper. It also includes this hugely generous quotation from Stian Westlake: “The mini-storm over Dom Cummings citing @RichardALJones’s recent paper on innovation policy prompted me to re-read it, and *boy* is it good. I agree with more or less everything, and as a bonus it is delightfully written… On a couple of occasions I’ve been asked by a new science minister ‘what should I read on innovation?’, and it was always quite a hard question to answer. But now, I’d just say ‘read that’.”

I suspect Franklin’s excellent article was instrumental in focusing some wider attention on my paper. The Sunday Times’s Economics Editor, David Smith, agreed that “A renewed focus on innovation can deliver a resurgence in the regions”, while Oliver Wright, in the Times, focused on the industrial strategy implications of the net zero greenhouse gas target, and in particular nuclear energy, in a piece entitled “Reinvigorate north with nuclear power stations”.

It was left to Alan Lockey, writing in CapX, to point out the tension between the government activism I call for and more traditional laissez-faire Conservative attitudes, putting this tension at the centre of what he called “The coming battle for modern Conservatism”. On the one hand, Lockey described the arguments as being “a bit boring”, “comfort-zone industrial policy instincts of Ed Miliband-era social democracy” from “a hitherto politically obscure physicist”… but he also found it, “as an object lesson in how to construct an expansive and data-rich case for systemic public policy change … pretty near faultless. The ideas too, I find to be entirely unproblematic”. As he later graciously put it on Twitter, “I was merely just trying to convey that it seemed less controversial perhaps to those of us who are, basically, boring social democrats who see nothing wrong with industrial activism!”

On being endorsed by Dominic Cummings

The former chief advisor to the Prime Minister, Dominic Cummings, wrote a blogpost yesterday about the need for leave voters to mobilise to make sure the Conservatives are elected on the 12 December. At the end of the post, he writes “Ps. If you’re interested in ideas about how the new government could really change our economy for the better, making it more productive and fairer, you’ll find this paper interesting. It has many ideas about long-term productivity, science, technology, how to help regions outside the south-east and so on, by a professor of physics in Sheffield”. He’s referring to my paper “A Resurgence of the Regions: rebuilding innovation capacity across the whole UK”.

As I said on Twitter, “Pleased (I think) to see my paper “Resurgence of the regions” has been endorsed in Dominic Cummings’s latest blog. Endorsement not necessarily reciprocated, but all parties need to be thinking about how to grow productivity & heal our national divides”.

I provided a longer reaction to a Guardian journalist, which resulted in this story today: Academic praised by Cummings is remain-voting critic of Tory plans. Here are the comments I made to the journalist which formed the basis of the story:

I’m pleased that Dominic Cummings has endorsed my paper “Resurgence of the regions”. I think the analysis of the UK’s current economic weaknesses is important and we should be talking more about it in the election campaign. I single out the terrible record of productivity growth since the financial crisis, the consequences of that in terms of flat-lining wages, the role of the weak economy in the fiscal difficulties the government has in balancing the books, and (as others have done) the profound regional disparities in economic performance across the country. I’d like to think that Cummings shares this analysis – the persistence of these problems, though, is hardly a great endorsement for the last 9.5 years of Conservative-led government.

In response to these problems we’re going to need some radical changes in the way we run our economy. I think science and innovation is going to be important for this, and clearly Cummings thinks that too. I also offer some concrete suggestions for how the government needs to be more involved in driving innovation – especially in the urgent problem we have of decarbonising our energy supply to meet the target of net zero greenhouse gas emissions by 2050. It’s good that the Conservative Party has signed up to a 2050 Net Zero Greenhouse Gas target, but the scale of the measures it proposes is disappointingly timid – as I explain in my paper, reaching this goal is going to take much more investment, and more direct state involvement in driving innovation to increase the scale of low carbon energy and drive down its cost. This needs to be a central part of a wider industrial strategy.

I welcome all three parties’ commitment to raise the overall R&D intensity of the economy (to 2.4% of GDP by 2027 for the Conservatives, 3% of GDP by 2030 for Labour, 2.4% by 2027 with a longer term aspiration of 3% for the Lib Dems). The UK’s poor record of R&D investment compared to other developed countries is surely a big contributing factor to our stagnating productivity. But this is also a stretching target – we’re currently at 1.7%. It’s going to need substantial increases in public spending, but even bigger increases in R&D investment from the private sector, and we’re going to need to see much more concrete plans for how government might make this happen. Again, my paper has some suggestions, with a particular focus on building new capacity in those parts of the country where very little R&D gets done – and which, not coincidentally, have the worst economic performance (Wales, Northern Ireland, the North of England in particular).

As for Cummings’s views on Brexit: I voted remain, not least because I thought that a “leave” vote would result in a period of very damaging political chaos for the UK. I can’t say that subsequent events have made me think I was wrong on that. I do think that it would be possible for the UK to do ok outside the EU, but to succeed post-Brexit we’ll need to stay close to Europe in matters such as scientific cooperation (preferably through associating with EU science programmes like the European Research Council), and in matters related to nuclear technology. We will need to be a country that welcomes talented people from overseas, and provides an attractive destination for overseas investment – particularly important for innovation, where more than half of the UK’s business R&D is done by overseas-owned firms. The need to have a close relationship with our major trading partners will mean that we’ll need to stay in regulatory alignment with the EU (very important, for example, for the chemicals industry) and minimise frictions for industries like the automotive industry, where the UK is closely integrated into European supply chains, and for the high value knowledge-based services which are so important for the UK economy. It doesn’t look like that’s the direction of travel the Conservatives are currently going down.

Whatever happens in the next election, anyone who has any ambition to heal the economic and social divides in this country needs to be thinking about the issues I raise in my paper.

What do we mean by scientific productivity – and is it really falling?

This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”

I want to focus on the idea of scientific productivity – how it is defined and how we can measure it, whether it is declining, and, if it is, what we can do about it.

The output of science increases exponentially, by some measures…

…but what do we get back from that? What is the productivity of the scientific enterprise – the output of science, by some measure, per unit of input?

It depends on what we think the output of science is, of course.

We could be talking of some measure of the new science being produced and its impact within the scientific community.

But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?

There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists, that we have seen the best of economic growth and are living in a new age of stagnation.

Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling, which brought us exponential growth in computing power in the 80’s and 90’s, started to level off around 2004 and has since slowed to a crawl, despite continuing growth in resources devoted to it.

If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap

The UK’s programme to build a new generation of nuclear power stations is in deep trouble. Last month, Hitachi announced that it is pulling out of a project to build two new nuclear power stations in the UK; Toshiba had already announced last year that it was pulling out of the Moorside project.

The reaction to this news has been largely one of indifference. In one sense this is understandable – my own view is that it represents the inevitable unravelling of an approach to nuclear new build that was monumentally misconceived in the first place, maximising costs to the energy consumer while minimising benefits to UK industry. But many commentators have taken the news to indicate that nuclear power is no longer needed at all, and that we can achieve our goal of decarbonising our energy economy entirely on the basis of renewables like wind and solar. I think this argument is wrong. We should accelerate the deployment of wind and solar, but this is not enough for the scale of the task we face. The brutal fact is that if we don’t deploy new nuclear, it won’t be renewables that fill the gap, but more fossil fuels.

Let’s recall how much energy the UK actually uses, and where it comes from. In 2017, we used just over 2200 TWh. The majority of the energy we use – 1325 TWh – is in the form of directly burnt oil and gas. 730 TWh of energy inputs went in to produce the 350 TWh of electricity we used. Of that 350 TWh, 70 TWh came from nuclear, 61.5 TWh came from wind and solar, and another 6 TWh from hydroelectricity. Right now, our biggest source of low carbon electricity is nuclear energy.

But most of that nuclear power currently comes from the ageing fleet of Advanced Gas Cooled reactors. By 2030, all of our AGRs will be retired, leaving only Sizewell B’s 1.2 GW of capacity. In 2017, the AGRs generated a bit more than 60 TWh – by coincidence, almost exactly the same amount of electricity as the total from wind and solar.

The growth in wind and solar power in the UK in recent years has been tremendous – but there are two things we need to stress. Firstly, without nuclear new build, taking out the existing AGR fleet – as has to happen over the next decade – would entirely undo this progress. Secondly, in the context of the overall scale of the challenge of decarbonisation, the contribution of both nuclear and renewables to our total energy consumption remains small – currently less than 16%.

One very common response to this issue is to point out that the cost of renewables has now fallen so far that at the margin, it’s cheaper to bring new renewable capacity online than to build new nuclear. But this argument from marginal cost is only valid if you are only interested in marginal changes. If we’re happy with continuing to get around 80% of our energy from fossil fuels, then the marginal cost argument makes sense. But if we’re serious about making real progress towards decarbonisation – and I think the urgency of the climate change issue and the scale of the downside risks means we should be – then what’s important isn’t the marginal cost of low-carbon energy, but the whole system cost of replacing, not a few percent, but close to 100% of our current fossil fuel use.

So how much more wind and solar energy capacity can we realistically expect to be able to build? The obvious point here is that the total amount is limited – the UK is a small, densely populated, and not very sunny island – even in the absence of economic constraints, there are limits to how much of it can be covered in solar cells. And although its position on the fringes of the Atlantic makes it a very favourable location for offshore wind, there are not unlimited areas of the relatively shallow water that current offshore wind technology needs.

The current portfolio of offshore wind projects amounts to a capacity of 33.2 GW, with one further round of 7 GW planned. According to the most recent information I can find, “Industry says it could deliver 30GW installed by 2030”. If we assume the industry does a bit better than this, and delivers the entire current portfolio, that would produce about 120 TWh a year.

Solar energy produced 11.5 TWh in 2017. The very fast rate of growth that led us to that point has levelled off, due to changes in the subsidy regime. Nonetheless, there’s clearly room for further expansion, both of rooftop solar and grid scale installations. The most aggressive of the National Grid scenarios envisages a tripling of solar by 2030, to 32 TWh.

Thus by 2030, in the best case for renewables, wind and solar produce about 150 TWh of electricity, compared to our current total demand for electricity of 350 TWh. We can reasonably expect demand for electricity, all else equal, to slowly decrease as a result of efficiency measures. Estimating this by the long term rate of reduction of energy demand of 2% a year, we might hope to drive demand down to around 270 TWh by 2030. Where does that leave us? With all the new renewables, together with nuclear generation at its current level, we’d be generating 220 TWh out of 270 TWh. Adding on some biomass generation (currently about 35 TWh, much of which comes from burning environmentally dubious imported wood-chips), 6 TWh of hydroelectricity and some imported French nuclear power, and the job of decarbonising our electricity supply is nearly done. What would we do without the 70 TWh of nuclear power? We’d have to keep our gas-fired power stations running.
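A minimal sketch of the arithmetic behind these figures, reusing the numbers quoted above:

```python
# Back-of-envelope 2030 electricity balance, using the figures quoted above (TWh per year).

wind_2030 = 120       # entire current offshore wind portfolio delivered
solar_2030 = 32       # National Grid's most aggressive solar scenario
nuclear_2030 = 70     # nuclear output held at today's level (which requires new build)

demand_2017 = 350
years = 2030 - 2017
demand_2030 = demand_2017 * (1 - 0.02) ** years   # 2% a year efficiency-driven reduction

low_carbon = wind_2030 + solar_2030 + nuclear_2030
print(f"Projected electricity demand in 2030: ~{demand_2030:.0f} TWh")
print(f"Wind + solar + nuclear:               ~{low_carbon:.0f} TWh")
print(f"Gap to be met by biomass, hydro, imports or gas: ~{demand_2030 - low_carbon:.0f} TWh")
```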

But, but, but… most of the energy we use isn’t in the form of electricity – it’s directly burnt gas and oil. So if we are serious about decarbonising the whole energy system, we need to be reducing that massive 1325 TWh of direct fossil fuel consumption. The most obvious way of doing that is by shifting from directly burning oil and gas to using low-carbon electricity. This means that to get anywhere close to deep decarbonisation we are going to need to increase our consumption of electricity substantially – and then increase our capacity for low-carbon generation to match.

This is one driving force for the policy imperative to move away from internal combustion engines to electric vehicles. Despite the rapid growth of electric vehicles, we still use less than 0.2 TWh charging our electric cars. This compares with a total of 4.8 TWh of electricity used for transport, mostly for trains (at this point we should stop and note that we really should electrify all our mainline and suburban train-lines). But these energy totals are dwarfed by the 830 TWh of oil we burn in cars and trucks.

How rapidly can we expect to electrify vehicle transport? This is limited by economics, by the world capacity to produce batteries, by the relatively long lifetime of our vehicle stock, and by the difficulty of electrifying heavy goods vehicles. The most aggressive scenario looked at by the National Grid suggests electric vehicles consuming 20 TWh by 2030, a more than one-hundred-fold increase on today’s figures, representing 44% a year growth compounded. Roughly speaking, 1 TWh of electricity used in an electric vehicle displaces 3.25 TWh of oil – electric motors are much more efficient at energy conversion than internal combustion engines. So even at this aggressive growth rate, electric vehicles will only have displaced 8% of the oil burnt for transport. Full electrification of transport would require more than 250 TWh of new electricity generation, unless we are able to generate substantial new efficiencies.

Last, but not least, what of the 495 TWh of gas we burn directly, to heat our homes and hot water, and to drive industrial processes? A serious programme of home energy efficiency could make some inroads into this, we could make more use of ground source heat pumps, and we could displace some with hydrogen, generated from renewable electricity (which would help overcome the intermittency problem) or (in the future, perhaps) process heat from high temperature nuclear power stations. In any case, if we do decarbonise the domestic and industrial sectors currently dominated by natural gas, several hundred more TWh of electricity will be required.

So to achieve the deep decarbonisation we need by 2050, electricity generation will need to be more than doubled. Where could that come from? A further doubling of solar energy from our already optimistic 2030 estimate might take that to 60 TWh. Beyond that, for renewables to make deep inroads we need new technologies. Marine technologies – wave and tide – have potential, but in terms of possible capacity, deep offshore wind perhaps offers the biggest prize, with the Scottish Government estimating possible capacities up to 100 GW. But this is a new and untried technology, which will certainly be very much more expensive than current offshore wind. The problem of intermittency also substantially increases the effective cost of renewables at high penetrations, because of the need for large scale energy storage and redundancy. I find it difficult to see how the UK could achieve deep decarbonisation without a further expansion of nuclear power.

Coming back to the near future – keeping decarbonisation on track up to 2030 – we need to bring at least enough new nuclear on stream to replace the lost generation capacity of the AGR fleet, and preferably more, while at the same time accelerating the deployment of renewables. We need to be honest with ourselves about how little of our energy currently comes from low-carbon sources; even with the progress that’s been made deploying renewable electricity, most of our energy still arises from directly burning oil and gas. If we’re serious about decarbonisation, we need the rapid deployment of all low carbon energy sources.

And yet, our current policy for nuclear power is demonstrably failing. How should we do things differently, more quickly and at lower cost, to reboot the UK’s nuclear new build programme? That will be the subject of another post.

Notes on sources.
Current UK energy statistics are from the 2018 edition of the Digest of UK Energy Statistics.
Status of current and planned offshore wind capacity, from Crown Estates consultation.
National Grid future energy scenarios.
Oil displaced by electric vehicles – current estimates based on worldwide data, as reported by Bloomberg New Energy Finance.

What drives productivity growth in the UK economy?

How do you get economic growth? Economists have a simple answer – you can put in more labour, by having more people working for longer hours, or you can put in more capital, building more factories or buying more machines, or – and here things get a little more sketchy – you can find ways of innovating, of getting more outputs out of the same inputs. In the framework economists have developed for thinking about economic growth, the latter is called “total factor productivity”, and it is loosely equated with technological progress, taking this in its broadest sense. In the long run it is technological progress that drives improved living standards. Although we may not have a great theoretical handle on where total factor productivity comes from, its empirical study should tell us something important about the sources of our productivity growth. Or, in our current position of stagnation, why productivity growth has slowed down so much.
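One concrete, textbook way to write this down (a standard Cobb-Douglas sketch of growth accounting, not the specific method behind the figures below) is:

\[
Y = A\,K^{\alpha}L^{1-\alpha}
\qquad\Longrightarrow\qquad
\frac{\dot A}{A} \;=\; \frac{\dot Y}{Y} - \alpha\,\frac{\dot K}{K} - (1-\alpha)\,\frac{\dot L}{L},
\]

where \(Y\) is output, \(K\) capital, \(L\) labour and \(\alpha\) capital’s share of income. The residual \(\dot A/A\), the part of output growth not explained by putting in more capital and labour, is total factor productivity growth.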

Of course, the economy is not a uniform thing – some parts of it may be showing very fast technological progress, like the IT industry, while other parts – running restaurants, for example – might show very little real change over the decades. These differences emerge from the sector-based statistics that have been collected and analysed for the EU countries by the EU KLEMS Growth and Productivity Accounts database.

Sector percentage of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015. Data from EU KLEMS Growth and Productivity Accounts database.

Here’s a very simple visualisation of some key results of that data set for the UK. For each sector, the relative importance of the sector to the economy as a whole is plotted on the x-axis, expressed as a percentage of the gross value added of the whole economy. On the y-axis is plotted the total change in total factor productivity over the whole 17 year period covered by the data. This, then, is the factor by which that sector has produced more output than would be expected on the basis of additional labour and capital. This may tell us something about the relative effectiveness of technological progress in driving productivity growth in each of these sectors.
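A sketch of how such a plot can be built from the KLEMS data is below; the file name and column names are assumptions for illustration, not the layout of the actual KLEMS release.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of building the sector scatter plot from an EU KLEMS-style extract.
# The file name and column names here are assumed for illustration.
df = pd.read_csv("uk_klems_sectors.csv")

df["gva_share_pct"] = 100 * df["gva_share_2015"]
df["tfp_change_pct"] = 100 * (df["tfp_index_2015"] / df["tfp_index_1998"] - 1)

fig, ax = plt.subplots()
ax.scatter(df["gva_share_pct"], df["tfp_change_pct"])
for _, row in df.iterrows():
    ax.annotate(row["sector"], (row["gva_share_pct"], row["tfp_change_pct"]))
ax.set_xlabel("Share of 2015 economy by GVA (%)")
ax.set_ylabel("Total factor productivity change, 1998-2015 (%)")
plt.show()
```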

Broadly, one can read this graph as follows: the further right a sector is, the more important it is as a proportion of the whole economy, while the nearer the top a sector is, the more dynamic its performance has been over the 17 years covered by the data. Before a more detailed discussion, we should bear in mind some caveats. What goes into these numbers are the same ingredients as go into the measurement of GDP as a whole, so all the shortcomings of that statistic are potentially issues here.

A great starting point for understanding these issues is Diane Coyle’s book GDP: A Brief but Affectionate History. The first set of issues concerns what GDP measures and what it doesn’t measure. Lots of kinds of activity are important for the economy, but they only tend to count in GDP if money changes hands. New technology can shift these balances – if supermarkets replace humans at the checkouts by machines, the groceries still have to be scanned, but now the customer is doing the work for nothing.

Then there are some quite technical issues about how the measurements are done. This includes properly accounting for improvements in quality where technology is advancing very quickly; failing to fully account for the increased information transferred through a typical internet connection will mean that overall inflation will be overestimated, and productivity gains in the ICT sector will be understated (see e.g. A Comparison of Approaches to Deflating Telecoms Services Output, PDF). For some of the more abstract transactions in the modern economy – particularly in the banking and financial services sector – some big assumptions have to be made about where and how much value is added. For example, the method used to estimate the contribution of financial services – FISIM, for “Financial intermediation services indirectly measured” – has probably materially overstated the contribution of financial services to GDP by not handling risk correctly, as argued in this recent ONS article.
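The deflator arithmetic behind that point is simple; the numbers in this example are made up for illustration:

\[
\text{real output growth} \;\approx\; \text{nominal output growth} \;-\; \text{measured inflation}.
\]

If nominal spending on telecoms services grows by 3% and the official deflator records prices as flat, measured real growth is 3%; but if quality-adjusted prices actually fell by 10%, because each connection now carries far more data, true real growth is nearer 13%, and the shortfall appears as understated productivity.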

Finally, there’s the big question of whether increases in GDP correspond to increases in welfare. The general answer to this question is, obviously, not necessarily. Unlike some commentators, I don’t take this to mean that we shouldn’t take any notice of GDP – it is an important indicator of the health of an economy and its potential to supply people’s needs. But it does need looking at critically. A glazing company that spent its nights breaking shop windows and its days mending them would be increasing GDP, but not doing much for welfare – this is a ridiculous example, but there’s a continuum running from what the economist William Baumol called unproductive entrepreneurship, through the more extractive varieties of capitalism documented by Acemoglu and Robinson, to outright organised crime.

To return to our plot, we might focus first on three dynamic sectors – information and communications, manufacturing, and professional, scientific, technical and admin services. Between them, these sectors account for a bit more than a quarter of the economy, and have shown significant improvements in total factor productivity over the period. In this sense it’s been ICT, manufacturing and knowledge-based services that have driven the UK economy over this period.

Next we have a massive sector that is important, but not yet dynamic, in the sense of having demonstrated slightly negative total factor productivity growth over the period. This comprises community, personal and social services – notably including education, health and social care. Of course, in service activities like health and social care it’s very easy to mischaracterise as a lowering of productivity a change that actually corresponds to an increase in welfare. On the other hand, I’ve argued elsewhere that we’ve not devoted enough attention to the kinds of technological innovation in health and social care sectors that could deliver genuine productivity increases.

Real estate is a sector that is both significant in size and has shown significant apparent increases in total factor productivity. This is a point at which I think one should question the nature of the value added. A real estate business makes money by taking a commission on property transactions; hence an increase in property prices, given constant transaction volume, leads to an apparent increase in productivity. Yet I’m not convinced that a continuous increase in property prices represents the economy generating real value for people.

Finance and insurance represents a significant part of the economy – 7% – but its overall long term increase in total factor productivity is unimpressive, and probably overstated. The importance of this sector in thinking about the UK economy represents a distortion of our political economy.

The big outlier at the bottom left of the plot is mining and quarrying, whose total factor productivity has dropped by 50% – what isn’t shown is that its share of the economy has substantially fallen over the period too. The biggest contributor to this sector is North Sea oil, whose production peaked around 2000 and which has since been rapidly falling. The drop in total factor productivity does not, of course, mean that technological progress has gone backwards in this sector. Quite the opposite – as the easy oil fields are exhausted, more resource – and better technology – are required to extract what remains. This should remind us of one massive weakness in GDP as a sole measure of economic progress – it doesn’t take account of the balance sheet, of the non-renewable natural resources we use to create that GDP. The North Sea oil has largely gone now and this represents an ongoing headwind to the UK economy that will need more innovation in other sectors to overcome.

This approach is limited by the way the economy needs to be divided up into sectors. Of course, this sectoral breakdown is very coarse – within each sector there are likely to be outliers with very high total productivity growth which dramatically pull up the average of the whole sector. More fundamentally, it’s not obvious that the complex, networked nature of the modern economy is well captured by these rather rigid barriers. Many of the most successful manufacturing enterprises add big value to their products with the services that come attached to them, for example.

We can look into the EU Klems data at a slightly finer grained level; the next plot shows importance and dynamism for the various subsectors of manufacturing. This shows well the wide dispersions within the overall sectors – and of course within each of these subsectors there will be yet more dispersion.

Sub-sector fraction of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015 for subsectors of manufacturing. Data from EU KLEMS Growth and Productivity Accounts database.

The results are perhaps unsurprising – areas traditionally considered part of high value manufacturing – transport equipment and chemicals, which include aerospace, automotive, pharmaceuticals and speciality chemicals – are found in the top right quadrant, important in terms of their share of the economy, dynamic in terms of high total factor productivity growth. The good total factor productivity performance of textiles is perhaps more surprising, for an area often written off as part of our industrial heritage. It would be interesting to look in more detail at what’s going on here, but I suspect that a big part of it could be the value that can be added by intangibles like branding and design. Total factor productivity is not just about high tech and R&D, important though the latter is.

Clearly this is a very superficial look at a very complicated area. Even within the limitations of the EU Klems data set, I’ve not considered how rates of TFP growth have varied by time – before and after the global financial crisis, for example. Nor have I considered the way shifts between sectors have contributed to overall changes in productivity across the economy – I’ve focused only on rates, not on starting levels. And of course, we’re talking here about history, which isn’t always a good guide to the future, where there will be a whole new set of technological opportunities and competitive challenges. But as we start to get serious about industrial strategy, these are the sorts of questions that we need to be looking into.

Innovation, regional economic growth, and the UK’s productivity problem

A week ago I gave a talk with this title at a conference organised by the Smart Specialisation Hub. This organisation was set up to help regional authorities in developing their economic plans; given the importance of local industrial strategies in the government’s overall industrial strategy its role becomes all the more important.

Other speakers at the conference represented central government, the UK’s innovation agency InnovateUK, and the Smart Specialisation Hub itself. Representing no-one but myself, I was able to be more provocative in my own talk, which you can download here (PDF, 4.7 MB).

My talk had four sections. Opening with the economic background, I argued that the UK’s stagnation in productivity growth and its regional economic inequality have broken our political settlement. Looking at what’s going on in Westminster at the moment, I don’t think this is an exaggeration.

I went on to discuss the implications of the 2.4% R&D target – it’s not ambitious by developed world standards, but will be a stretch from our current position, as I discussed in an earlier blogpost: Reaching the 2.4% R&D intensity target.

Moving on to the regional aspects of research and innovation policy, I argued (as I did in this blog post: Making UK Research and Innovation work for the whole UK) that the UK’s regional concentration of R&D (especially public sector) is extreme and must be corrected. To illustrate this point, I used this version of Tom Forth’s plot splitting out the relative contributions of public and private sector to R&D regionally.

I argued that this plot gives a helpful framework for thinking about the different policy interventions needed in different parts of the country. I summarised this in this quadrant diagram [1].
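The diagram itself is not reproduced here, but the classification it summarises can be sketched as follows; the per-head figures and thresholds are placeholders, not the underlying data.

```python
# Illustrative sketch of the quadrant classification: regions placed according to public
# and business R&D spending per head relative to the UK average. Figures are placeholders.

uk_avg_public, uk_avg_business = 100.0, 200.0   # £ per head, assumed for illustration

regions = {
    # region: (public R&D per head, business R&D per head) -- not real data
    "East of England": (160, 520),
    "London": (180, 150),
    "North West": (70, 260),
    "Wales": (50, 90),
}

for region, (public, business) in regions.items():
    quadrant = (("high" if public >= uk_avg_public else "low") + " public / "
                + ("high" if business >= uk_avg_business else "low") + " business R&D")
    print(f"{region}: {quadrant}")
```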

Finally, I discussed the University of Sheffield’s Advanced Manufacturing Research Centre as an example of the kind of initiative that can help regenerate the economy of a de-industrialised area. Here a focus on translational research & skills at all levels both drives inward investment by international firms at the technology frontier & helps the existing business base upgrade.

I set this story in the context of Shih and Pisano’s notion of the “industrial commons” [2] – a set of resources that supports the collective knowledge, much of it tacit, that drives innovations in products and processes in a successful cluster. A successful industrial commons is rooted in a combination of large anchor companies & institutions, networks of supplying companies, R&D facilities, informal knowledge networks and formal institutions for training and skills. I argue that a focus of regional economic policy should be a conscious attempt to rebuild the “industrial commons” in an industrial sector which allows the opportunities of new technology to be embraced, yet which works with the grain of the existing industry and institutional base. “Smart specialisation” provides a good framework for identifying the right places to look.

1. As a participant later remarked, I’ve omitted the South East from this diagram – it should be in the bottom right quadrant, albeit with less business R&D than East Anglia, though with the benefits more widely spread.

2. See Pisano, G. P., & Shih, W. C. (2009). Restoring American Competitiveness. Harvard Business Review, 87(7-8), 114–125.

The semiconductor industry and economic growth theory

In my last post, I discussed how “econophysics” has been criticised for focusing on exchange, not production – in effect, for not concerning itself with the roots of economic growth in technological innovation. Of course, some of that technological innovation has arisen from physics itself – so here I talk about what economic growth theory might learn from an important episode of technological innovation with its origins in physics – the development of the semiconductor industry.

Economic growth and technological innovation

In my last post, I criticised econophysics for not talking enough about economic growth – but to be fair, it’s not just econophysics that suffers from this problem – mainstream economics doesn’t have a satisfactory theory of economic growth either. And yet economic growth and technological innovation provide an all-pervasive background to our personal economic experience. We expect to be better off than our parents, who were themselves better off than our grandparents. Economics without a theory of growth and innovation is like physics without an arrow of time – a marvellous intellectual construction that misses the most fundamental observation of our lived experience.

Defenders of economics at this point will object that it does have theories of growth, and there are even some excellent textbooks on the subject [1]. Moreover, they might remind us, wasn’t the Nobel Prize for economics awarded this year to Paul Romer, precisely for his contribution to theories of economic growth? This is indeed so. The mainstream approach to economic growth pioneered by Robert Solow regarded technological innovation as something externally imposed, and Romer’s contribution has been to devise a picture of growth in which technological innovation arises naturally from the economic models – the “post-neoclassical endogenous growth theory” that ex-Prime Minister Gordon Brown was so (unfairly) lampooned for invoking.

This body of work has undoubtedly highlighted some very useful concepts, stressing the non-rivalrous nature of ideas and the economic basis for investments in R&D, especially for the day-to-day business of incremental innovation. But it is not a theory in the sense a physicist might understand the term – it doesn’t explain past economic growth, so it can’t make predictions about the future.

How the information technology revolution really happened

Perhaps to understand economic growth we need to turn to physics again – this time, to the economic consequences of the innovations that physics provides. Few would disagree that a – perhaps the – major driver of technological innovation, and thus economic growth, over the last fifty years has been the huge progress in information technology, with the exponential growth in the availability of computing power that is summed up by Moore’s law.

The modern era of information technology rests on the solid-state transistor, which was invented by William Shockley at Bell Labs in the late 1940’s (with Brattain and Bardeen – the three received the 1956 Nobel Prize for Physics). In 1956 Shockley left Bell Labs and went to Palo Alto (in what would later be called Silicon Valley) to found a company to commercialise solid-state electronics. However, his key employees in this venture soon left – essentially because he was, by all accounts, a horrible human being – and founded Fairchild Semiconductor in 1957. Key figures amongst those refugees were Gordon Moore – of eponymous law fame – and Robert Noyce. It was Noyce who, in 1960, made the next breakthrough, inventing the silicon integrated circuit, in which a number of transistors and other circuit elements were combined on a single slab of silicon to make an integrated functional device. Jack Kilby, at Texas Instruments, had, more or less at the same time, independently developed an integrated circuit on germanium, for which he was awarded the 2000 Physics Nobel prize (Noyce, having died in 1990, was unable to share this). Integrated circuits didn’t take off immediately, but according to Kilby it was their use in the Apollo mission and the Minuteman ICBM programme that provided a turning point in their acceptance and widespread use [2] – the Minuteman II guidance and control system was the first mass produced computer to rely on integrated circuits.

Moore and Noyce founded the electronics company Intel in 1968, to focus on developing integrated circuits. Moore had already, in 1965, formulated his famous law about the exponential growth with time of the number of transistors per integrated circuit. The next step was to incorporate all the elements of a computer on a single integrated circuit – a single piece of silicon. Intel duly produced the first commercially available microprocessor – the 4004 – in 1971, though this had been (possibly) anticipated by the earlier microprocessor that formed the flight control computer for the F14 Tomcat fighter aircraft. From these origins emerged the microprocessor revolution and personal computers, with their giant wave of derivative innovations, leading up to the current focus on machine learning and AI.

Lessons from Moore’s law for growth economics

What should be clear from this very brief account is that classical theories of economic growth cannot account for this wave of innovation. The motivations that drove it were not economic – they arose from a powerful state with enormous resources at its disposal pursuing complex, but entirely non-economic projects – such as the goal of being able to land a nuclear weapon on any point of the earth’s surface with an accuracy of a few hundred meters.

Endogenous growth theories perhaps can give us some insight into the decisions companies made about R&D investment and the wider spillovers that such spending led to. They would need to take account of the complex institutional landscape that gave rise to this innovation. This isn’t simply a distinction between public and private sectors – the original discovery of the transistor was made at Bell Labs – nominally in the private sector, but sustained by monopoly rents arising from government action.

The landscape in which this innovation took place seems much more complex than growth economics, with its array of firms employing undifferentiated labour and capital, all benefiting from some kind of soup of spillovers, is able to handle. Semiconductor fabs are perhaps the most capital intensive plants in the world, with just a handful of bunny-suited individuals tending a clean-room full of machines that individually might be worth tens or even hundreds of millions of dollars. Yet the value of those machines represents, as much as anything physical, the embodied value of the intangible investments in R&D and process know-how.

How are the complex networks of equipment and materials manufacturers coordinated to make sure technological advances in different parts of this system happen at the right time and in the right sequence? These are independent companies operating in a market – but the market alone has not been sufficient to transmit the information needed to keep it coordinated. An enormously important mechanism for this coordination has been the National Technology Roadmap for Semiconductors (later the International Technology Roadmap for Semiconductors), initiated by a US trade body, the Semiconductor Industry Association. This was an important social innovation which allowed companies to compete in meeting collaborative goals; it was supported by the US government by the relaxation of anti-trust law and the foundation of a federally funded organisation to support “pre-competitive” research – SEMATECH.

The involvement of the US government reflected the importance of the idea of competition between nation states in driving technological innovation. Because of the cold war origins of the integrated circuit, the original competition was with the Soviet Union, which created an industry to produce ICs for military use, based around Zelenograd. The degree to which this industry was driven by indigenous innovation as against the acquisition of equipment and know-how from the west isn’t clear to me, but it seems that by the early 1980’s the gap between Soviet and US achievements was widening, contributing to the sense of stagnation of the later Brezhnev years and the drive for economic reform under Gorbachev.

From the 1980’s, the key competitor was Japan, whose electronics industry had been built up in the 1960’s and 70’s driven not by defense, but by consumer products such as transistor radios, calculators and video recorders. In the mid-1970’s the Japanese government’s MITI provided substantial R&D subsidies to support the development of integrated circuits, and by the late 1980’s Japan appeared within sight of achieving dominance, to the dismay of many commentators in the USA.

That didn’t happen, and Intel still remains at the technological frontier. Its main rivals now are Korea’s Samsung and Taiwan’s TSMC. Their success reflects different versions of the East Asian developmental state model; Samsung is Korea’s biggest industrial conglomerate (or chaebol), whose involvement in electronics was heavily sponsored by its government. TSMC was a spin-out from a state-run research institute in Taiwan, ITRI, which grew by licensing US technology and then very effectively driving process improvements.

Could one build an economic theory that encompasses all this complexity? For me, the most coherent account has been Bill Janeway’s description of the way government investment combines with the bubble dynamics that drives venture capitalism, in his book “Doing Capitalism in the Innovation Economy”. Of course, the idea that financial bubbles are important for driving innovation is not new – that’s how the UK got a railway network, after all – but the econophysicist Didier Sornette has extended this to introduce the idea of a “social bubble” driving innovation[3].

This long story suggests that the ambition of economics to “endogenise” innovation is a bad idea, because history tells us that the motivations for some of the most significant innovations weren’t economic. To understand innovation in the past, we don’t just need economics, we need to understand politics, history, sociology … and perhaps even natural science and engineering. The corollary of this is that devising policy solely on the basis of our current theories of economic growth is likely to lead to disappointing outcomes. At a time when the remarkable half-century of exponential growth in computing power seems to be coming to an end, it’s more important than ever to learn the right lessons from history.

[1] I’ve found “Introduction to Modern Economic Growth”, by Daron Acemoglu, particularly useful.

[2] Jack Kilby: Nobel Prize lecture, https://www.nobelprize.org/uploads/2018/06/kilby-lecture.pdf

[3] See also that great authority, The Onion: “Recession-Plagued Nation Demands New Bubble to Invest In”.

The Physics of Economics

This is the first of two posts which began life as a single piece with the title “The Physics of Economics (and the Economics of Physics)”. In the first section, here, I discuss some ways physicists have attempted to contribute to economics. In the second half, I turn to the lessons that economics should learn from the history of a technological innovation with its origin in physics – the semiconductor industry.

Physics and economics are two disciplines which have quite a lot in common – they’re both mathematical in character, many of their practitioners are not short of intellectual self-confidence – and they both have imperialist tendencies towards their neighbouring disciplines. So the interaction between the two fields should be, if nothing else, interesting.

The origins of econophysics

The most concerted attempt by physicists to colonise an area of economics concerns the behaviour of financial markets – the field which calls itself “econophysics”. Actually, at its origins, the traffic went both ways – the mathematical theory of random walks that Einstein developed to explain the phenomenon of Brownian motion had been anticipated by the French mathematician Bachelier, who derived the theory to explain the movements of stock markets. Much later, the economic theory that markets are efficient brought this line of thinking back into vogue – it turns out that financial markets can quite often be modelled as simple random walks – but not quite always. The random steps that markets take aren’t drawn from a Gaussian distribution – the distribution has “fat tails”, so rare events – like big market crashes – aren’t anywhere near as rare as simple theories assume.
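A small simulation makes the point; the parameters here are purely illustrative, with a Student-t distribution standing in for a fat-tailed return distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 0.01  # daily "return" scale, illustrative

# Gaussian steps versus fat-tailed (Student-t, 3 degrees of freedom) steps,
# rescaled so both have the same standard deviation.
gaussian = rng.normal(0.0, sigma, n)
dof = 3
student_t = rng.standard_t(dof, n) * sigma / np.sqrt(dof / (dof - 2))

threshold = 4 * sigma  # a "4-sigma" move
print("Probability of a move beyond 4 sigma:")
print(f"  Gaussian steps  : {np.mean(np.abs(gaussian) > threshold):.1e}")
print(f"  Fat-tailed steps: {np.mean(np.abs(student_t) > threshold):.1e}")
```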

Empirically, it turns out that the distributions of these rare events can sometimes be described by power laws. In physics power laws are associated with what are known as critical phenomena – behaviours such as the transition from a liquid to a gas or from a magnet to a non-magnet. These phenomena are characterised by a certain universality, in the sense that the quantitative laws – typically power laws – that describe the large scale behaviour of these systems don’t strongly depend on the details of the individual interactions between the elementary objects (the atoms and molecules, in the case of magnetism and liquids) whose interaction leads collectively to the larger scale phenomenon we’re interested in.

For “econophysicists” – whose background often has been in the study of critical phenomena – it is natural to try and situate theories of the movements of financial markets in this tradition, finding analogies with other places where power laws can be found, such as the distribution of earthquake sizes and the behaviour of sand-piles. In terms of physicists’ actual impact on participants in financial markets, though, there’s a paradox. Many physicists have found (often very lucrative) employment as quantitative traders, but the theories that academic physicists have developed to describe these markets haven’t made much impact on the practitioners of financial economics, who have their own models to describe market movements.

Other ideas from physics have made their way into discussions about economics. Much of classical economics depends on ideas like the “representative household” or the “representative firm”. Physicists with a background in statistical mechanics recognise this sort of approach as akin to a “mean field theory”. The idea that a complex system is well represented by its average member is one that can be quite fruitful, but in some important circumstances fails – and fails badly – because the fluctuations around the average become as important as the average itself. This motivates the idea of agent based models, to which physicists bring the hope that even simple “toy” models can bring insight. The Schelling model is one such very simple model that came from economics, but which has a formal similarity with some important models in physics. The study of networks is another place where one learns that the atypical can be disproportionately important.
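As an illustration of how far a simple agent-based “toy” model can get, here is a minimal one-dimensional Schelling-style sketch: two types of agent on a line, each content only if at least half of their occupied neighbours share their type, with discontented agents moving to a random empty cell. The parameters are arbitrary.

```python
import random

random.seed(0)
size, empty_fraction, window, tolerance = 200, 0.1, 4, 0.5

# Set up an alternating line of two agent types, punch some holes in it, and shuffle.
cells = (["A", "B"] * (size // 2))[:size]
for i in random.sample(range(size), int(empty_fraction * size)):
    cells[i] = None
random.shuffle(cells)

def unhappy(i):
    """An agent is unhappy if fewer than `tolerance` of its occupied neighbours share its type."""
    agent = cells[i]
    if agent is None:
        return False
    neighbours = [cells[j] for j in range(max(0, i - window), min(size, i + window + 1))
                  if j != i and cells[j] is not None]
    return bool(neighbours) and neighbours.count(agent) / len(neighbours) < tolerance

def like_neighbour_fraction():
    pairs = [(cells[i], cells[i + 1]) for i in range(size - 1) if cells[i] and cells[i + 1]]
    return sum(a == b for a, b in pairs) / len(pairs)

print(f"Like-type adjacent pairs before: {like_neighbour_fraction():.2f}")
for _ in range(5000):
    movers = [i for i in range(size) if unhappy(i)]
    if not movers:
        break
    mover = random.choice(movers)
    empty = random.choice([k for k in range(size) if cells[k] is None])
    cells[empty], cells[mover] = cells[mover], None
print(f"Like-type adjacent pairs after:  {like_neighbour_fraction():.2f}")
```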

If markets are about information, then physics should be able to help…

One very attractive emerging application of ideas from physics to economics concerns the place of information. Friedrich Hayek stressed the compelling insight that one can think of a market as a mechanism for aggregating information – but a physicist should understand that information is something that can be quantified, and (via Shannon’s theory) that there are hard limits on how much information can be transmitted in a physical system. Jason Smith’s research programme builds on this insight to analyse markets in terms of an information equilibrium[1].

Some criticisms of econophysics

How significant is econophysics? A critique from some (rather heterodox) economists – Worrying trends in econophysics – is now more than a decade old, but still stings (see also this commentary from the time by Cosma Shalizi – Why Oh Why Can’t We Have Better Econophysics?). Some of the criticism is methodological – and could be mostly summed up by saying, just because you’ve got a straight bit on a log-log plot doesn’t mean you’ve got a power law. Some criticism is about the norms of scholarship – in brief: read the literature and stop congratulating yourselves for reinventing the wheel.
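That methodological point is easy to demonstrate; in this rough sketch the data are drawn from a log-normal distribution, which is heavy-tailed but has no power-law tail, and a naive straight-line fit on log-log axes reports an apparent exponent anyway.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heavy-tailed but NOT power-law distributed data.
sample = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Empirical complementary CDF (survival function), largest values last.
x = np.sort(sample)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

# Naive straight-line fit to the top 10% of the tail on log-log axes
# (the final point, where the CCDF is zero, is dropped).
tail = slice(int(0.9 * len(x)), -1)
slope, _ = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)

print(f"Apparent power-law tail exponent from the log-log fit: {-slope:.2f}")
print("...even though the data were generated from a log-normal distribution.")
```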

But the most compelling criticism of all is about the choice of problem that econophysics typically takes. Most attention has been focused on the behaviour of financial markets, not least because these provide a wealth of detailed data to analyse. But there’s more to the economy – much, much more – than the financial markets. More generally, the areas of economics that physicists have tended to apply themselves to have been about exchange, not production – studying how a fixed pool of resources can be allocated, not how the size of the pool can be increased.

[1] For a more detailed motivation of this line of reasoning, see this commentary, also from Cosma Shalizi on Francis Spufford’s great book “Red Plenty” – “In Soviet Union, Optimization Problem Solves You”.