Reaching the 2.4% R&D intensity target

I had a rather difficult and snowy journey to London yesterday to give evidence to the House of Commons Business, Energy and Industrial Strategy Select Committee (video here, from 11.10). The subject was Industrial Strategy, and I was there as a member of the Industrial Strategy Commission, whose final report was published last November.

One of the questions I was asked was about the government’s new target of achieving an overall R&D intensity of 2.4% of GDP by 2027, as set out in its recent Industrial Strategy White Paper. Given that our starting point is about 1.7%, where it has been stuck for many years, was this target achievable? I replied a little non-committally. I’d reminded the committee about the long-term fall in the UK’s R&D intensity since 1980, and the failure of what I’ve called “supply-side innovation policy”, as I discussed at length in my paper The UK’s innovation deficit and how to repair it, which also highlights earlier governments’ failures to meet similar targets. But it’s worth looking in more detail at the scale of the ambition here. My plot shows actual R&D spending up to 2015, and then the growth that would be required to achieve a 2.4% target.


R&D expenditure in the UK, adjusted for inflation. Data points show actual expenditure up to 2015 (source: ONS GERD statistics, March 2017 release); the lines are projections of the growth that would be required to meet a target of 2.4% by 2027. Solid lines assume that GDP grows according to the latest forecasts of the Office for Budget Responsibility up to 2022, and then at 1.6% pa thereafter. Dotted lines assume no growth in GDP at all.

One obvious point (and drawback) about expressing the target as a percentage of GDP is that the worse the economy does, the less demanding the target is. I’ve taken account of this effect by modelling two scenarios. In the first, I’ve assumed the rates of growth predicted out to 2022 by the Office for Budget Responsibility in its latest forecasts. These are not particularly optimistic, predicting annual growth rates in the range 1.3% – 1.8%; after 2022 I’ve assumed a constant growth rate of 1.6%, their final forecast value. In the second, I’ve assumed no growth in GDP at all. One hopes that this is a lower bound. In both cases, I’ve assumed that the overall balance between public and private sector funding for R&D remains the same.

In the modest growth scenario, total R&D spending needs to increase by £22 billion (41%) between 2015 and 2027, from £32 billion to £54 billion. To put this into perspective, spending increased by £6.6 billion (26%) in the 11 years from 2004 to 2015.
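To make the arithmetic concrete, here is a minimal sketch of the kind of projection behind the plot. The GDP level and the constant 1.6% growth rate are simplifying assumptions of mine (the OBR forecasts vary year by year), so the output will differ slightly from the figures quoted above.

```python
# Minimal sketch: what must total R&D spending reach in 2027 to hit 2.4% of GDP,
# and what annual growth in R&D does that imply?
# The GDP level and constant growth rate are illustrative assumptions.

gdp_2015 = 1.9e12          # assumed UK GDP in 2015, GBP (chosen so that 32bn is ~1.7%)
rd_2015 = 32e9             # total R&D spending in 2015, GBP (the ONS GERD figure above)
gdp_growth = 0.016         # assumed constant real GDP growth, 1.6% a year
target_intensity = 0.024   # the 2.4% target

years = 2027 - 2015
gdp_2027 = gdp_2015 * (1 + gdp_growth) ** years
rd_2027 = target_intensity * gdp_2027

extra_spending = rd_2027 - rd_2015
rd_growth_needed = (rd_2027 / rd_2015) ** (1 / years) - 1

print(f"R&D needed in 2027: £{rd_2027 / 1e9:.0f}bn "
      f"(an extra £{extra_spending / 1e9:.0f}bn, {rd_growth_needed:.1%} a year)")
```

On these assumptions, total R&D spending would have to grow by well over 4% a year in real terms, roughly double the rate actually achieved between 2004 and 2015.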

Some of this spending is directly controlled by the government. My plot splits the spending by where the research is carried out – in 2015, research in government and research council laboratories and in the universities amounted to one third of the total – £10.7 billion. This would need to increase by £7.4 billion.

As the plot shows, the government’s part of R&D has been essentially flat since 2004. In the Autumn Budget, the government announced R&D increases amounting to £2.3 billion by 2021-22. This is significant, but not enough – it would need to be more like £3.5 billion to meet the trajectory to the target. There is of course a question about whether the research capacity of the UK is sufficient to absorb sums of this magnitude, and indeed whether we have the ability to make sensible choices about spending it.

But most of the spending is not in the control of the government – it happens in businesses. This needs to rise by about £14 billion, from £21 billion to £35 billion.

How could that happen? Graeme Reid has set this out in an excellent article. There are essentially three options: existing businesses could increase their R&D, entirely new R&D intensive businesses could be created, and overseas companies could be persuaded to locate R&D facilities in the UK.

How can the government influence these decisions? One way is through direct subsidy, and it is perhaps not widely enough appreciated how much the government already does this. The R&D tax credit is essentially an indiscriminate subsidy for private sector R&D, whose value currently amounts to £2.9 billion. Importantly, this does not form part of the science budget. More targeted subsidies for private sector R&D come through collaborative research sponsored through InnovateUK (and, for the moment at least, the EU’s Framework Programme). In addition, private sector R&D can be supported indirectly through the provision of translational R&D centres whose costs are shared between government and industry, like Germany’s Fraunhofer Institutes. The UK’s Catapult Centres are an attempt to fill this gap, though on a scale that is as yet much too small.

Business R&D did slowly increase in real terms between 2004 and 2015. It is important to realise, though, that these gradual shifts in the aggregate figure conceal some quite big swings at a sector level.


R&D expenditure in selected sectors, from the November 2017 ONS BERD release. Figures have been adjusted to 2016 constant £s using GDP deflators.

This is illustrated in my second plot, showing inflation corrected business R&D spend from selected sectors. This shows the dramatic fall in pharmaceutical R&D – more than £1 billion, or 22% – from its 2011 peak, and the even more dramatic increase in automotive R&D – £2.5 billion, or 274%, from its 2006 low point. We need to understand what’s behind these swings in order to design policy to support R&D in each sector.

So, is the 2.4% target achievable? Possibly, and it’s certainly worth trying. But I don’t think we know how to do it now. The challenge to industrial strategy and science and innovation policy is to change that.

An intangible economy in a material world

Thirty years ago, Kodak dominated the business of making photographs. It made cameras, sold film, and employed 140,000 people. Now Instagram handles many more images than Kodak ever did, but when it was sold to Facebook in 2012, it employed 13 people. This striking comparison was made by Jaron Lanier in his book “Who Owns the Future?”, to stress the transition we have made to a world in which value is increasingly created, not from plant and factories and manufacturing equipment, but from software, from brands, from business processes – in short, from intangibles. The rise of the intangible economy is the theme of a new book by Jonathan Haskel and Stian Westlake, “Capitalism without Capital”. This is a marvellously clear exposition of what makes investment in intangibles different from investment in the physical capital of plant and factories.

These differences are summed up in a snappily alliterative four S’s. Intangible assets are scalable: having developed a slick business process for selling over-priced coffee, Starbucks could very rapidly expand all over the world. The costs of developing intangible assets are sunk: having spent a lot of money building a brand, if the business doesn’t work out it’s much more difficult to recover those costs than it would be to sell a fleet of vans. Intangibles have spillovers – despite best efforts to protect intellectual property and keep the results secret, the new knowledge developed in a company’s research programme inevitably leaks out, benefitting other companies and society at large in ways that the originating firm can’t capture. And intangibles demonstrate synergies – the value of many ideas together is usually greater – often very much greater – than the sum of the parts.

These characteristics are a challenge to our conventional way of thinking about how economies work. Haskel and Westlake convincingly argue that these new characteristics could help explain some puzzling and unsatisfactory characteristics of our economy now – the stagnation we’re seeing in productivity growth, and the growth of inequality.

But how has this situation arisen? To what extent is the growth of the intangible economy inevitable, and how much arises from political and economic choices our society has made?

Let’s return to the comparison between Kodak and Instagram that Jaron Lanier makes – a comparison which I think is fundamentally flawed. The impact of mass digitisation of images is obvious to everyone who has a smartphone. But just because the images are digital doesn’t mean they don’t need physical substrates to capture, store and display them. Instagram may be a company based entirely on intangible assets, but it couldn’t exist without a massive material base. The smartphones themselves are physical artefacts of enormous sophistication, the product of supply chains of great complexity, with materials and components being made in many factories that themselves use much expensive, sophisticated and very physical plant. And while we might think of the “cloud” as some disembodied place where the photographs live, the cloud is, as someone said, just someone else’s computer – or more accurately, someone else’s giant, energy-hogging server farm.

Much of the intangible economy only has value inasmuch as it is embodied in physical products. This, of course, has always been true. The price of an expensive clock made by an 18th century craftsman embodied the skill and knowledge of its maker, built up through investment in learning the trade, the networks of expertise in which so much tacit knowledge was embedded, and the value of the brand the maker had built up. So what’s changed? We still live in a material world, and these intangible investments, important as they are, are still largely realised in physical objects.

It seems to me that the key difference isn’t so much that an intangible economy has grown in place of a material economy; it’s that we’ve moved to a situation in which the relative contributions of the material and the intangible have become much more separable. Airbnb isn’t an entirely ethereal operation; you might book your night away through a slick app, but it’s still bricks and mortar that you stay in. The difference between Airbnb and a hotel chain lies in the way ownership and management of the accommodation is separated from the booking and rating systems. How much of this unbundling is inevitable, and how much is the result of political choices? This is the crucial question we need to answer if we are to design policies that will allow our economies and societies to flourish in this new environment.

These questions are dealt with early on in Haskel and Westlake’s book, but I think they deserve more analysis. One factor that Haskel and Westlake correctly point to is simply the continuing decrease in the cost of material stuff as a result of material innovation. This inevitably increases the value of services – delivered by humans – relative to material goods, a trend known as Baumol’s cost disease (a very unfortunate and misleading name, as I’ve discussed elsewhere). I think this has to be right, and it surely is an irreversible continuing trend.

But two other factors seem important too – both discussed by Haskel and Westlake, but without drawing out their full implications. One is the way the ICT industry has evolved, in a way that emphasises commodification of components and open standards. This has almost certainly been beneficial, and without it the platform companies that have depended on this huge material base would not have been able to arise and thrive in the same way. Was it inevitable that things turned out this way? I’m not sure, and it’s not obvious to me that if or when a new wave of ICT innovation arises (Majorana fermion based quantum computing, maybe?), to restart the now stuttering growth of computing power, this would unfold in the same way.

The other is the post-1980s business trend to “unbundling the corporation”. We’ve seen a systematic process by which the large, vertically integrated, corporations of the post-war period have outsourced and contracted out many of their functions. This process has been important in making intangible investments visible – in the days of the corporation, many activities (organisational development, staff training, brand building, R&D) were carried out within the firm, essentially outside the market economy – their contributions to the balance sheet being recognised only in that giant accounting fudge factor/balancing item, “goodwill”. As these functions become outsourced, they produce new, highly visible enterprises that specialise entirely in these intangible investments – management consultants, design houses, brand consultants and the like.

This process became supercharged as a result of the wave of globalisation we have just been through. The idea that one could unbundle the intangible and the material has developed in a context where manufacturing, also, could be outsourced to low-cost countries – particularly China. Companies now can do the market research and design to make a new product, outsource its manufacture, and then market it back in the UK. In this way the parts of the value of the product ascribed to the design and marketing can be separated from the value added by manufacturing. I’d argue that this has been a powerful driver of the intangible economy, as we’ve seen it in the developed world. But it may well be a transient.

On the one hand, the advantages of low-cost labour that drove the wave of manufacturing outsourcing will be eroded, both by a tightening labour market in far Eastern economies as they become more prosperous, and by a relative decline in the contribution of labour to the cost of manufacturing as automation proceeds. On the other hand, the natural tendency of those doing the manufacturing is to attempt to capture more of the value by doing their own design and marketing. In smartphones, for example, this road has already been travelled by the Korean manufacturer Samsung, and we see Chinese companies like Xiaomi rapidly moving in the same direction, potentially eroding the margins of that champion of the intangible economy, Apple.

One key driver that might reverse the separation of the material from the intangible is the realisation that this unbundling comes with a cost. The importance of transaction costs in Coase’s theory of the firm is highlighted in Haskel and Westlake’s book, in a very interesting chapter which considers the best form of organisation for a firm operating in the intangible economy. Some argue that a lowering of transaction costs through the application of IT renders the firm more or less redundant, and that we should, and will, move to a world where everyone is an independent entrepreneur, contracting out their skills to the highest bidder. As Haskel and Westlake point out, this hasn’t happened; organisations are still important, even in the intangible economy, and organisations need management, though the types of organisation and styles of management that work best may have evolved. And power matters: big organisations can exert power and influence political systems in ways that little ones cannot.

One type of friction that I think is particularly important relates to knowledge. The turn to market liberalism has been accompanied by a reification of intellectual property which I think is problematic. This is because the drive to consider chunks of protectable IP – patents – as tradable assets with an easily discoverable market value doesn’t really account for the synergies that Haskel and Westlake correctly identify as central to intangible assets. A single patent rarely has much value on its own – it gets its value as part of a bigger system of knowledge, some of it in the form of other patents, but much more of it as tacit knowledge held in individuals and networks.

The business of manufacturing itself is often the anchor for those knowledge networks. For an example of this, I’ve written elsewhere about the way in which the UK’s early academic lead in organic electronics didn’t translate into a business at scale, despite a strong IP position. The synergies with the many other aspects of the display industry, with its manufacturers and material suppliers already firmly located in the far east, were too powerful.

The unbundling strategy has its limits, and so too, perhaps, does the process of separating the intangible from the material. What is clear is that the way our economy currently deals with intangibles has led to wider problems, as Haskel and Westlake’s book makes clear. Intangible investments, for example into the R&D that underlies the new technologies that drive economic growth, do have special characteristics – spillovers and synergies – which lead our economies to underinvest in them, and that underinvestment must surely be a big driver of our current economic and political woes.

“Capitalism without Capital” really is as good as everyone is saying – it’s clear in its analysis, enormously helpful in clarifying assumptions and definitions that are often left unstated, and full of fascinating insights. It’s also rather a radical book, in an understated way. It’s difficult to read it without concluding that our current variety of capitalism isn’t working for us in the conditions we now find ourselves in, with growing inequality, stuttering innovation and stagnating economies. The remedies for this situation that the book proposes are difficult to disagree with; what I’m not sure about is whether they are far-reaching enough to make much difference.

Industrial strategy roundup

Last week saw the launch of the final report of the Industrial Strategy Commission, of which I’m a member. The full report (running to more than 100 pages) can be found here: Industrial Strategy Commission: Final report and executive summary. For a briefer, personal perspective, I wrote a piece for the Guardian website, concentrating on the aspects relating to science and innovation: The UK has the most regionally unbalanced economy in Europe. Time for change.

Our aim in doing this piece of work was to influence government policy, and that aim has shaped its pace and timing. The UK’s productivity problems have been in the news, following the Office for Budget Responsibility’s recognition that a return to pre-crisis levels of productivity growth is not happening any time soon. Both major political parties are now committed to the principle of industrial strategy; the current government will publish firm proposals in a White Paper, expected within the next few weeks. Naturally, we hope to influence those proposals, and to that end we’ve engaged over the summer with officials in BEIS and the Treasury.

Our formal launch event took place last week, hosted by the Resolution Foundation. The Business Secretary, Greg Clark, spoke at the event, an encouraging sign that our attempts to influence the policy process might have had some success. Even more encouragingly, the Minister said that he’d read the whole report.


The launch of the final report of the Industrial Strategy Commission, at the Resolution Foundation, London. From L to R, Lord David Willetts (Former Science Minister and chair of the Resolution Foundation), Diane Coyle (Member of the Industrial Strategy Commission), Dame Kate Barker (Chair of the Industrial Strategy Commission), Torsten Bell (Director of the Resolution Foundation), Greg Clark (Secretary of State for Business, Energy and Industrial Strategy), Richard Jones (Member of the Industrial Strategy Commission). Photo: Ruth Arnold.

Our aim was to help build a consensus across the political divide about industrial strategy – one strong conclusion we reached is that strategy will only be effective if it is applied consistently over the long term, beyond the normal political cycle. So it was good to see generally positive coverage in the press, from different political perspectives.

The Guardian focused on our recommendations about infrastructure: Tackle UK’s north-south divide with pledge on infrastructure, say experts. The Daily Telegraph, meanwhile, focused on productivity: Short-termism risks paralysing the UK’s industrial strategy, report warns: “Productivity was a major concern of the report, particularly the disparity between London and the rest of the UK…. Targeted investment to support high-value and technologically led industries was the best way to boost regional productivity, by generating clusters of research and development organisations outside of London and the South-East, the report suggested.”

The Independent headlined its report with our infrastructure recommendation: UK citizens should be entitled to ‘universal basic infrastructure’, says independent commission. It also highlighted some innovation recommendations: “The state should use its purchasing power to create new markets and drive innovation in healthcare and technology to tackle climate change, the commission said”.

Even the far-left paper the Morning Star was approving, though they wrongly reported that our commission had been set up by government (in fact, we are entirely independent, supported only by the Universities of Sheffield and Manchester). Naturally, they focused on our diagnoses of the current weaknesses of the UK economy, quoting comments on our work from Greg Clark’s Labour Party shadow, Rebecca Long Bailey: Use Autumn Statement to address long-term weaknesses in our economy, says Rebecca Long Bailey.

In the more specialist press, Research Fortnight concentrated on our recommendations for government structures, particularly the role of UKRI, the new umbrella organisation for research councils and funding agencies: Treasury should own industrial strategy, academics say.

There are a couple of other personal perspectives on the report from members of the commission. My Sheffield colleague Craig Berry focuses on the need for institutional reform in his blogpost Industrial strategy: here come the British, while Manchester’s Andy Westwood focuses on the regional dimensions of education policy (or lack of them, at present) in the Times Higher: Industrial Strategy Commission: it is time to address UK’s major regional inequalities.

Finally, Andy Westwood wrote a telling piece on the process itself, which resonated very strongly with all the Commission members: Why we wonk – a case study.

Should economists have seen the productivity crisis coming?

The UK’s post-financial crisis stagnation in productivity finally hit the headlines this month. Before the financial crisis, productivity grew at a steady 2.2% a year, but since 2009 growth has averaged only 0.3%. The Office for Budget Responsibility, in common with other economic forecasters, has confidently predicted the return of 2.2% growth every year since 2010, and every year it has been disappointed. This year, the OBR has finally faced up to reality – in its 2017 Forecast Evaluation Report, it highlights the failure of productivity growth to recover. The political consequences are severe – lower forecast growth means that there is less scope to relax austerity in public spending, and there is little hope that wage growth will recover from its current unprecedented stagnation.
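To see why this gap matters so much, it helps to compound the two growth rates. A rough sketch follows; treating the window as roughly eight years is an assumption on my part.

```python
# Rough compounding calculation: how far below its pre-crisis trend is
# productivity after roughly eight years of 0.3% rather than 2.2% growth?

pre_crisis_trend = 0.022    # pre-crisis productivity growth, 2.2% a year
post_crisis_actual = 0.003  # post-2009 average, 0.3% a year
years = 8                   # roughly 2009 to 2017 (an assumption)

trend_level = (1 + pre_crisis_trend) ** years
actual_level = (1 + post_crisis_actual) ** years
shortfall = 1 - actual_level / trend_level

print(f"After {years} years, productivity is ~{shortfall:.0%} below its pre-crisis trend")
```

A shortfall of that order, compounding year after year, is what drives the deterioration in the fiscal forecasts.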

Are the economists to blame for not seeing this coming? Aditya Chakrabortty thinks so, writing in a recent Guardian article: “A few days ago, the officials paid by the British public to make sure the chancellor’s maths add up admitted they had got their sums badly wrong…. The OBR assumed that post-crash Britain would return to normal and that normal meant Britain’s bubble economy in the mid-2000s. This belief has been rife across our economic establishment.”

The Oxford economist Simon Wren-Lewis has come to the defence of his profession. Explaining the OBR’s position, he writes “Until the GFC, macro forecasters in the UK had not had to think about technical progress and how it became embodied in improvements in labour productivity, because the trend seemed remarkably stable. So when UK productivity growth appeared to come to a halt after the GFC, forecasters were largely in the dark.”

I think this is enormously unconvincing. Economists are unanimous about the importance of productivity growth as the key driver of the economy, and agree that technological progress (sufficiently widely defined) is the key source of that productivity growth. Why, then, should macro forecasters not feel the need to think about technical progress? As a general point, I think that (many) economists should pay much more attention both to the institutions in which innovation takes place (for example, see my critique of Robert Gordon’s book) and to the particular features of the defining technologies of the moment (for example, the post-2004 slowdown in the rate of growth of computer power).

The specific argument here is that the steadiness of the productivity growth trend before the crisis justified the assumption that this trend would be resumed. But this assumption only holds if there was no reason to think anything fundamental in the economy had changed. It should have been clear, though, that the UK economy had indeed changed in the years running up to 2007, and that these changes were in a direction that should have at least raised questions about the sustainability of the pre-crisis productivity trend.

These changes in the economy – summed up as a move to greater financialisation – were what caused the crisis in the first place. But, together with broader ideological shifts connected with the turn to market liberalism, they also undermined the capacity of the economy to innovate.

Our current productivity stagnation undoubtedly has more than one cause. Simon Wren-Lewis, in his discussion of the problem, has focused on the effect of bad macroeconomic policy. It seems entirely plausible that bad policy has made the short-term hit to growth worse than it needed to be. But a decade on from the crisis, we’re not looking at a short-term hit anymore – stagnation is the new normal. My 2016 paper “Innovation, research and the UK’s productivity crisis” discusses in detail the deeper causes of the problem.

One important aspect is the declining research and development intensity of the UK economy. The R&D intensity of the UK economy fell from more than 2% in the early 1980s to a low point of 1.55% in 2004. This was at a time when other countries – particularly the fast-developing countries of the Far East – were significantly increasing their R&D intensities. The decline was particularly striking in business R&D and the applied research carried out in government laboratories; for details see my 2013 paper “The UK’s innovation deficit and how to repair it”.

What should have made this change particularly obvious is that it was, at least in part, the result of conscious policy. The historian of science Jon Agar wrote about Margaret Thatcher’s science policy in a recent article, “The curious history of curiosity driven research”. Thatcher and her advisors believed that the government should not be in the business of funding near-market research, and that if the state stepped back from these activities, private industry would step up and fill the gap: “The critical point was that Guise [Thatcher’s science policy advisor] and Thatcher regarded state intervention as deeply undesirable, and this included public funding for near-market research. The ideological desire to remove the state’s role from funding much applied research was the obverse of the new enthusiasm for ‘curiosity-driven research’.”

But stepping back from applied research by the state coincided with a new emphasis on “shareholder value” in public companies, which led industry to cut back on long-term investments with uncertain returns, such as R&D.

Much of this outcome was foreseeable: economic theory predicts that private sector actors will underinvest in R&D because of their inability to capture all of its benefits. Economists’ understanding of innovation and technological change is not yet good enough to quantify the effects of these developments. But, given that, as a result of policy changes, the UK had dismantled a good part of its infrastructure for innovation, a permanent decrease in its potential for productivity growth should not have been entirely unexpected.

The Office for Budget Responsibility’s Chart of Despond. From the press conference slides for the October 2017 Forecast Evaluation Report.

The second coming of industrial strategy

A month or so ago I was asked to do the after-dinner speech at the annual plenary meeting of the advisory bodies for the EPSRC (the UK’s government funding body for engineering and the physical sciences). My brief was to discuss what opportunities and pitfalls there might be for the UK Engineering and Physical Sciences community from the new prominence of industrial strategy in UK political discourse, and especially the regional economic growth agenda. Following some requests, here’s the text of my speech.

Thanks for asking me to talk a little bit about industrial strategy and the role of Universities in driving regional economic growth.

Let me start by talking about industrial strategy. This is an important part of the wider political landscape we’re dealing with at the moment, so it is worth giving it some thought.

If there’s a single signal of why it matters to us now, it’s the Industrial Strategy Challenge Fund, announced in last year’s Autumn Statement – a very welcome and quite substantial increase in the science budget, but tied in a very explicit way to industrial strategy.

What is that industrial strategy to which it is tied? We don’t know yet. We had a Green Paper in February (described as a “very green” green paper, which is civil service speak for being a bit half-baked). And we’re expecting a White Paper in “autumn” this year – i.e. before Christmas. I’ll come back to what I think it should be in a moment, but first…

Towards a coherent industrial strategy for the UK

What should a modern industrial strategy for the UK look like? This week the Industrial Strategy Commission, of which I’m a member, published its interim report – Laying the Foundations – which sets out some positive principles which we suggest could form the basis for an Industrial Strategy. This follows the government’s own Green Paper, Building our Industrial Strategy, to which we made a formal response here. I made some personal comments of my own here. The government is expected to publish its formal policy on Industrial Strategy, in a White Paper, in the autumn.

There’s a summary of our report on the website, and my colleague and co-author Diane Coyle has blogged about it here. Here’s my own perspective on the most important points.

Weaknesses of the UK’s economy

The starting point must be a recognition of the multiple and persistent weaknesses of the UK economy, which go back to the financial crisis and beyond. We still hear politicians and commentators asserting that the economy is fundamentally strong, in defiance both of the statistical evidence and the obvious political consequences we’ve seen unfolding over the last year or two. Now we need to face reality.

The UK’s economy has three key weaknesses. Its productivity performance is poor; there’s a big gap between the UK and competitor economies, and since the financial crisis productivity growth has been stagnant. This poor productivity performance translates directly into stagnant wage growth and a persistent government fiscal deficit.

There are very large disparities in economic performance across the country; the core cities outside London, rather than being drivers of economic growth, are (with the exception of Bristol and Aberdeen) below the UK average in GVA per head. De-industrialised regions and rural and coastal peripheries are doing even worse. The UK can’t achieve its potential if large parts of it are held back from fully contributing to economic growth.

The international trading position of the country is weak, with large and persistent deficits in the current account. Brexit threatens big changes to our trading relationships, so this is not a good place to be starting from.

Inadequacy of previous policy responses

The obvious corollary of the UK’s economic weakness has to be a realisation that whatever we’ve been doing up to now, it hasn’t been working. This isn’t to say that the UK hasn’t had policies for industry and economic growth – it has, and some of them have been good ones. But a collection of policies doesn’t amount to a strategy, and the results tell us that even the good policies haven’t been executed at a scale that makes a material difference to the problems we’ve faced.

A strategy should begin with a widely shared vision

A strategy needs to start with a vision of where the country is going, around which a sense of national purpose can be built. How is the country going to make a living? How is it going to meet the challenges it faces? This needs to be clearly articulated, and a consensus built that will last longer than one political cycle. It needs to be founded on a realistic understanding of the UK’s place in the world, and of the wider technological changes that are unfolding globally.

Big problems that need to be solved

We suggest six big problems that an industrial strategy should be built around.

  • Decarbonisation of the energy economy whilst maintaining affordability and security of the energy supply.
  • Ensuring adequate investment in infrastructure to meet current and future needs and priorities.
  • Developing a sustainable health and social care system.
  • Unlocking long-term investment – and creating a stable environment for long-term investments.
  • Supporting established and emerging high-value industries – and building export capacity in a changing trading environment.
  • Enabling growth in parts of the UK outside London and the South East in order to increase the UK’s overall productivity and growth.
Industrial strategy should be about getting the public and private sectors to work together in a way that simultaneously achieves these goals and creates economic value and growing productivity.

Some policy areas to focus on

The report highlights a number of areas in which current approaches fail. Here are a few:

  • our government institutions don’t work well enough; they are too centralised in London, and yet departments and agencies don’t cooperate enough with each other in support of bigger goals,
  • the approach government takes to cost-benefit analysis is essentially incremental; it doesn’t account for or aspire to transformative change, which means that it automatically concentrates resources in areas that are already successful,
  • our science and innovation policy doesn’t look widely enough at the whole innovation landscape, including translational research and private sector R&D, and the distribution of R&D capacity across the country,
  • our skills policy has been an extreme example of a more general problem of policy churn, with a continuous stream of new initiatives being introduced before existing policies have had a chance to prove their worth or otherwise.

The Industrial Strategy Commission

The Industrial Strategy Commission is a joint initiative of the Sheffield Political Economy Research Institute and the University of Manchester’s Policy@Manchester unit. My colleagues on the commission are the economist Diane Coyle, the political scientist Craig Berry and the policy expert Andy Westwood, and we’re chaired by Dame Kate Barker, a very distinguished business economist and former member of the Bank of England’s powerful Monetary Policy Committee. We benefit from very able research support from Tom Hunt and Marianne Sensier.

How Sheffield became Steel City: what local history can teach us about innovation

As someone interested in the history of innovation, I take great pleasure in seeing the many tangible reminders of the industrial revolution that are to be found where I live and work, in North Derbyshire and Sheffield. I get the impression that academics are sometimes a little snooty about local history, seeing it as the domain of amateurs and enthusiasts. If so, this would be a pity, because a deeper understanding of the histories of particular places could be helpful in providing some tests of, and illustrations for, the grand theories that are the currency of academics. I’ve recently read the late David Hey’s excellent “History of Sheffield”, and this prompted these reflections on what we can learn about the history of innovation from the example of this city, which became so famous for its steel industries. What can we learn from the rise (and fall) of steel in Sheffield?

Specialisation

    “Ther was no man, for peril, dorste hym touche.
    A Sheffeld thwitel baar he in his hose.”

    The Reeve’s Tale, Canterbury Tales, Chaucer.

When the Londoner Geoffrey Chaucer wrote these words, in the late 14th century, the reputation of Sheffield as a place that knives came from (thwitel = whittle: a knife) was already established. As early as 1379, 25% of the population of Sheffield were listed as metal-workers. This was a degree of focus that was early and well developed, but not completely exceptional – the development of medieval urban economies in response to widening patterns of trade was already leading towns to specialise on the basis of the particular advantages their location or natural resources gave them[1]. Towns like Halifax and Salisbury (and many others) were developing clusters in textiles, while other towns found narrower niches, like Burton-on-Trent’s twin trades of religious statuary and beer. Burton’s seemingly odd combination arose from the local deposits of gypsum [2]; what was behind Sheffield’s choice of blades?

I don’t think the answer to this question is at all obvious.

What hope against dementia?

An essay review of Kathleen Taylor’s book “The Fragile Brain: the strange, hopeful science of dementia”, published by OUP.

I am 56 years old; the average UK male of that age can expect to live to 82, at current levels of life expectancy. This, to me, seems good news. What’s less good, though, is that if I do reach that age, there’s about a 10% chance that I will be suffering from dementia, if the current prevalence of that disease persists. If I were a woman, at my age I could expect to live to nearly 85; the three extra years come at a cost, though. At 85, the chance of a woman suffering from dementia is around 20%, according to the data in the Alzheimer’s Society’s Dementia UK report. Of course, for many people of my age, dementia isn’t a focus for their own future anxieties, it’s a pressing everyday reality as they look after parents or elderly relatives who are among the 850,000 people in the UK currently living with dementia. I give thanks that I have been spared this myself, but it doesn’t take much imagination to see how distressing this devastating and incurable condition must be, both for the sufferers, and for their relatives and carers. Dementia is surely one of the most pressing issues of our time, so Kathleen Taylor’s impressive overview of the subject is timely and welcome.

There is currently no cure for the most common forms of dementia – such as Alzheimer’s disease – and in some ways the prospect of a cure seems further away now than it did a decade ago. The number of drugs which have been demonstrated to cure or slow down Alzheimer’s disease remains at zero, despite billions of dollars having been spent on research and drug trials, and it’s arguable that we understand less about the fundamental causes of these diseases than we thought we did a decade ago. If the prevalence of dementia remains unchanged, by 2051 the number of dementia sufferers in the UK will have increased to 2 million.

This increase is the dark side of the otherwise positive story of improving longevity, because the prevalence of dementia increases roughly exponentially with age. To return to my own prospects as a 56 year old male living in the UK, one can make another estimate of my remaining lifespan by adding the assumption that the recent increases in longevity continue. On the high longevity estimates of the Office for National Statistics, an average 56 year old man could expect to live to 88 – but at that age, there would be a 15% chance of suffering from dementia. For women, the prediction is even better for longevity – and worse for dementia – with a life expectancy of 91, but a 20% chance of dementia (there is a significantly higher prevalence of dementia for women than men at a given age, as well as systematically higher life expectancy). To look even further into the future, a girl turning 16 today can expect to live to more than 100 in this high longevity scenario – but that brings her chances of suffering dementia towards 50/50.
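The phrase “roughly exponentially” is doing a lot of work here, so a crude sketch of that scaling may help. The 2% starting prevalence at 65 and the six-year doubling time below are illustrative assumptions of mine, chosen to be broadly in line with the figures above rather than taken from the Dementia UK report, and the model inevitably breaks down at the oldest ages.

```python
# Crude illustration of dementia prevalence rising roughly exponentially with age.
# The parameters are illustrative assumptions, not figures from the Dementia UK report.

def dementia_prevalence(age, p_at_65=0.02, doubling_years=6.0):
    """Toy model: prevalence doubling every `doubling_years` beyond age 65, capped at 1."""
    if age < 65:
        return 0.0
    return min(1.0, p_at_65 * 2 ** ((age - 65) / doubling_years))

for age in (70, 75, 80, 85, 90):
    print(f"age {age}: ~{dementia_prevalence(age):.0%}")
```

The point is simply that each extra few years of life expectancy buys a disproportionately larger risk of dementia.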

What hope is there for changing this situation, and finding a cure for these diseases? Dementias are neurodegenerative diseases; as they take hold, nerve cells become dysfunctional and then die off completely. They have different effects, depending on which part of the brain and nervous system is primarily affected. The most common is Alzheimer’s disease, which accounts for more than half of dementias in the over-65s, and begins by affecting the memory, before progressing to a more general loss of cognitive ability. In Alzheimer’s, it is the parts of the brain cortex that deal with memory that atrophy, while in frontotemporal dementia it is the frontal lobe and/or the temporal cortex that are affected, resulting in personality changes and loss of language. In motor neurone diseases (of which the most common is ALS, amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease), it is the nerves in the brainstem and spinal cord that control the voluntary movement of muscles that are affected, leading to paralysis, breathing difficulties, and loss of speech. The mechanisms underlying the different dementias and other neurodegenerative diseases differ in detail, but they have features in common and the demarcations between them aren’t always well defined.

It’s not easy to get a grip on the science that underlies dementia – it encompasses genetics, cell and structural biology, immunology, epidemiology, and neuroscience in all its dimensions. Taylor’s book gives an outstanding and up-to-date overview of all these aspects. It’s clearly written, but it doesn’t shy away from the complexity of the subject, which makes it not always easy going. The book concentrates on Alzheimer’s disease, taking that story from the eponymous doctor who first identified the disease in 1901.

Dr Alois Alzheimer identified the brain pathology characteristic of Alzheimer’s disease – including the distinctive “amyloid plaques”. These consist of strongly associated, highly insoluble aggregates of protein molecules; subsequent work has identified both the protein involved and the structure it forms. The structure of amyloids – in which protein chains are bound together in sheets by strong hydrogen bonds – can be found in many different proteins (I discussed this a while ago on this blog, in Death, Life and Amyloids), and when these structures occur in biological systems they are usually associated with disease states. In Alzheimer’s, the particular protein involved is called Aβ; this is a fragment of a larger protein of unknown function called APP (for amyloid precursor protein). Genetic studies have shown that mutations involving the genes coding for APP, and for the enzymes that snip Aβ off the end of APP, lead to more production of Aβ, more amyloid formation, and increased susceptibility to Alzheimer’s disease. The story seems straightforward, then – more Aβ leads to more amyloid, and the resulting build-up of insoluble crud in the brain leads to Alzheimer’s disease. This is the “amyloid hypothesis”, in its simplest form.

But things are not quite so simple. Although the genetic evidence linking Aβ to Alzheimer’s is strong, there are doubts about the mechanism. It turns out that the link between the presence of amyloid plaques themselves and the disease symptoms isn’t as strong as one might expect, so attention has turned to the possibility that the neurotoxic agents are instead the precursors of the full amyloid structure – oligomers, in which a handful of Aβ molecules come together to form smaller aggregates. Yet the mechanism by which these oligomers might damage nerve cells remains uncertain.

Nonetheless, the amyloid hypothesis has driven a huge amount of scientific effort, and it has motivated the development of a number of potential drugs, which aim to interfere in various ways with the processes by which Aβ is formed. These drugs have, so far without exception, failed to work. Between 2002 and 2012 there were 413 trials of drugs for Alzheimer’s; the failure rate was 99.6%. The single successful new drug – memantine – is a cognitive enhancer which can relieve some symptoms of Alzheimer’s without modifying the cause of the disease. This represents a colossal investment of money – to be measured at least in the tens of billions of dollars – for no return so far.

In November last year, Eli Lilly announced that its anti-Alzheimer’s antibody, solanezumab, which was designed to bind to Aβ, had failed to show a significant effect in phase 3 trials. After the failure this February of another phase 3 trial, of Merck’s beta-secretase inhibitor verubecestat, designed to suppress the production of Aβ, the medicinal chemist and long-time commentator on the pharmaceutical industry Derek Lowe wrote: “Beta-secretase inhibitors have failed in the clinic. Gamma-secretase inhibitors have failed in the clinic. Anti-amyloid antibodies have failed in the clinic. Everything has failed in the clinic. You can make excuses and find reasons – wrong patients, wrong compound, wrong pharmacokinetics, wrong dose, but after a while, you wonder if perhaps there might not be something a bit off with our understanding of the disease.”

What is perhaps even more worrying is that the supply of drug candidates going through the earlier stages of the process – phase 1 and phase 2 trials – looks like it is starting to dry up. A 2016 review of the Alzheimer’s drug pipeline concludes that there are simply not enough drugs in phase 1 trials to give hope that new treatments will come through in sufficient numbers to survive the massive attrition rate we’ve seen in Alzheimer’s drug candidates (for a drug to get to market by 2025, it would need to be in phase 1 trials now). One has to worry that we’re running out of ideas.

One way we can get a handle on the disease is to step back from the molecular mechanisms, and look again at the epidemiology. It’s clear that there are some well-defined risk factors for Alzheimer’s, which point towards some of the other things that might be going on, and suggest practical steps by which we can reduce the risks of dementia. One of these risk factors is type 2 diabetes, which, according to data quoted by Taylor, increases the risk of dementia by 47%. Another is the presence of heart and vascular disease. The exact mechanisms at work here are uncertain, but on general principles these risk factors are not surprising. The human brain is a colossally energy-intensive organ, and anything that compromises the delivery of glucose and oxygen to its cells will place them under stress.

One other risk factor that Taylor does not discuss much is air pollution. There is growing evidence (summarised, for example, in a recent article in Science magazine) that poor air quality – especially the sub-micron particles produced in the exhausts of diesel engines – is implicated in Alzheimer’s disease. It’s been known for a while that environmental nanoparticles, such as the ultra-fine particulates formed in combustion, can lead to oxidative stress, inflammation and thus cardiovascular disease (I wrote about this here more than ten years ago – Ken Donaldson on nanoparticle toxicology). The relationship between pollution and cardiovascular disease would by itself indicate an indirect link to dementia, but there is in addition the possibility of a more direct link, if, as seems possible, some of these ultra-fine particles can enter the brain directly.

There’s a fairly clear prescription, then, for individuals who wish to lower their risk of suffering from dementia in later life. They should eat well, keep their bodies and minds well exercised, and as much as possible breathe clean air. Since these are all beneficial for health in many other ways, it’s advice that’s worth taking, even if the links with dementia turn out to be less robust than they seem now.

But I think we should be cautious about putting the emphasis entirely on individuals taking responsibility for improving their own lifestyles. Public health measures and sensible regulation have a huge role to play, and are likely to be very cost-effective ways of reducing what otherwise will be a very expensive burden of disease. It’s not easy to eat well, especially if you’re poor; the food industry needs to take more responsibility for the products it sells. And urban pollution can be controlled by the kind of regulation that leads to innovation – I’m increasingly convinced that the driving force for accelerating the uptake of electric vehicles is going to be pressure from cities like London and Paris, Los Angeles and Beijing, as the health and economic costs of poor air quality become harder and harder to ignore.

Public health interventions and lifestyle improvements do hold out the hope of lowering the projected numbers of dementia sufferers from that figure of 2 million by 2051. But, for those who are diagnosed with dementia, we have to hope for the discovery of a breakthrough in treatment, a drug that does successfully slow or stop the progression of the disease. What needs to be done to bring that breakthrough closer?

Firstly, we should stop overstating the progress we’re making now, and stop hyping “breakthroughs” that really are nothing of the sort. The UK’s newspapers seem to be particularly guilty of doing this. Take, for example, this report from the Daily Telegraph, headlined “Breakthrough as scientists create first drug to halt Alzheimer’s disease”. Contrast that with the reporting in the New York Times of the very same result – “Alzheimer’s Drug LMTX Falters in Final Stage of Trials”. Newspapers shouldn’t be in the business of peddling false hope.

Another type of misguided optimism comes from Silicon Valley’s conviction that all that is required to conquer death is a robust engineering “can-do” attitude. “Aubrey de Grey likes to compare the body to a car: a mechanic can fix an engine without necessarily understanding the physics of combustion”, a recent article on Silicon Valley’s quest to live for ever comments about the founder of the Valley’s SENS Foundation (the acronym stands for Strategies for Engineered Negligible Senescence). Removing intercellular junk – amyloids – is point 6 in the SENS Foundation’s 7 point plan for eternal life.

But the lesson of several hundred failed drug trials is that we do need to understand the science of dementia more before we can be confident of treating it. “More research is needed” is about the lamest and most predictable thing a scientist can ever say, but in this case it is all too true. Where should our priorities lie?

It seems to me that hubristic mega-projects to simulate the human brain aren’t going to help at all here – they consider the brain at too high a level of abstraction to help disentangle the complex combination of molecular events that is disabling and killing nerve cells. We need to take into account the full complexity of the biological environments that nerve cells live in, surrounded and supported by glial cells like astrocytes, whose importance may have been underrated in the past. The new genomic approaches have already yielded powerful insights, and techniques for imaging the brain in living patients – magnetic resonance imaging and positron emission tomography – are improving all the time. We should certainly sustain the hope that new science will unlock new treatments for these terrible diseases, but we need to do the hard and expensive work to develop that science.

In my own university, the Sheffield Institute for Translational Neuroscience focuses on motor neurone disease/ALS and other neurodegenerative diseases, under the leadership of an outstanding clinician scientist, Professor Dame Pam Shaw. The University, together with Sheffield’s hospital, is currently raising money for a combined MRI/PET scanner to support this and other medical research work. I’m taking part in one fundraising event in a couple of months with many other university staff – attempting to walk 50 miles in less than 24 hours. You can support me in this through this JustGiving page.

Some books I read this year

Nick Lane – The Vital Question: energy, evolution and the origins of complex life

This is as good as everyone says it is – well written and compelling. I particularly appreciated the focus on energy flows as the driver for life, and the way the book gives the remarkable chemiosmotic hypothesis the prominence it deserves. The hypothesis Lane presents for the way life might have originated on earth is concrete and (to me) plausible, and, what’s more important, it suggests some experimental tests.

Todd Feinberg and Jon Mallatt – The Ancient Origins of Consciousness: how the brain created experience

How many organisms can be said to be conscious, and when did consciousness emerge? Feinberg and Mallatt’s answers are bold: all vertebrates are conscious, and in all probability so are cephalopods and some arthropods. In their view, consciousness evolved in the Cambrian explosion, associated with an arms race between predators and prey, and driven by the need to integrate different forms of long-distance sensory perception to produce a model of an organism’s environment. Even if you don’t accept the conclusion, you’ll learn a great deal about the evolution of nervous systems and the way sense perceptions are organised in many different kinds of organisms.

David MacKay – Information Theory, Inference, and Learning Algorithms

This is a text-book, so not particularly easy reading, but it’s an unusually rich and individual one.

Physical limits and diminishing returns of innovation

Are ideas getting harder to find? This question is asked in a preprint with this title by the economists Bloom, Jones, Van Reenen and Webb, who attempt to quantify decreasing research productivity, showing for a number of fields that it is currently taking more researchers to achieve the same rate of progress. The paper is discussed in blogposts by Diane Coyle, who notes sceptically that the same thing was being said in 1983, and by Alex Tabarrok, who is more depressed.

Given the slowdown in productivity growth in the developed nations, which has steadily fallen from about 2.9% a year in 1970 to about 1.2% a year now, the notion is certainly plausible. But the attempt in the paper to quantify the decline is, I think, so crude as to be pretty much worthless – except inasmuch as it demonstrates how much growth economists need to understand the nature of technological innovation at a higher level of detail and particularity than is reflected in their current model-building efforts.

The first example is the familiar one of Moore’s law in semiconductors, where over many decades we have seen exponential growth in the number of transistors on an integrated circuit. The authors estimate that the number of researchers needed to achieve this has increased by a factor of 25 or so since 1970 (an estimate obtained by dividing the R&D expenditure of the major semiconductor companies by an average researcher wage). This is very broadly consistent with a corollary of Moore’s law (sometimes called Rock’s Law), which states that the capital cost of new generations of semiconductor fabs also grows exponentially, with a four year doubling time; this cost is now in excess of $10 billion. A large part of this is actually the capitalised cost of the R&D that goes into developing the new tools and plant for each generation of ICs.
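For concreteness, here is a rough sketch of the two calculations mentioned in that paragraph. Every input number is an illustrative assumption of mine, chosen to be consistent with the factor of 25, rather than a figure taken from the paper.

```python
# Back-of-the-envelope versions of the two calculations above.
# All inputs are illustrative assumptions, not the figures used by Bloom et al.

# 1. "Effective researchers": deflate semiconductor R&D spending by an average wage.
rd_spend_1971, wage_1971 = 0.25e9, 25e3   # assumed industry R&D spend and wage, USD, 1971
rd_spend_2014, wage_2014 = 30e9, 120e3    # assumed values for 2014, USD

researchers_1971 = rd_spend_1971 / wage_1971
researchers_2014 = rd_spend_2014 / wage_2014
print(f"Effective researchers up by a factor of ~{researchers_2014 / researchers_1971:.0f}")

# 2. Rock's Law: fab capital cost doubling roughly every four years.
doubling_years = 4
growth_per_decade = 2 ** (10 / doubling_years)
print(f"Rock's Law implies fab costs rise ~{growth_per_decade:.0f}x per decade")
```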

    This increasing expense simply reflects the increasing difficulty of creating intricate, accurate and reproducible structures on ever-decreasing length scales. The problem isn’t that ideas are harder to find; it’s that as these length scales approach the atomic, many more problems arise, and each needs more effort to solve. It’s the fundamental difficulty of the physics that leads to diminishing returns, and at some point a combination of the physical barriers and the economics will first slow and then stop further progress in miniaturising electronics using this technology.

    For the second example, it isn’t so much physical barriers as biological ones that lead to diminishing returns, but the effect is the same. The green revolution – a period of big increases in the yields of key crops like wheat and maize – was driven by creating varieties able to use large amounts of artificial fertiliser and to channel much more of their growth into the useful parts of the plant. Modern wheat, for example, has very short stems – but there’s a limit to how short you can make them, and that limit has probably been reached by now. So R&D efforts are likely to be focused on areas other than pure yield increases – on disease resistance and on tolerance of poorer growing conditions (the latter likely to become more important as the climate changes, of course).

    For their third example, the economists focus on medical progress. I’ve written before about the difficulties of the pharmaceutical industry, which has its own exponential law of progress. Unfortunately this one goes the wrong way, with the cost of developing new drugs increasing exponentially with time. The authors focus on cancer, and try to quantify declining returns by correlating research effort, as measured by papers published, with improvements in the five-year cancer survival rate.

    Again, I think the basic notion of diminishing returns is plausible, but this attempt to quantify it makes no sense at all. One obvious problem is that there are very long and variable lag times between when research is done, through the time it takes to test drugs and get them approved, and when they come into wide clinical use. To give one example, the ovarian cancer drug Lynparza was approved in December 2014, so it is conceivable that its effects might start to show up in five-year survival rates some time after 2020. But the research it was based on was published in 2005. So the hope that there is any kind of simple “production function” linking an “input” of researchers’ time to an “output” of improved health (or faster computers, or increased productivity) is a non-starter*.

    The heart of the paper is the argument that an increasing number of researchers are producing fewer “ideas”. But what can they mean by “ideas”? As we all know, there are good ideas and bad ideas, profound ideas and trivial ideas, ideas that really do change the world, and ideas that make no difference to anyone. The “representative idea” assumed by the economists really isn’t helpful here, and rather than clarifying the concept in the first place, they redefine it to fit their equation, stating, with some circularity, that “ideas are proportional improvements in productivity”.

    Most importantly, the value of an idea depends on the wider technological context in which it is developed. People claim that Leonardo da Vinci invented the helicopter, but even if he’d drawn an accurate blueprint of a Chinook, it would have had no value without all the supporting scientific understanding and technological innovations that were needed to make building a helicopter a practical proposition.

    Clearly, at any given time there will be many ideas. Most of these will be unfruitful, but every now and again a combination of ideas will come together with a pre-existing technical infrastructure and a market demand to make a viable technology. For example, integrated circuits emerged in the 1960s, when developments in materials science and manufacturing technology (especially photolithography and the planar process) made it possible to realise monolithic electronic circuits. Driven by customers with deep pockets and demanding requirements – the US defense industry – many refinements and innovations led to the first microprocessor in 1971.

    Given a working technology and a strong market demand to create better versions of that technology, we can expect a period of incremental improvement, often very rapid. A constant rate of fractional improvement leads, of course, to exponential growth in quality, and that’s what we’ve seen over many decades for integrated circuits, giving us Moore’s law. The regularity of this improvement shouldn’t make us think it is automatic, though – it represents many brilliant innovations, coordinated and orchestrated so that, in combination, the overall rate of improvement is maintained. In a sense, the rate of innovation is set by the market, and the resources devoted to innovation are increased to maintain that rate.
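    To spell out the arithmetic behind that claim (this is just the standard compound-growth identity, with a purely illustrative improvement rate, not a number from the paper): a constant fractional improvement f per year gives

        \[ Q_n = Q_0 (1+f)^n, \qquad t_{\mathrm{double}} = \frac{\ln 2}{\ln(1+f)}, \]

    so, for example, f = 0.4 per year would give a doubling time of ln 2 / ln 1.4 ≈ 2 years, roughly the cadence we associate with Moore’s law.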

    But exponential growth can never be sustained in a physical (or biological) system – some limit must always be reached. From about 1750 to 1850 the efficiency of steam engines increased exponentially, but despite many further technical improvements this rate of progress slowed down in the second half of the 19th century: the second law of thermodynamics, through Carnot’s theorem, puts a fundamental upper limit on efficiency, and as that limit is approached, diminishing returns set in. Likewise, the atomic scale of matter puts fundamental limits on how far the CMOS technology of our current integrated circuits can be pushed to smaller and smaller dimensions, and as those limits are approached we should expect the same sort of diminishing returns.
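    For reference, the Carnot limit referred to here is

        \[ \eta \;\le\; \eta_{\mathrm{Carnot}} \;=\; 1 - \frac{T_c}{T_h}, \]

    where T_h and T_c are the absolute temperatures of the boiler and the exhaust. With illustrative round numbers (mine, chosen only for the sake of the example) of T_h ≈ 500 K and T_c ≈ 300 K, the ceiling is about 40%, however ingenious the engineering.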

    Economic growth didn’t come to an end in 1850 when the exponential rise in steam engine efficiencies started to level out, though. Entirely new technologies were developed – electricity, the chemical industry, the motor car powered by the internal combustion engine – which went through the same cycle of incremental improvement and eventual saturation.

    The question we should be asking now is not whether the technologies that have driven economic growth in recent years have reached the point of diminishing returns – if they have, that is entirely natural and to be expected. It is whether enough entirely new technologies are now in their infancy, ready to take off into the sustained incremental growth that has driven the economy in previous technology waves. Perhaps solar energy is in that state now; quantum computing, perhaps, hasn’t got there yet, as it isn’t clear either how the basic idea can be implemented or whether there is a market to drive it.

    What we do know is that growth is slowing, and has been doing so for some years. To this extent, the paper highlights a real problem. But a correct diagnosis of the ailment, and the design of useful policy prescriptions, are going to demand a much more realistic understanding of how innovation works.

    * If one insists on trying to build a model, the “production function” would need to be not a simple function but a functional, integrating functions representing different types of research and development effort over long periods of time.
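    To make that footnote concrete, one possible form (the notation and the lag kernels here are my own sketch, not anything from the paper) would be

        \[ A(t) \;=\; \int_{-\infty}^{t} \sum_i K_i(t-\tau)\, R_i(\tau)\, \mathrm{d}\tau, \]

    where R_i(τ) is the effort devoted to research of type i at time τ, and K_i is a lag kernel describing how long work of that type takes to show up in productivity – decades, in the Lynparza example above.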