Climate change: what do we know for sure, and what is less certain?

In another post inspired by my current first year physics course, The Physics of Sustainable Energy (PHY123), I suggest how a physicist might think about climate change.

Climate change is moving up the political agenda again; in the UK, recent floods have once more raised the question of whether extreme weather events can be directly attributed to human-made climate change, and whether such events are likely to become more frequent as a result of continuing human-induced global warming. One UK Energy Minister – Michael Fallon – described the climate change argument as “theology” in this interview. Of course, theology is exactly what it’s not. It’s science, based on theory, observation and modelling; some of the issues are very well understood, and some remain more uncertain. There’s an enormous amount of material in the 1536 pages of the IPCC’s 5th assessment report (available here). But how should we navigate these very complex arguments in a way which makes clear what we know for sure, and what remains uncertain? Here’s my suggestion for a route-map.

My last post talked about how, after 1750 or so, we became dependent on fossil fuels. Since that time we have collectively burned about 375 gigatonnes of carbon – what effect has burning all that carbon had on the environment? The straightforward answer is that there is now a lot more carbon dioxide in the atmosphere than there was in pre-industrial times. For the thousand years before the industrial revolution, the carbon dioxide content of the atmosphere was roughly constant at around 280 parts per million. Since the 19th century it has been rising significantly; it’s currently just a couple of ppm short of 400, and is still increasing by about 2 ppm per year.
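As a rough consistency check – a back-of-envelope sketch of my own, not something taken from the IPCC report – one can convert the rise in concentration into a mass of carbon, using only the mass of the atmosphere and some molar masses:

    # Sketch: how much carbon does a ~120 ppm rise in CO2 represent?
    M_ATM = 5.15e18    # mass of the atmosphere, kg (standard value)
    M_AIR = 28.97e-3   # mean molar mass of dry air, kg/mol
    M_C = 12.01e-3     # molar mass of carbon, kg/mol

    moles_air = M_ATM / M_AIR                      # ~1.78e20 mol of air
    gtc_per_ppm = 1e-6 * moles_air * M_C / 1e12    # ~2.13 GtC per ppm of CO2

    rise_ppm = 400 - 280                           # pre-industrial to today
    print(rise_ppm * gtc_per_ppm)                  # ~256 GtC added to the air

Roughly 260 gigatonnes of carbon now resident in the atmosphere, against the 375 gigatonnes emitted from fossil fuels alone – entirely consistent with a fossil fuel origin, with the balance having been taken up by the oceans and the biosphere.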

This 40% increase in carbon dioxide concentration is not in doubt. But how can we be sure it’s associated with burning fossil fuels? Continue reading “Climate change: what do we know for sure, and what is less certain?”

The UK’s innovation deficit and how to repair it

I’ve written a working paper about the long-term decline in the research and development intensity of the UK’s economy, which has just been published on the website of the Sheffield Political Economy Research Institute here. It brings together many of the themes I’ve been writing about on this blog in the last few years. Here is its introduction.

Technological innovation is one of the major sources of long-term economic growth in developed economies. Since 1945 countries like the UK have enjoyed a remarkable run of sustained growth and improvement of living standards, associated with the widespread uptake of new technologies – cars and aircraft, consumer goods, computers and communication devices, effective new medicines, all underpinned by the development of new materials, chemicals and electronics. Now the UK is undergoing its deepest and most persistent period of slow or no growth for more than a hundred years. Is there any connection between this growth crisis and innovation – or lack of it?

The UK is a much less research and development intensive economy than it was thirty years ago, and is less R&D intensive than most of its rivals; this R&D deficit is most prominent in applied research funded and carried out in the business sector, and in government funded strategic research. Innovation can and does happen without research and development as understood in its conventional sense; innovation through organisational change and novelty in marketing, often using existing technology in new ways, can make significant contributions to economic growth. But at the technological frontier the development of new products and processes requires targeted investment of people and resources, and it is the capacity to make such efforts that is lost as research and development capabilities are run down. This loss of innovative capacity is not an accident; it is a direct consequence of the changing nature of the UK’s political economy. In the private sector, a growing structural trend to short-termism driven by the excessive financialisation of the economy, and an emphasis on “unlocking shareholder value”, has led to an abandonment of longer-range applied research. The privatisation of sectors such as energy has brought these pressures for short-termism into areas previously regarded as strategically important for the state. Together, these factors have led to the systematic liquidation of a significant part of the national infrastructure – both public and private – for applied and mission-oriented research.

Research and development are global activities; the benefits of new technologies developed in one part of the world diffuse across national boundaries, so R&D needs to be considered in a global as well as a national context. The declining R&D intensity of the UK displays in the most acute form a wider problem – highly financialised, market-centred capitalism, while it is good at delivering some types of incremental, consumer-focused innovation, doesn’t favour more radical innovation which requires larger investments over longer time horizons. We are currently seeing serious global slowdowns in innovation in the pharmaceutical and energy sectors. The former is a particular problem for the UK, because it has a strong specialization in the pharmaceutical sector. The slowdown in energy innovation is a problem for everybody on the planet.

The example of energy illustrates why the development of new technology is so important. We depend existentially on technology – to deliver, for example, the cheap and abundant energy on which our economies run. But the technology we have isn’t good enough; the cost of extracting fossil fuels from the earth rises as the most accessible reserves are exhausted, and the consequences of burning fossil fuels for the stability of the earth’s climate become ever more apparent. We need better technologies not just to ensure the continuously rising living standards we’ve come to expect, but because if we don’t replace our currently unsustainable technologies with better ones, living standards will fall.

We should not be fatalistic about a slowing down of innovation in crucial technology areas, either nationally or globally. The slowdown isn’t a consequence of some unalterable law of nature, nor is it because we have already “taken the low-hanging fruit”. Innovation is slowing because we have collectively chosen to devote fewer resources to it. We need as a society to recognize the problem, recognize that current policy for innovation isn’t delivering, and take responsibility for changing the current situation.

The rest of the paper can be downloaded here.

The UK’s nuclear new build: too expensive, too late

Seven years after a change in UK energy policy called for a new generation of nuclear power stations to be built, today’s announcement of a deal with the French energy company EDF to build two nuclear reactors at Hinkley Point marks a long overdue step forward. But the deal is a spectacularly bad one for the UK. It locks us into high energy prices for a generation, it yields an unacceptable degree of control over a strategic asset to a foreign government, it risks sacrificing the opportunity nuclear new build might have given us to rebuild our industrial base, and it will cost us tens of billions of pounds more than necessary. It’s all to preserve political appearances, to allow the government to appear to be abiding by its unwisely made commitments.

The UK is committed to privatised energy markets and to no subsidies for nuclear power, and it is unwilling to issue new government debt to pay for infrastructure. This opposition to state involvement in energy seems to apply only to the UK state, though, as this deal demonstrates. EDF is majority owned by the French Government, while the Chinese nuclear companies China General Nuclear and China National Nuclear Corporation, who will be co-investing in the project, are wholly owned and controlled by the Chinese government. The price of this investment (as reported by the FT’s Nick Butler) is some as yet unspecified degree of operational involvement. It seems extraordinary that the government is prepared to allow such a degree of involvement in a strategic asset by the agents of a foreign state.

The deal will not, it’s true, be directly subsidised by the UK government (except, and not insignificantly, for an implicit subsidy in the form of a disaster insurance guarantee). Instead future electricity consumers will pay the subsidy, in the form of a price guarantee set at around twice the current wholesale price of electricity, to rise with inflation over 35 years.

The quoted price for two European Pressurised Reactors (EPRs) of 1.6 GWe capacity each is £16 billion. The first reactor of this design to be built, at Olkiluoto in Finland, started out with a price of €3 billion, but after delays and overruns the current estimate is €8.5 billion. So the quoted price of £16 billion for the pair – about €9.45 billion per reactor – bakes in this cost overrun and adds a little bit more for luck. How much of this £16 billion will come back to the UK in the form of jobs and work for UK industry? It is difficult to say, because no commitments seem to have been made that a certain fraction of the work should come to the UK. Given that the UK government isn’t paying for the reactors, it doesn’t have a lot of leverage on this.

How bad a deal is this in monetary terms? The strike price is £92.50 per MWh, falling to £89.50 if EDF goes ahead with another pair of reactors at Sizewell, fully indexed to the consumer price index. A recent OECD report (PDF) gives some idea of costs; for reactors of this type operating in France it estimates fuel cycle costs at $9.33 per MWh, operations and maintenance at $16 per MWh, with $0.05 per MWh needing to be set aside to cover the final costs of decommissioning. Taken together this comes to roughly £16 per MWh, leaving £76.50 per MWh to cover the cost of the £16 billion of capital it takes to build the plant. Assuming EDF manage to run their 3.2 GW of capacity at a 90% load factor, this gives them and their investors £1.9 billion a year, or a total return of £67 billion, fully protected against inflation, for their £16 billion investment.
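To make the arithmetic explicit, here it is as a short Python sketch; the inputs are the figures quoted above, except the dollar–sterling exchange rate of about $1.58/£, which is my assumption for late 2013:

    # Sketch of the Hinkley strike price arithmetic (figures from the post)
    usd_per_gbp = 1.58                               # assumed 2013 exchange rate
    costs_gbp = (9.33 + 16.0 + 0.05) / usd_per_gbp   # fuel + O&M + decommissioning, ~£16/MWh

    margin = 92.50 - costs_gbp                # ~£76.50/MWh left to service capital

    mwh_per_year = 3.2e3 * 0.90 * 8766        # 3.2 GW at 90% load factor, ~25m MWh/yr
    annual_return = margin * mwh_per_year / 1e9
    print(annual_return, annual_return * 35)  # ~£1.9bn/yr, ~£68bn over 35 years

(The small difference from the £67 billion quoted above is just rounding.)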

How much would it cost if the UK government itself decided to invest in the plant? The UK government can currently borrow money for 30–40 years at 3.5%. A fully amortised loan of £16 billion over 35 years would cost £28 billion. Unlike the deal agreed with EDF and the Chinese, these borrowing costs would not rise with inflation. Even without accounting for inflation, the UK Government’s ideological opposition to borrowing money to pay for infrastructure carries a price tag of around £40 billion, which will have to be paid by UK industry and consumers over the next 35 years.
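The £28 billion figure is just the standard annuity formula for a fully amortised loan, applied to the numbers above:

    # Sketch: total cost of a fully amortised 35-year government loan of £16bn
    principal, r, n = 16.0, 0.035, 35         # £bn, annual interest rate, years

    annual_payment = principal * r / (1 - (1 + r) ** -n)   # ~£0.80bn per year
    print(annual_payment * n)                              # ~£28bn in total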

I do think we need a new generation of nuclear power stations in the UK, but this model for achieving that seems unsustainable. It’s time for a complete rethink. For more background on why we are where we are, see my last post, Moving beyond nuclear power’s troubled history.

Update at 8.40am 21/10: the Energy Secretary, Ed Davey, said on Radio 4 this morning that there was a commitment for 57% of the value of the deal to be spent with UK firms. This isn’t mentioned in the press release.

Update 2, 22/10: The CEO of EDF was reported yesterday as saying that 57% involvement of UK firms wasn’t a commitment, but an upper limit. So I think my original comments stand.

Decelerating change in the pharmaceutical industry

Medical progress will have come to a complete halt by the year 2329. I reach this anti-Kurzweilian conclusion from a 2012 paper – Diagnosing the decline in pharmaceutical R&D efficiency – which demonstrates that, far from showing an accelerating rate of innovation, the pharmaceutical industry has for the last 60 years been seeing exponentially diminishing returns on its research and development effort. At the date of the anti-singularity, the cost of developing a single new drug will have exceeded the world’s total economic output. The extrapolation is ludicrous, of course, but the problem is not. By 2010 it took an average of $2.17 billion in R&D spending to introduce a single new drug, including the cost of all the failures. This cost per new drug has been following a kind of reverse Moore’s law, increasing exponentially in real terms at a rate of 7.6% a year since 1950, corresponding to a doubling time of a bit more than 9 years (see this plot from the paper cited above). This trend is puzzling – our knowledge of life sciences has been revolutionised during this period, while the opportunities provided by robotics and IT, allowing approaches like rapid throughput screening and large scale chemoinformatics, have been eagerly seized on by the industry. Despite all this new science and enabling technology, the anti-Moore’s law trend of diminishing R&D returns continues inexorably.
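The doubling time follows directly from the growth rate: for exponential growth at a rate r per year, the doubling time is ln 2 / ln(1 + r), as a two-line check confirms:

    import math

    # Doubling time implied by a 7.6%/year real-terms rise in R&D cost per drug
    growth = 0.076
    print(math.log(2) / math.log(1 + growth))   # ~9.5 years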

This should worry us. The failure to find effective therapies for widespread and devastating conditions – Alzheimer’s, to take just one example – leads to enormous human suffering. The escalating cost of developing new drugs is ultimately passed on to society through their pricing, leading to strains on national healthcare systems that will become more acute as populations age. As a second-order effect, scientists should be concerned in case the drying up of medical innovation casts doubt on some of the justifications for government spending on fundamental life sciences research. And, of course, a healthy and innovative pharmaceutical industry is itself important for economic growth, particularly here in the UK, where it remains the one truly internationally competitive high technology sector of the economy. So what can be done to speed up innovation in this vital sector? Continue reading “Decelerating change in the pharmaceutical industry”

We sold out our energy future

Everyone should know that the industrial society we live in depends on access to plentiful, convenient, cheap energy – the last two hundred years of rapid economic growth have been underpinned by the large scale use of fossil fuels. And everyone should know that the effect of burning those fossil fuels has been to markedly increase the carbon dioxide content of the atmosphere, resulting in a changing climate, with potentially dangerous but still uncertain consequences. But a transition from fossil fuels to low carbon sources of energy isn’t going to take place quickly; existing low carbon energy sources are expensive and difficult to scale up. So rather than pushing on with the politically difficult, slow and expensive business of deploying current low carbon energy sources, why don’t we wait until technology brings us a new generation of cheaper and more scalable low carbon energy? Presumably, one might think, since we’ve known about these issues for some time, we’ve been spending the last twenty years energetically doing research into new energy technologies?

Alas, no. As my graph shows, the decade from 1980 saw a worldwide decline in the fraction of GDP that major industrial countries devoted to government funded energy research, development, and demonstration, with only Japan sustaining anything like its earlier intensity of energy research into the 1990s. It was only in the second half of the decade after 2000 that we began to see a recovery, though in the UK and the USA a rapid upturn following the 2007 financial crisis has fallen away again. A rapid post-2000 growth of energy RD&D in Korea is an exception to the general picture. There’s a good discussion of the situation in the USA in a paper by Kammen and Nemet – Reversing the incredible shrinking energy R&D budget. But the largest fall by far was in the UK, where at its low point, in 2003, the fraction of national resources devoted to energy RD&D fell to an astonishing 0.2% of its value at the 1981 high point.

Government spending on energy research, development and demonstration. Data: International Energy Agency

Continue reading “We sold out our energy future”

When technologies can’t evolve

In what way, and on what basis, should we attempt to steer the development of technology? This is the fundamental question that underlies at least two discussions that I keep coming back to here – how to do industrial policy and how to democratise science. But some would simply deny the premise of these discussions, and argue that technology can’t be steered, and that the market is the only effective way of incorporating public preferences into decisions about technology development. This is a hugely influential point of view which goes with the grain of the currently hegemonic neo-liberal, free market dominated world-view. It originates in the arguments of Friedrich Hayek against the 1940s vogue for scientific planning, it incorporates Michael Polanyi’s vision of an “independent republic of science”, and it fits the view of technology as an autonomous agent which unfolds with a logic akin to that of Darwinian evolution – what one might call the “Wired” view of the world, eloquently expressed in Kevin Kelly’s recent book “What Technology Wants”. It’s a coherent, even seductive, package of beliefs; although I think it’s fatally flawed, it deserves serious examination.

Hayek’s argument against planning (his 1945 article The Use of Knowledge in Society makes this very clearly) rests on two insights. Firstly, he insists that the relevant knowledge that would underpin the rational planning of an economy or a society isn’t limited to scientific knowledge, and must include the tacit, unorganised knowledge of people who aren’t experts in the conventional sense of the word. This kind of knowledge, then, can’t rest solely with experts, but must be dispersed throughout society. Secondly, he claims that the most effective – perhaps the only – way in which this distributed knowledge can be aggregated and used is through the mechanism of the market. If we apply this kind of thinking to the development of technology, we’re led to the idea that technological development would happen in the most effective way if we simply allow many creative entrepreneurs to try different ways of combining technologies and to develop new ones on the basis of existing scientific knowledge and whatever developments of that knowledge they are able to make. When the resulting innovations are presented to the market, the ones that survive will, by definition, be the ones that best meet human needs. Stated this way, the connection with Darwinian evolution is obvious.

One objection to this viewpoint is essentially moral in character. The market certainly aggregates the preferences and knowledge of many people, but it necessarily gives more weight to the views of people with more money, and the distribution of money doesn’t necessarily coincide with the distribution of wisdom or virtue. Some free market enthusiasts simply assert the contrary, following Ayn Rand. There are, though, some much less risible moral arguments in favour of free markets which emphasise the positive virtues of pluralism, and even those opponents of libertarianism who point to the naivety of believing that this pluralism can be maintained in the face of highly concentrated economic and political power need to answer important questions about how pluralism can be maintained in any alternative system.

What should be less contentious than these moral arguments is an examination of the recent history of technological innovation. This shows that the technologies that made the modern world – in all their positive and negative aspects – are largely the result of the exercise of state power, rather than of the free enterprise of technological entrepreneurs. New technologies were largely driven by large scale interventions by the Warfare States that dominated the twentieth century. The military-industrial complexes of these states began long before Eisenhower popularised this name, and existed not just in the USA, but in Wilhelmine and Nazi Germany, in the USSR, and in the UK (David Edgerton’s “Warfare State: Britain 1920–1970” gives a compelling reinterpretation of modern British history in these terms). At the beginning of the century, for example, the Haber-Bosch process for fixing nitrogen was rapidly industrialised by the German chemical company BASF. It’s difficult to think of a more world-changing innovation – more than half the world’s population wouldn’t now be here if it hadn’t been for the huge growth in agricultural productivity that artificial fertilisers made possible. However, the importance of this process for producing the raw materials for explosives ensured that the German state took much more than a spectator’s role. Vaclav Smil, in his book Enriching the Earth, quotes an estimate for the development cost of the Haber-Bosch process of US$100 million at 1919 prices (roughly US$1 billion in current money, equating to about $19 billion in terms of its share of the economy at the time), of which about half came from the government. Many more recent examples of state involvement in innovation are cited in Mariana Mazzucato’s pamphlet The Entrepreneurial State. Perhaps one of the most important stories is the role of state spending in creating the modern IT industry; computing, the semiconductor industry and the internet are all largely the outcome of US military spending.

Of course, the historical fact that the transformative, general purpose technologies that were so important in driving economic growth in the twentieth century emerged as a result of state sponsorship doesn’t by itself invalidate the Hayekian thesis that innovation is best left to the free market. To understand the limitations of this picture, we need to return to Hayek’s basic arguments. Under what circumstances does the free market fail to aggregate information in an optimal way? People are not always rational economic actors – they know what they want and need now, but they aren’t always good at anticipating what they might want if things they can’t imagine become available, or what they might need if conditions change rapidly. There’s a natural cognitive bias to give more weight to the present, and less to an unknowable future. Just like natural selection, the optimisation process that the market carries out is necessarily local, not global.

So when does the Hayekian argument for leaving innovation to the market not apply? The free market works well for evolutionary innovation – local optimisation is good at solving present problems with the tools at hand now. But it fails to be able to mobilise resources on a large scale for big problems whose solution will take more than a few years. So, we’d expect market-driven innovation to fail to deliver whenever timescales for development are too long, or the expense of development too great. Because capital markets are now short-term to the point of irrationality (as demonstrated by this study (PDF) from the Bank of England by Andrew Haldane), the private sector rejects long term investments in infrastructure and R&D, even if the net present value of those investments would be significantly positive. In the energy sector, for example, we saw widespread liberalisation of markets across the world in the 1990s. One predictable consequence of this has been a collapse of private sector R&D in the energy sector (illustrated for the case of the USA by Dan Kammen here – The Incredible Shrinking Energy R&D Budget (PDF)).
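To see concretely how myopic discounting kills long-horizon investment, consider a toy net-present-value calculation; the project numbers here are purely illustrative inventions of mine, not Haldane’s:

    # Illustration: excess discounting flips the sign of a long-horizon NPV
    def npv(cashflows, rate):
        """Net present value of annual cash flows, cashflows[0] at year zero."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # R&D project: £100m up front, £12m/yr payoff for 30 years from year 6
    project = [-100] + [0] * 5 + [12] * 30

    print(npv(project, 0.05))   # ~ +£45m at a rational 5% discount rate
    print(npv(project, 0.12))   # ~ -£45m with a myopic premium added

A project that is comfortably worthwhile at a rational discount rate becomes strongly negative if investors implicitly demand returns as though rates were several points higher – the kind of excess discounting for which Haldane’s study finds evidence.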

The contrast is clear if we compare two different cases of innovation – the development of new apps for the iPhone, and the development of innovative new passenger aircraft, like the composite-based Boeing Dreamliner and Airbus A350. The world of app development is one in which tens or hundreds of thousands of people can and do try out all sorts of ideas, a few of which have turned out to fulfil an important and widely appreciated need and have made their developers rich. This is a world that’s well described by the Hayekian picture of experimentation and evolution – the low barriers to entry and the ease of widespread distribution of the products rewards experimentation. Making a new airliner, in contrast, involves years of development and outlays of tens of billions of dollars in development cost before any products are sold. Unsurprisingly, the only players are two huge companies – essentially a world duopoly – each of whom is in receipt of substantial state aid of one form or another. The lesson is that technological innovation doesn’t just come in one form. Some innovation – with low barriers to entry, often building on existing technological platforms – can be done by individuals or small companies, and can be understood well in terms of the Hayekian picture. But innovation on a larger scale, the more radical innovation that leads to new general purpose technologies, needs either a large company with a protected income stream or outright state action. In the past the companies able to carry out innovation on this scale would typically have been a state sponsored “national champion”, supported perhaps by guaranteed defense contracts, or the beneficiary of a monopoly or cartel, such as the postwar Bell Labs.

If the prevalence of this Hayekian thinking about technological innovation really does mean that we’re less able now to introduce major, world-changing innovations than we were 50 years ago, this would matter a great deal. One way of thinking about this is in evolutionary terms – if technological innovation is only able to proceed incrementally, there’s a risk that we’re less able to adapt to sudden shocks, we’re less able to anticipate the future and we’re at risk of being locked into technological trajectories that we can’t alter later in response to unexpected changes in our environment or unanticipated consequences. I’ve written earlier about the suggestion that, far from seeing universal accelerating change, we’re currently seeing innovation stagnation. The risk is that we’re seeing less in the way of really radical innovation now, at a time when pressing issues like climate change, peak cheap oil and demographic transitions make innovation more necessary than ever. We are seeing a great deal of very rapid innovation in the world of information, but this rapid pace of change in one particular realm has obscured much less rapid growth in the material realm and the biological realm. It’s in these realms that slow timescales and the large scale of the effort needed mean that the market seems unable to deliver the innovation we need.

It’s not going to be possible, nor would it be desirable, for us to return to the political economies of the mid-twentieth century warfare states that delivered the new technologies that underlie our current economies. Whatever other benefits the turn to free markets may have delivered, it seems to have been less effective at providing radical innovation, and with the need for those radical innovations becoming more urgent, some rethinking is now urgently required.

Slouching towards an industrial policy

The UK’s Science Minister, David Willetts, gave a speech last week on “Our High Tech Future”. The headlines about it were dominated by one somewhat odd policy announcement, which I’ll come to later, but what’s more interesting is the fact that he chose (apparently at quite short notice) to give the speech at all, only weeks after the publication of a strategy for “Innovation and Research for Growth” that was widely regarded as, at best, a retrospective attempt to give coherence to a series of rather random acts of policy. I’m tempted to interpret the speech as a signal that a not-yet-fully-formed government policy is still evolving in some quite interesting directions. In short, after 32 years, the Conservatives are rediscovering the need for industrial policy.
Continue reading “Slouching towards an industrial policy”

Science in hard times

How should the hard economic times we’re going through affect the amount of money governments spend on scientific and technological research? The answer depends on your starting point – if you think that science is an optional extra that we do when we’re prosperous, then decreasing prosperity must inevitably mean we can afford to do less science. But if you think that our prosperity depends on the science we do, then stalling growth is a signal telling you to devote more resources to research. This is a huge oversimplification, of course; the link between science and prosperity can never be automatic. How effective the link is depends on the type of science and technology you support, and on the nature of the wider economic system that translates innovations into economic growth. It’s worth taking a look at recent economic history to see some of the issues at play.

UK Government spending on research and development compared to the real growth in per capita GDP. R&D data (red) from the Royal Society report The Scientific Century, adjusted to constant 2005 £s. GDP per person data (blue) from Measuring Worth. Dotted blue line: projections from the November 2011 forecast of the UK Office for Budget Responsibility (uncorrected for population changes).

The graph shows the real GDP per person in the UK from 1946 up to the present, together with the amount of money, again in real terms, spent by the government on research and development. The GDP graph tells an interesting story in itself, making very clear the discontinuity in economic policy that happened in 1979. In that year Margaret Thatcher’s new Conservative government overthrew a thirty year broad consensus, shared by both parties, on how the economy should be managed. Before 1979, we had a mixed economy, with substantial industrial sectors under state control, highly regulated financial markets, including controls on the flow of capital in and out of the country, and the macro-economy governed by the principles of Keynesian demand management. After 1979, it was not Keynes but Hayek who supplied the intellectual underpinning, and we saw progressive privatisation of those parts of the economy under state control, the abolition of controls on capital movements and deregulation of financial markets. In terms of economic growth, measured in real GDP per person, the period between 1946 and 1979 was remarkable, with a steady increase of 2.26% per year – this is, I think, the longest sustained period of high growth in the modern era. Since 1979, we’ve seen a succession of deep recessions, followed by periods of rapid but evidently unsustainable growth, sustained by asset price bubbles. The peaks of these periods of growth have barely attained the pre-1979 trend line, while in our current economic travails we find ourselves about 9% below trend. Not only does there seem no imminent prospect of the rapid growth we’d need to return to that trend line, but there now seems to be a likelihood of another recession.

The plot for public R&D spending tells its own story, which also shows a turning point with the Thatcher government. From 1980 until 1998, we see a substantial long-term decline in research spending, not just as a fraction of GDP, but in absolute terms; since 1998 research spending has increased again in real terms, though not substantially faster than the rise in GDP over the same period. Underlying the decline were a number of factors. There was a real squeeze on spending on research in universities, well remembered by those who were working in them at the time. Meanwhile the research spending of those industries that were being privatised – such as telecommunications and energy – was removed from the government spending figures. And the activities of government research laboratories – particularly those associated with defense and the nuclear industry – were significantly wound down. Underlying this winding down of research were both a political motive and an ideological one. Big government spending on high technology was associated with the corporatist politics of the 1960s, subscribed to by both parties but particularly associated with Labour and the memorable slogan “The White Heat of Technology”. To its detractors this summoned up associations with projects like the supersonic passenger aircraft Concorde, a technological triumph but a commercial disaster. To the adherents of the Hayekian free market ideology that underpinned the Thatcher government, the state had no business doing any research but the most basic and far from market. In fact, state-supported research was likely to be not only less efficient and less effectively directed than research in the private sector, but by “squeezing out” such private sector research it would actually make the economy less efficient.

The idea that state support of research “squeezes out” private sector research remains attractive to free market ideologues, but the empirical evidence points to the opposite conclusion – state spending and private sector spending on research support each other, with increases in state R&D spending leading to increases in R&D by business (see for example Falk M (2006), What drives business research and development intensity across OECD countries? (PDF), Applied Economics 38, p 533). Certainly, in the UK, the near-halving of government R&D spend between 1980 and 1999 did not lead to an increase in R&D by business; instead, business R&D also fell, from about 1.4% of GDP to 1.2%. Not only did those companies that had been privatised substantially reduce their R&D spending, but other major players in industrial R&D – such as the chemical company ICI and the electronics company GEC – substantially cut back their activities. At the time many rationalised this as the inevitable result of the UK economy changing its mix of sectors, away from manufacturing towards services such as the financial service industry.

None of this answers the questions: how much should one spend on R&D, and what difference do changes in R&D spend make to economic performance? It is certainly clear that the decline in R&D spending in the UK isn’t correlated with any improvement in its economic performance. International comparisons show that the proportion of GDP spent on R&D in the UK is significantly lower than in most of its major competitors, and within this the proportion of R&D supported by business is itself unusually low. On the other hand, the performance of the UK science base, as measured by academic measures rather than economic ones, is strikingly good. Updating a much-quoted formula: the UK accounts for 3% of the total world R&D spend, has 4.3% of the world’s researchers, who produce 6.4% of the world’s scientific articles, which attract 10.9% of the world’s citations and include 13.8% of the world’s top 1% of highly cited papers (these figures come from the analysis in the recent report The International Comparative Performance of the UK Research Base).

This formula is usually quoted to argue for the productivity and effectiveness of the UK research base, and it clearly tells a powerful story about its strength as measured in purely academic terms. But does this mean we get the best out of our research in economic terms? The partial recovery in government R&D spending between 1998 and last year brought real terms increases in science budgets (though without significantly increasing the fraction of GDP spent on science). These increases were focused on basic research, whose share of total government science spending doubled between 1986 and 2005. This has allowed us to preserve the strength of our academic research base, but the decline in more applied R&D in both government and industrial laboratories has weakened our capacity to convert this strength into economic growth.

Our national economic experiment in deregulated capitalism ended in failure, as the 2008 banking collapse and subsequent economic slump have made clear. I don’t know how much the systematic running down of our national research and development capability in the 1980s and 1990s contributed to this failure, but I suspect it’s a significant part of the bigger picture of resources misallocated through the booms and busts, and of the disappointingly slow growth in economic productivity.

What should we do now? Everyone talks about the need to “rebalance the economy”, and the government has just released an “Innovation and Research Strategy for Growth”, which claims that “The Government is putting innovation and research at the heart of its growth agenda”. The contents of this strategy – in truth largely a compendium of small-scale interventions that have already been announced, which together still don’t fully reverse last year’s cuts in research capital spending – are of a scale that doesn’t begin to meet this challenge. What we should have seen is, not just a commitment to maintain the strength of the fundamental science base, important though that is, but a real will to reverse the national decline in applied research.

Good capitalism, bad capitalism and turning science into economic benefit

Why isn’t the UK more successful at converting its excellent science into wealth creating businesses? This is a perennial question – and one that’s driven all sorts of initiatives to get universities to handle their intellectual property better, to develop closer partnerships with the private sector and to create more spinout companies. Perhaps UK universities shied away from such activities thirty years ago, but that’s not the case now. In my own university, Sheffield, we have some very successful and high profile activities in partnership with companies, such as our Advanced Manufacturing Research Centre with Boeing, shortly to be expanded as part of an Advanced Manufacturing Institute with heavy involvement from Rolls Royce and other companies. Like many universities, we have some interesting spinouts of our own. And yet, while the UK produces many small high tech companies, we just don’t seem to be able to grow those companies to a scale where they’d make a serious difference to jobs and economic growth. To take just one example, the Royal Society’s Scientific Century report highlighted Plastic Logic, a company making flexible displays for applications like e-book readers, based on great research by Richard Friend and Henning Sirringhaus at Cambridge University. It’s a great success story for Cambridge, but the picture for the UK economy is less positive. The company’s head office is in California, its first factory was in Leipzig, and its major manufacturing facility will be in Russia – the latter not unrelated to the fact that the Russian agency Rusnano invested $150 million in the company earlier this year.

This seems to reflect a general problem – why aren’t UK based investors more willing to put money into small technology based companies to allow them to grow? Again, this is something people have talked about for a long time, and there have been a number of more or less (usually less) successful government interventions to address the issue. The latest of these was announced in the Conservative party conference speech by the Chancellor of the Exchequer, George Osborne – “credit easing” to “help solve that age old problem in Britain: not enough long term investment in small business and enterprise.”

But it’s not as if there isn’t any money in the UK to be invested – so the question to ask isn’t why money isn’t invested in high tech businesses, it is why money is invested in other places instead. The answer must be simple – because those other opportunities offer higher returns, at lower risk, on shorter timescales. The problem is that many of these opportunities don’t support productive entrepreneurship, which brings new products and services to people who need them and generates new jobs. Instead, to use a distinction introduced by economist William Baumol (see, for example, his article Entrepreneurship: Productive, Unproductive, and Destructive, PDF), they support unproductive entrepreneurship, which exploits suboptimal reward structures in an economy to make profits without generating real value. Examples of this kind of activity might include restructuring companies to maximise tax evasion, speculating in financial and property markets when the downside risk is shouldered by the government, exploiting privatisations and public/private partnerships that have been structured to the disadvantage of the tax-payer, and generating capital gains which result from changes in planning and tax law.

Most criticism of this kind of bad capitalism focuses on issues of fairness and equity, and on the damage to the democratic process done by the associated lobbying and influence-peddling. But it causes deeper problems than this – money and effort used to support unproductive entrepreneurship is unavailable to support genuine innovation, to create new products and services that people and society want and need. In short, bad capitalism crowds out good capitalism, and innovation suffers.

Some questions for British research policy

This piece is based on a summing-up I did at a meeting in London this March: A New Mandate? Research Policy in the 21st Century.

There seem to be two lurking worries that concern people in science policy in the UK at the moment. The first is that, having built the case for state support of science on the promise that it will deliver innovation and economic growth, that innovation and economic growth may not in fact be delivered. The second is that the scientific enterprise doesn’t have a sufficiently broad base of popular support. In short, are we suffering from an innovation deficit, and does our research effort have a democratic deficit?

An innovation deficit

The letter accompanying the funding settlement from BIS to the Research Councils called for “even more impact” – the impact agenda in the research councils and funding agencies is being pressed with a real sense of urgency, even though the argument for it is by no means settled.

To many scientists the economic case for supporting science may seem self-evident, but the solid evidence in support of this is surprisingly slippery. There is certainly the feeling in some quarters – and not just the Guardian’s Simon Jenkins – that the economic impact of science has been oversold. The Royal Society’s “The Scientific Century” document was a serious attempt to assemble the evidence. What strikes me, though, is that it doesn’t make a great deal of sense to try and give an answer to the primary question – to what extent should the state support science – without considering the much broader question of how our political and economic system is set up to support innovation.

And it is in relation to innovation that there are some more general worries, both at a global level and in our own national circumstances:

  • Is the rate of innovation actually slowing – leaving aside the special case of information technology, have the easiest gains from new technology already been made? I discussed this in an earlier post Accelerating Change or Innovation Stagnation?
  • Is our UK innovation system broken? In the UK postwar settlement, universities were only one of a number of kinds of places where research – especially more applied research – was carried out. Major conglomerates like ICI and GEC had large corporate laboratories, there were major government laboratories associated with organisations like the Atomic Energy Authority, and the military supported laboratories like RSRE Malvern which combined quite basic research with more strategic research and development. In the post-Thatcher climate of privatisation, deregulation and the drive to “unlock shareholder value” most of these alternative research organisations have disappeared.
  • In their place, we see a new emphasis on the development of protectable intellectual property in Universities with a view to creating venture-capital backed spin-out companies. This gives rise to two questions – how effective is this as a mechanism for technology transfer, and does the new emphasis on protectable IP have any deleterious effects on innovation itself? Certainly, the experience of nano- and bio- technology does point to potential problems of patent thickets and an “anti-commons” effect in academia, where pre-existing IP positions inhibit other scientists from working in particular areas. It’s these worries, among other factors, that have driven a move to a more open-source approach, now spreading from IT to new areas like synthetic biology.
  • For the UK, the pharmaceutical industry has been particularly important, as an industry of genuinely international stature which has been politically very important in making the case for state-supported science (and influencing the shape of that support). So the fact that this industry is having innovation difficulties of its own – the closure of the Pfizer R&D site at Sandwich being a very visible signal of this – is worrying.
  • We’re seeing the introduction of a new kind of institution into the innovation landscape – the Technology and Innovation Centres. There’s still uncertainty about their role, and some governance issues remain unclear, but what’s most significant is that there is a widely perceived gap that they are intended to fill.
A democratic deficit

The idea that we’re in the midst of a popular crisis of trust in science is deeply embedded. I’m not convinced that the crisis of trust is with science itself, rather than with the use of science in politics and commerce, which is something slightly different; but nonetheless this idea has been a driving force for much of the new enthusiasm for public engagement and dialogue, and for taking this public engagement upstream. While some people (including me) would want to see this as part of a broader effort to steer technology to meet widely shared societal goals, for many it is still seen as being about gaining acceptance for new technologies.

On the face of it, these two worries – of an innovation deficit and of a democratic deficit – look to be in opposition. The idea of an innovation deficit suggests that our problem is that technology isn’t moving fast enough, and that we have to work to remove obstacles in the way of innovation, while the negative perception of public engagement holds that its job is to put those obstacles back in the way. In times like these, this perception is a real danger.

But actually they’re quite closely connected. Underneath these dilemmas are two deeper worries – a loss of confidence in the self-organising capability of the scientific enterprise, and a sense that something’s missing in our innovation system.

Research councils – “from funder to sponsor”

It’s these worries that underlie current moves in the UK research councils, perhaps most explicitly articulated by EPSRC in its stated aim of moving “from funder to sponsor” – that is, moving from responding to the agenda of the scientific community towards commissioning research in support of national needs.

The issues then are: how is national need defined, and how is the process of defining that national need given legitimacy?

This is a big problem in our current system, where the political fashion is explicitly not to define such a need in anything other than rather general and vacuous terms (like saying we need a “knowledge economy”). To pose the question in its most pointed form: does it make sense to have a science policy if you don’t have an industrial policy?

This situation puts the research councils in a very difficult position. If governments are not prepared to develop such an industrial policy, how can the research councils do it themselves – how can they do it practically, and how can their decisions acquire legitimacy?

These legitimacy problems arise in three directions:
1. with the scientific community
2. with the government
3. with the population at large.

The scientific community will see a potential clash with the Haldane principle (invented tradition though David Edgerton says this is), which could be interpreted as saying that the scientific community should be the primary source of research priorities, as an embodiment of the principle of autonomy of the scientific enterprise.

With the government, a research council like EPSRC is in a very difficult position. It has to deliver the science in support of a national policy which does not, in fact, exist, but it will be judged by very instrumental measures of wealth creation.

Can “challenge-led” research help?

“Societal challenges” offer a new synthesis that can be considered a response to this. I find this attractive as a way of getting beyond a sterile dichotomy between applied and basic research, but the definitions of what might be meant by a societal challenge are contested, value-laden and full of interpretive flexibility.

Societal challenges do have one advantage: a certain security in the face of political uncertainty and lack of direction, and a certain independence from political whims. Who can really disagree with the idea that sustainable energy will be a big deal on rather long timescales, for example?

But there are problems. Can governments genuinely take a long enough view? How can we avoid fads and the herd mentality? How can we be prepared for the inevitable unanticipated changes in direction in world events? How can we move from generalities to the particularities of real technologies?

What is the place of public engagement? On the one hand, what better way of getting a direct view of what national need should be than consulting the public directly? Public engagement then presents itself as a partial solution to the research councils’ problem of legitimacy, though not one that will necessarily make their relationship with government any easier.

There is one other set of institutions that, strangely, doesn’t get mentioned very often: the universities. What’s their role? Can they be more than just a loose coalition of individual researchers responding to the incentives and demands of the research councils and other funders? Universities have their own considerable intellectual resources across the disciplines, and they have their own long history and independence, so one might hope that universities themselves could be another focus for reasserting the public value of research. For a civic university like my own, Sheffield, surely the university should act as a focus for the aspirations of the community it serves.

Science and politics

There is another driving force for public engagement: the sense that representative government is failing to provide a space for discussing big issues about our future choices and how people want to live their lives. Science and technology have to be a part of this discussion, and this is why discussions about science and technology must have a political dimension. There are those who assert the opposite – that science doesn’t have or shouldn’t have a political dimension, and that technology is autonomous, out of control, and can’t be directed. But these assertions are themselves profoundly political statements.