Why R&D matters

The takeover bid for the UK/Swedish pharmaceutical company AstraZeneca by US giant Pfizer has given rare political prominence to the issue of UK-based research and development capacity. Underlying much opposition to the deal is the fear that the combined entity will seek to cut costs, and that R&D expenditure will be first in the firing line. This fear is entirely well-founded; since Pfizer took over Wyeth in 2009 it has reduced total R&D spend from $11bn to $6.7bn, and in the UK Pfizer’s cost-cutting reputation was sealed by the closure of its Sandwich R&D facility in 2011. Nor is the importance of AstraZeneca to UK R&D capacity overstated. In the latest EU R&D scoreboard, of the world’s top 100 companies by R&D expenditure, only two are British. One of these is AstraZeneca, and the other GSK. And, if the deal goes ahead and does result in a significant reduction in UK R&D capacity, it wouldn’t be an isolated event. It would be the culmination of a 30-year decline in UK business R&D intensity, which has taken the UK from being one of the most R&D intensive economies in the developed world to one of the least.

My recent paper “The UK’s Innovation Deficit and How to repair it” analysed this decline in detail and related it to changes in the wider political economy. One response I’ve had to the paper was to regard this decline in R&D intensity as something to be welcomed. In this view, R&D is a legacy of an earlier era of heavy industry and monolithic corporations, now obsolete in a world of open innovation, where valuable intellectual property is more likely to be a brand identity than a new drug or a new electronic device.

I think this view is quite wrong. This doesn’t mean that I think that those kinds of innovation that arise without formal research and development are not important; innovations in the way we organise ourselves, to give one example, can create enormous value. Of course, R&D in its modern sense is just such a social innovation. A product of the late nineteenth and early twentieth centuries, pioneered in the early chemical and electrical industries, it was a way of systematising invention, of bringing together talented and trained people with capital intensive research infrastructure, to harness new developments in science with a focus not on discovery, but on developing new processes and products in response to the opportunities presented to the business. Academic scientists typically underestimate the importance and difficulty of the D part of R&D. But without Carl Bosch, who did the chemical engineering that took Haber’s new chemistry for fixing nitrogen for explosives and fertilisers and made it into an industrial scale process, the Haber-Bosch process would not have transformed the world in the way it did. And, to bring the story up to date, Fert and Grünberg got the Nobel prize for discovering giant magnetoresistance, but it was IBM’s Stuart Parkin who turned that discovery into the basis for high density disk drives, thereby giving the world its first iPods. R&D was the social innovation that allowed us, in Neal Stephenson’s phrase, to Get Big Stuff Done.

But economic theory and the lessons of history both tell us that, in a purely free market economy, very little R&D would get done at all. A company undertaking R&D can very rarely capture the full social gains from a new product in a truly free market, because successful inventions can be copied for much less investment than the cost of the first company’s R&D. This means that in a free market, with all actors behaving rationally, everybody ends up worse off, not benefiting from the innovations that the foregone R&D would have led to.
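The argument here is essentially a prisoner’s dilemma. As a sketch — with entirely made-up numbers (the product value, R&D cost, copying cost, and the assumption that a copier halves the innovator’s market are all illustrative, not figures from this post) — it can be made concrete:

```python
# Illustrative two-firm R&D game. All numbers are assumptions
# chosen only to make the free-rider logic visible.
V = 20   # value of a new product to the firm that sells it
C = 12   # cost of doing the R&D
c = 1    # cost of copying a rival's product

def payoff(me, other):
    """Payoff to 'me' given both firms' choices ('invest' or 'imitate')."""
    if me == "invest" and other == "invest":
        return V - C        # each firm sells its own new product
    if me == "invest" and other == "imitate":
        return V / 2 - C    # the rival copies, halving my market
    if me == "imitate" and other == "invest":
        return V / 2 - c    # copying is far cheaper than inventing
    return 0                # neither invests: nothing to sell or copy

strategies = ("invest", "imitate")

def best_response(other):
    return max(strategies, key=lambda s: payoff(s, other))

# 'imitate' beats 'invest' whatever the rival does: a dominant strategy.
for other in strategies:
    print(f"vs {other}: best response = {best_response(other)}")

# So the equilibrium is (imitate, imitate), paying 0 each, even though
# (invest, invest) would have paid V - C each: everyone is worse off.
print("equilibrium:", payoff("imitate", "imitate"),
      "cooperative:", payoff("invest", "invest"))
```

With these numbers, copying pays 9 against an investing rival while investing pays only 8, and investing against a copier loses money; so neither firm invests, and both forgo the 8 each that mutual investment would have yielded — exactly the under-provision of R&D described above.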

For R&D to happen at anything like the scale that brings the optimum benefits of innovation to society, you need to go beyond the market. One way is for the company doing the R&D to be guaranteed a monopoly position, either by the state granting this explicitly through patent rights, or through companies colluding with each other to create cartels (as the world’s chemical companies did before the Second World War). Companies can have effective monopolies as a result of regulation, giving them revenues that can be recycled into R&D – in the way that the US telephone system gave us Bell Labs. The state can guarantee markets for the products that result from R&D, as is common with the development of military hardware. The state can also directly pay for the R&D itself, through grants and contracts, or it can part-pay indirectly, through tax breaks and soft loans. None of these methods is perfect; they vary both in their effectiveness and in the degree to which they have undesirable side-effects. But the closer one gets to pure market conditions, the less R&D one ends up having.

To return to the potential takeover of AstraZeneca by Pfizer, it is clear that this would result in a reduction in R&D capacity, and that would be a bad thing. But we should look beyond the specific case to the systemic problems that have led to this situation. Given the incentives that are currently in place, it probably is rational for pharmaceutical companies to cut their R&D budgets – perhaps very substantially. The R&D costs of developing each new drug continue to escalate, the risks are high, and the returns to the company look distant and uncertain. And yet, it is society more widely that bears the cost of a reduction in R&D intensity; the wider social return that we forego, in the treatments that are left undeveloped, is very much greater than the gain to the shareholders.

We need R&D to be able to make radical technological innovations. Basic science alone is not enough. Indeed, we need a thriving R&D capacity to be able to make the most of the science we do. This emerges clearly from a recent report, The Economic Significance of the UK Science Base. This demonstrates, through an econometric analysis of the data, that the return on basic research expenditure is positively correlated with wider R&D capacity. A country like the UK, which aspires to an advanced knowledge economy, and which has a comparative advantage in science, cannot afford to be complacent about the erosion of its R&D capacity.

A slowing rate of technological innovation will lead to economic stagnation – perhaps, as my last post suggested, this is already starting to happen. Innovation is possible without formal R&D, but there are some big problems that need addressing with the scale and focus of effort that only R&D can bring. A few clever people can put together a new social media application, and if they’re very lucky make a lot of money from it. But this is not the way that we will develop a successful therapy for Alzheimer’s disease, or new antibiotics to combat antibiotic resistance. For that you need the kind of large scale R&D capacity that the pharmaceutical industry can deploy. So, it is right to oppose the takeover of AstraZeneca by Pfizer, but we should also try to fix the deeper problems with R&D.

2 Responses to “Why R&D matters”

  1. Dave from Canada says:

    I’m not so sure R&D has changed anything at all.

    One hundred years ago, the cheapest way to generate electricity was to take coal out of the ground and generate electricity using a steam turbine. After one hundred years of supposed technological innovation, the cheapest way to generate electricity is to take coal out of the ground and generate electricity using a steam turbine. The rapid growth of China was not powered by wind, solar, nuclear fission, natural gas, or oil. It was powered by the same fuel that powered the British Industrial Revolution: coal.

    Perhaps the most important point underlying my viewpoint is that the developing world (an increasingly dominant component of the global average) will continue to prefer highly practical and rapidly scalable solutions, and that the ideologically attractive clean energy solutions touted by so many developed world citizens partaking in the energy and climate debate are still very far from trumping fossil fuels in this regard.

    What matters is which infrastructure is very rapidly scalable, primarily due to low up-front costs and well-established best practices.

  2. Richard Jones says:

    China is following the well-trodden path, pioneered by Japan and Korea before it, of fast catch-up growth, driven by first copying and implementing at scale technologies developed elsewhere, and then developing their own R&D when they reach the technological frontier themselves. A generation ago Korea was a third world country, and now it is, I think, the most R&D intensive country in the world.

    The industries China chose to develop first are exactly those that benefitted from the invention of R&D in the West in the late 19th and early 20th centuries – chemicals, steel, electrical engineering and then electronics (Vaclav Smil’s two books, “Creating the Twentieth Century” and “Transforming the Twentieth Century”, are great overviews of how these developed), powered, not by steam engines, as the UK industrial revolution was, but by electricity (itself, as you say, generated by coal). China is currently undergoing a massive expansion of nuclear power, with 29 reactors under construction, 57 more planned and a further 118 proposed. These are predominantly copies of an old Westinghouse design. China is already the largest producer of solar cells in the world and within a year or so will overtake Germany to have the largest installed base of PV. Again, this is catch-up growth, so it is based on other people’s R&D – I think in this case largely Japanese. But the R&D intensity of the Chinese economy is growing very fast, so it would be very unwise to imagine that China will continue to depend on technology developed elsewhere, in any sector, for very long.