The Physics of Economics

This is the first of two posts which began life as a single piece with the title “The Physics of Economics (and the Economics of Physics)”. In the first section, here, I discuss some ways physicists have attempted to contribute to economics. In the second half, I turn to the lessons that economics should learn from the history of a technological innovation with its origin in physics – the semiconductor industry.

Physics and economics are two disciplines which have quite a lot in common: they’re both mathematical in character, many of their practitioners are not short of intellectual self-confidence, and both have imperialist tendencies towards their neighbouring disciplines. So the interaction between the two fields should be, if nothing else, interesting.

The origins of econophysics

The most concerted attempt by physicists to colonise an area of economics is in the behaviour of financial markets – in the field which calls itself “econophysics”. Actually, at its origins the traffic went both ways – the mathematical theory of random walks that Einstein developed to explain the phenomenon of Brownian motion had been anticipated by the French mathematician Bachelier, who derived the theory to explain the movements of stock markets. Much later, the economic theory that markets are efficient brought this line of thinking back into vogue – it turns out that financial markets can quite often be modelled as simple random walks – but not quite always. The random steps that markets take aren’t drawn from a Gaussian distribution – the distribution has “fat tails”, so rare events – like big market crashes – aren’t anywhere near as rare as simple theories assume.
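To make the distinction concrete, here is a minimal sketch in Python (the step distributions and parameters are my own illustrative choices, not a calibrated market model) contrasting Gaussian steps with fat-tailed ones drawn from a Student-t distribution: the fat-tailed walk throws up extreme moves that the Gaussian walk essentially never produces.

```python
# Minimal sketch: Gaussian steps vs fat-tailed steps from a Student-t distribution
# with few degrees of freedom; the parameters are illustrative, not a market model.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 10_000
step_distributions = {
    "Gaussian": rng.normal(size=n_steps),
    "Student-t (df=3)": rng.standard_t(df=3, size=n_steps),
}

for name, steps in step_distributions.items():
    sigma = steps.std()
    n_extreme = int((np.abs(steps) > 5 * sigma).sum())
    print(f"{name:>16}: largest |step| = {np.abs(steps).max() / sigma:.1f} sigma, "
          f"steps beyond 5 sigma = {n_extreme}")
```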

Empirically, it turns out that the distributions of these rare events can sometimes be described by power laws. In physics, power laws are associated with what are known as critical phenomena – behaviours such as the transition from a liquid to a gas, or from a magnet to a non-magnet. These phenomena are characterised by a certain universality, in the sense that the quantitative laws – typically power laws – that describe the large scale behaviour of these systems don’t strongly depend on the details of the individual interactions between the elementary objects (the atoms and molecules, in the case of magnetism and liquids) whose interaction leads collectively to the larger scale phenomenon we’re interested in.

For “econophysicists” – whose background has often been in the study of critical phenomena – it is natural to try to situate theories of the movements of financial markets in this tradition, finding analogies with other places where power laws can be found, such as the distribution of earthquake sizes and the behaviour of sand-piles. In terms of physicists’ actual impact on participants in financial markets, though, there’s a paradox. Many physicists have found (often very lucrative) employment as quantitative traders, but the theories that academic physicists have developed to describe these markets haven’t made much impact on the practitioners of financial economics, who have their own models to describe market movements.

Other ideas from physics have made their way into discussions about economics. Much of classical economics depends on ideas like the “representative household” or the “representative firm”. Physicists with a background in statistical mechanics recognise this sort of approach as akin to a “mean field theory”. The idea that a complex system is well represented by its average member is one that can be quite fruitful, but in some important circumstances fails – and fails badly – because the fluctuations around the average become as important as the average itself. This motivates the idea of agent based models, to which physicists bring the hope that even simple “toy” models can bring insight. The Schelling model is one such very simple model that came from economics, but which has a formal similarity with some important models in physics. The study of networks is another place where one learns that the atypical can be disproportionately important.
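The Schelling model is simple enough to sketch in a few lines. The toy implementation below (grid size, tolerance and the random-relocation rule are arbitrary choices of mine, not taken from any particular source) shows the characteristic result: agents with only a mild preference for like neighbours end up, collectively, in a strongly segregated pattern.

```python
# A minimal, illustrative Schelling segregation model; grid size, tolerance and the
# random-relocation rule are arbitrary choices, not taken from any particular source.
import numpy as np

rng = np.random.default_rng(1)
size, tolerance, n_moves = 40, 0.3, 50_000   # unhappy if < 30% of neighbours are like you

# 0 = empty site, 1 and 2 = the two types of agent
grid = rng.choice([0, 1, 2], size=(size, size), p=[0.1, 0.45, 0.45])

def like_fraction(grid, i, j):
    """Fraction of an agent's occupied neighbours that share its type (None if isolated)."""
    patch = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    neighbours = patch[patch != 0]
    total = len(neighbours) - 1                 # exclude the agent itself
    if total == 0:
        return None
    same = (neighbours == grid[i, j]).sum() - 1
    return same / total

def mean_segregation(grid):
    fracs = [like_fraction(grid, i, j) for i, j in zip(*np.nonzero(grid))]
    return np.mean([f for f in fracs if f is not None])

print("initial like-neighbour fraction:", round(mean_segregation(grid), 2))
for _ in range(n_moves):
    i, j = rng.integers(size), rng.integers(size)
    if grid[i, j] != 0:
        frac = like_fraction(grid, i, j)
        if frac is not None and frac < tolerance:      # unhappy: move to a random empty site
            empties = np.argwhere(grid == 0)
            ei, ej = empties[rng.integers(len(empties))]
            grid[ei, ej], grid[i, j] = grid[i, j], 0
print("final like-neighbour fraction:  ", round(mean_segregation(grid), 2))
```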

If markets are about information, then physics should be able to help…

One very attractive emerging application of ideas from physics to economics concerns the place of information. Friedrich Hayek stressed the compelling insight that one can think of a market as a mechanism for aggregating information – but a physicist should understand that information is something that can be quantified, and (via Shannon’s theory) that there are hard limits on how much information can be transmitted in a physical system. Jason Smith’s research programme builds on this insight to analyse markets in terms of an information equilibrium[1].
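As a toy illustration of what it means to say that information can be quantified, the snippet below computes the Shannon entropy, in bits, of a couple of made-up probability distributions for a market “signal”; it is not part of Smith’s programme, just the standard definition.

```python
# Toy calculation of Shannon entropy, in bits, for made-up distributions of a
# market "signal"; just the standard definition, not a model of any real market.
import numpy as np

def entropy_bits(probabilities):
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(entropy_bits([0.5, 0.5]))     # an even up/down move carries 1 bit per observation
print(entropy_bits([0.95, 0.05]))   # a heavily skewed one carries only ~0.29 bits
```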

Some criticisms of econophysics

How significant is econophysics? A critique from some (rather heterodox) economists – Worrying trends in econophysics – is now more than a decade old, but still stings (see also this commentary from the time from Cosma Shalizi – Why Oh Why Can’t We Have Better Econophysics? ). Some of the criticism is methodological – and could be mostly summed up by saying, just because you’ve got a straight bit on a log-log plot doesn’t mean you’ve got a power law. Some criticism is about the norms of scholarship – in brief: read the literature and stop congratulating yourselves for reinventing the wheel.
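The log-log point is worth making concrete. In the hedged sketch below (distribution parameters are arbitrary), data drawn from a lognormal distribution, which has no power-law tail, still produces a respectably straight line when its empirical survival function is fitted on log-log axes; a high R² on such a fit is not, by itself, evidence of a power law.

```python
# Sketch of the pitfall, with arbitrary parameters: a lognormal sample (no power-law
# tail) can still give a convincingly straight line on a log-log survival plot.
import numpy as np

rng = np.random.default_rng(2)
data = rng.lognormal(mean=0.0, sigma=2.0, size=20_000)

x = np.sort(data)
survival = 1.0 - np.arange(1, len(x) + 1) / len(x)     # empirical P(X > x)
tail = (x > np.median(x)) & (survival > 1e-3)          # the "tail" a naive fit would use

slope, intercept = np.polyfit(np.log10(x[tail]), np.log10(survival[tail]), 1)
residuals = np.log10(survival[tail]) - (slope * np.log10(x[tail]) + intercept)
r_squared = 1 - residuals.var() / np.log10(survival[tail]).var()
print(f"apparent power-law exponent: {-slope:.2f}, straight-line R^2: {r_squared:.3f}")
```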

But the most compelling criticism of all is about the choice of problem that econophysics typically takes. Most attention has been focused on the behaviour of financial markets, not least because these provide a wealth of detailed data to analyse. But there’s more to the economy – much, much more – than the financial markets. More generally, the areas of economics that physicists have tended to apply themselves to have been about exchange, not production – studying how a fixed pool of resources can be allocated, not how the size of the pool can be increased.

[1] For a more detailed motivation of this line of reasoning, see this commentary, also from Cosma Shalizi on Francis Spufford’s great book “Red Plenty” – “In Soviet Union, Optimization Problem Solves You”.

Between promise, fear and disillusion: two decades of public engagement around nanotechnology

I’m giving a talk with this title at the IEEE Nanotechnology Materials and Devices Conference (NMDC) in Portland, OR on October 15th this year. The abstract is below, and you can read the conference paper here: Between promise, fear and disillusion (PDF).

Nanotechnology emerged as a subject of public interest and concern towards the end of the 1990s. A couple of decades on, it’s worth looking back at the way the public discussion of the subject has evolved. On the one hand we had the transformational visions associated with the transhumanist movement, together with some extravagant promises of new industries and medical breakthroughs. The flipside of these were worries about profound societal changes for the worse and, less dramatically, about the potential for environmental and health impacts from the release of nanoparticles.

Since then we’ve seen some real achievements in the field, both scientific and technological, but also a growing sense of disillusion with technological progress, associated with slowing economic growth in the developed world. What should we learn from this experience? What’s the right balance between emphasising the potential of emerging technologies and cautioning against over-optimistic claims?

Read the full conference paper here: Between promise, fear and disillusion (PDF).

The UK’s top six productivity underperformers

The FT has been running a series of articles about the UK’s dreadful recent productivity performance, kicked off with this very helpful summary – Britain’s productivity crisis in eight charts. One important aspect of the series is its focus on the (negative) contribution of formerly leading sectors of the economy which have, since the financial crisis, underperformed:

“Computer programming, energy, finance, mining, pharmaceuticals and telecoms — which together account for only one-fifth of the economy — generated three-fifths of the decline in productivity growth.”

The original source of this striking statistic is a paper by Rebecca Riley, Ana Rincon-Aznar and Lea Samek – Below the Aggregate: A Sectoral Account of the UK Productivity Puzzle.

What this should stress is that there’s no single answer to the productivity crisis. We need to look in detail at different industrial sectors, different regions of the UK, and identify the different problems they face before we can work out the appropriate policy responses.

So what can we say about what’s behind the underperformance of each of these six sectors, and what lessons should policy-makers learn in each case? Here are a few preliminary thoughts.

Mining. This is dominated by North Sea Oil. The oil is running out, and won’t be coming back – production peaked in 2000; what oil is left is more expensive and difficult to get out.
Lessons for policy makers: more recognition is needed that the UK’s prosperity in the 1990s and early 2000s depended as much on the accident of North Sea oil as on any particular strength of the policy framework.

Finance. It’s not clear to me how much of the apparent pre-crisis productivity boom was real, but post-crisis increased regulation and greater capital requirements have reduced apparent rates of return in financial services. This is as it should be.
Lessons for policy makers: this sector is the problem, not the solution, so calls to relax regulation should be resisted, and so-called “innovation” that in practice amounts to regulatory arbitrage discouraged.

The end of North Sea oil and the finance bubble cannot be reversed – these are headwinds that the economy has to overcome. We have to find new sources of productivity growth rather than looking back nostalgically at these former glories (for example, there’s a risk that the enthusiasm for fracking and fintech represents just such nostalgia).

Energy. Here, a post-privatisation dysfunctional pseudo-market has prioritised sweating existing assets rather than investing. Meanwhile there’s been an unclear and inconsistent government policy environment; sometimes the government has willed the ends without providing the means (e.g. nuclear new build), elsewhere it has introduced perverse and abrupt changes of tack (e.g. in its support for onshore wind and solar).
Lessons for policy makers: develop a rational, long-term energy strategy that will deliver the necessary decarbonisation of the energy economy. Then stick to it, driving innovation to support the strategy. For more details, read chapter 4 – Decarbonisation of the energy economy – of the Industrial Strategy Commission’s final report.

Computer programming. Here I find myself on less sure ground. Are we seeing the effects of increasing overseas outsourcing and competition, for example to India’s growing IT industry? Are we seeing the effect of more commoditisation of computer programming, with new business models such as “software as a service”?

Telecoms. Again, here I’m less certain of what’s been going on. Are we seeing the effect of lengthening product cycles as the growth in processor power slows? Is this the effect of overseas competition – for example, rapidly growing Chinese firms like Huawei – moving up the value chain? Here it’s also likely that measurement problems – in correctly accounting for improvements in quality – will be most acute.

Pharmaceuticals. As my last blogpost outlined, productivity growth in pharmaceuticals depends on new products being developed through formal R&D, their value being protected by patents. There has been a dramatic, long-term fall in the productivity of pharma R&D, so it is unsurprising that this is now feeding through into reduced labour productivity.
Lessons for policy makers: see the recent NESTA report “The Biomedical Bubble”.

Many of these issues were already discussed in my 2016 SPERI paper Innovation, research and the UK’s productivity crisis. Two years on, the productivity crisis seems even more pressing, and as the FT series illustrates, is receiving more attention from policy makers and economists (though still not enough, in view of its fundamental importance for living standards and fiscal stability). The lesson I would want to stress is that, to make progress, policy makers and economists need to go beyond generalities, and pay more attention to the detailed particulars of individual industries, sectors and regions, and the different ways innovation takes place – or hasn’t been taking place – within them.

Productivity: in R&D, healthcare and the whole economy

This is a slightly adapted extract from The Biomedical Bubble: Why UK research and innovation needs a greater diversity of priorities, politics, places and people, my report for NESTA, with James Wilsdon.

Productivity is a measure of the efficiency with which inputs are converted into outputs of value – increasing productivity lets us get more from less. We talk about different kinds of productivity in our report:

● Economic productivity, at the level of the nation, regions and industry sectors, most usefully expressed as labour productivity;
● R&D productivity: the effectiveness with which research and development expenditure translates into new products and processes and thus economic value;
● Healthcare productivity: the effectiveness with which given inputs of money and labour produce improved health outcomes.

The UK’s productivity problem

The performance of the whole national economy is measured by labour productivity – the value of the goods and services (as measured by GDP) produced by an (average) hour of work. Increases in labour productivity arise from a combination of capital investment and technological progress, and are the fundamental drivers of economic growth and increasing living standards.


Labour productivity since 1970. ONS, January 2018 release.

Labour productivity in the UK has stagnated since the global financial crisis of 2007/8: currently it’s some 15-20% below what would be expected if the pre-crisis trend had continued, the worst performance for at least a century. It’s this stagnation of labour productivity that sets our overall economic environment, leading directly to wage stagnation and a persistently challenging fiscal situation for the government, which has responded with sustained austerity.
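A back-of-envelope calculation, under assumed growth rates (roughly 2% a year trend growth before the crisis, roughly flat since), shows where a shortfall of that order of magnitude comes from:

```python
# Back-of-envelope check on the "15-20% below trend" figure, assuming roughly 2% a
# year trend growth before the crisis and roughly flat productivity for a decade after.
pre_crisis_growth = 0.02    # assumed trend growth per year
post_crisis_growth = 0.002  # assumed actual growth per year
years = 10                  # roughly 2008 to 2018

trend_level = (1 + pre_crisis_growth) ** years
actual_level = (1 + post_crisis_growth) ** years
shortfall = 1 - actual_level / trend_level
print(f"shortfall relative to the pre-crisis trend: {100 * shortfall:.0f}%")  # ~16%
```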

The overall labour productivity of the economy is an aggregate; we can decompose it to consider the contribution of different geographical regions or industry sectors. A regional breakdown reveals how geographically unbalanced the UK economy is. London dominates, with labour productivity 33% above the UK average. Of the other regions, only the South East is above the national average. Wales and Northern Ireland are 17% below the UK average, with other regions in the English North and Midlands between 7 and 15% below average.

The pharmaceutical industry’s contribution to overall productivity growth – from leader to laggard

There’s a very wide dispersion of labour productivity across industrial sectors. In understanding their contribution to the overall productivity puzzle, it’s important to consider both the level of labour productivity and the rate of growth. The pharmaceutical industry is particularly important to the UK here – its level of labour productivity is very high, so even though it only constitutes a relatively small part of the overall economy, shifts in its performance can have a material effect on the whole economy.

But recent years have seen a big fall in the rate of growth of labour productivity in the pharmaceutical industry [1]. Between 1999 and 2007, labour productivity in the pharmaceutical industry grew by 9.7% a year – this excellent performance made a material difference to the whole economy, contributing 0.11 percentage points to the pre-crisis economy’s total annual labour productivity growth of 2.8%. But between 2008 and 2015, labour productivity in pharma actually shrank by 11% a year, dragging down labour productivity growth in the whole economy.
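As a rough consistency check (a simplification of the full sectoral decomposition in the Riley et al. paper [1]), a sector’s contribution to aggregate productivity growth is approximately its share of the economy times its own productivity growth rate; the share implied by the figures above is around 1%.

```python
# Rough consistency check, a simplification of a proper sectoral decomposition:
# a sector's contribution to aggregate labour productivity growth is roughly its
# share of the economy times its own productivity growth. Only the two figures
# from the text are used; the implied share is inferred, not taken from the source.
pharma_growth = 0.097        # 9.7% a year, 1999-2007
contribution = 0.0011        # 0.11 percentage points a year
implied_share = contribution / pharma_growth
print(f"implied pharma share of the economy: {100 * implied_share:.1f}%")   # ~1.1%
```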

The origins of the pharmaceutical industry’s productivity problem – falling R&D productivity

Labour productivity gains arise from the introduction of new, high value, products and improved processes. In the pharmaceutical industry, new products are created by research and development (R&D), with their value being protected by patents.

R&D productivity expresses the efficiency with which R&D produces value through new products and processes. This can be difficult to quantify: a new drug is the product of perhaps 15 years of R&D, and for each successful drug produced many candidates fail. One simple measure is the number of new drugs produced for a given value of R&D; as the graph shows, on this measure R&D productivity has fallen substantially over the decades.


Exponentially falling R&D productivity in the pharmaceutical industry worldwide. Number of new molecules approved by FDA (pharma and biotech) per $bn global R&D spending. Plot after Scannell et al [2], with additional post-2012 data courtesy of Jack Scannell.
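To be clear about what the metric in the graph is, here is a sketch of the calculation with made-up numbers (only the roughly nine-year halving time reported by Scannell et al. [2] is taken from the literature): the number of new molecules approved per billion dollars of R&D, fitted to an exponential decline.

```python
# Sketch of the metric with made-up numbers: new drug approvals per $bn of R&D,
# fitted to an exponential decline. The roughly nine-year halving time is the
# figure reported by Scannell et al. [2]; the values below are illustrative only.
import numpy as np

years = np.array([1970, 1980, 1990, 2000, 2010])
drugs_per_bn = np.array([20.0, 9.0, 4.5, 2.0, 0.9])   # hypothetical values

slope, _ = np.polyfit(years, np.log(drugs_per_bn), 1)  # straight line in log space
halving_time = np.log(2) / -slope
print(f"halving time of R&D productivity: about {halving_time:.0f} years")  # ~9 years
```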

Falling R&D productivity explains falling labour productivity in pharmaceuticals, with a lag time that expresses the time it takes to develop and test new drugs. This will be exacerbated if the total volume of R&D falls as well, as it has begun to do in recent years.

The recent weak performance of the UK economy can be linked in part to its low overall R&D intensity, and this has been recognised by the government’s commitment to raise this to 2.4% of GDP. As I described in an earlier post – Making UK Research and Innovation work for the whole UK – R&D intensity varies strongly across the country, with these variations being correlated with regional economic performance. The commitment to raise the overall R&D intensity of the UK economy is welcome, but it will only deliver the hoped-for economic benefits if overall R&D productivity across all sectors can be maintained or increased.

Healthcare productivity – the pressure for improvements

The purpose of health-related research and development is not simply economic, however. We hope that research will improve people’s lives, reducing mortality and morbidity.

But we can’t avoid the economic dimension of healthcare either – the pressures on health service budgets are all too obvious in this time of continuing public austerity, so the idea that innovation – technological, social and organisational – can allow us to achieve the same or better healthcare outcomes for less money is compelling.

Healthcare productivity can be estimated by comparing inputs – labour, goods and services, and capital expenditure – with some measure of the amount of treatment delivered. This needs to be adjusted for improved quality of care – for example, from improved survival rates and measures of patient satisfaction. The ONS produces estimates of quality adjusted public service healthcare productivity, which show an average increase of 0.8% a year between 1995 and 2015.
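A toy version of that calculation, with made-up index numbers rather than the ONS data, shows how a quality-adjusted output index and an input index combine into an average annual productivity growth figure:

```python
# Toy illustration of the definition, with invented index numbers (not ONS data):
# quality-adjusted output relative to inputs, expressed as average annual growth.
output_index = 1.50   # quality-adjusted treatment delivered, 2015 relative to 1995
input_index = 1.28    # labour, goods and services, capital, 2015 relative to 1995
years = 20

productivity_index = output_index / input_index
annual_growth = productivity_index ** (1 / years) - 1
print(f"average healthcare productivity growth: {100 * annual_growth:.1f}% a year")  # ~0.8%
```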

The context for this continuous improvement in healthcare productivity is an even larger increase in demand for healthcare. For example, between 2003/4 and 2015/16 there was a sustained year-on-year rise in hospital admissions, driven by demographic changes – in particular, a 40% rise in the number of people aged 85 and over.

This demand pressure is likely to continue into the future, so without further increases in healthcare productivity, quality will suffer and costs will rise.

Labour productivity, R&D productivity, healthcare productivity – the vicious circle and how to break out of it

These three aspects of productivity are linked. Falling R&D productivity in pharmaceuticals has led to falling labour productivity in that industry. That in turn has made a material contribution to stagnant labour productivity across the whole economy. On the other hand, stagnant labour productivity in the whole economy has produced a government response of continuing austerity, putting pressure on health service budgets, and increasing the demand for improved healthcare productivity.

How can we break out of this trap? Improving the effectiveness and targeting of our R&D effort has to be central to this. Better R&D productivity will lead to improvements in labour productivity in pharmaceuticals, biotechnology and medical technology across the whole country, leading to sustained, geographically balanced economic growth. And if we do the right R&D to deliver improved healthcare productivity, that will lead to better health outcomes for everyone.

1. R. Riley, A. Rincon-Aznar, L. Samek, Below the Aggregate: A Sectoral Account of the UK Productivity Puzzle, ESCoE Discussion Paper 2018-06 (May 2018)
https://www.escoe.ac.uk/wp-content/uploads/2018/05/ESCoE-DP-2018-06.pdf

2. Scannell, J. W., Blanckley, A., Boldon, H., & Warrington, B. (2012). Diagnosing the decline in pharmaceutical R&D efficiency. Nature Reviews Drug Discovery, 11, 191–200. http://doi.org/10.1038/nrd3681

More on the biomedical bubble

A couple more pieces reacting to my report for NESTA, with James Wilsdon – The Biomedical Bubble: Why UK research and innovation needs a greater diversity of priorities, politics, places and people.

The climate change activist Alice Bell picks up on a renewable energy aspect to the theme of research prioritisation, asking on the Guardian’s blog Is UK science and innovation up for the climate challenge?. “The government has shaken up the UK research system. But fossil fuels, not low-carbon technologies, still seem to be in the driving seat.”

The Financial Times picked up the report; an opinion piece from its science correspondent Anjana Ahuja says Britain must stop inflating the biomedical bubble (subscription required). “The drugs sector receives funding out of all proportion to the results it delivers.”

The biomedical bubble

I have a new report out, written with science policy expert James Wilsdon for the innovation foundation NESTA, entitled The Biomedical Bubble: Why UK research and innovation needs a greater diversity of priorities, politics, places and people. Here’s a summary of the report:

Biomedical science and innovation has benefited from significant increases in public investment over the past 15 years. This builds on the remarkable strengths of the UK’s academic life sciences base and pharmaceutical industry. But continuing to prioritise the biomedical, in a period when government aims to boost research and development (R&D) spending to 2.4 per cent of GDP, risks unbalancing our innovation system, and is unlikely to deliver the economic benefits or improvements to health outcomes that society expects.

For too long, the pharmaceutical and biotechnology sectors have dominated policy thinking about translating research, but these sectors are in deep trouble, with R&D productivity plummeting and R&D investment falling. Meanwhile, much of the wider innovation needed for the NHS, public health and social care has been under-resourced. Greater emphasis needs to be given to the social, environmental, digital and behavioural determinants of health, and decisions about research priorities need to involve a greater diversity of perspectives, drawn from across the country. The creation of UK Research and Innovation (UKRI), which aims to bring a more strategic approach to funding and prioritisation, is the right moment to rethink this balance. This paper sets out why and how the UK needs to escape the biomedical bubble if it is to realise the economic, social and health potential of extra investment in R&D.

There are some shorter pieces discussing different aspects of our arguments:

In the Guardian Political Science blog: It’s time to burst the biomedical bubble in UK research

On the WonkHE website, arguing that building an industrial strategy around the pharma/biotech industry is a bet on the US healthcare system remaining unreformed:
Rethinking the life sciences strategy

On the ResearchProfessional website (subscription required), focussing on the task UKRI faces in balancing its portfolio:
Examine funding balance to pop ‘biomedical bubble’, UKRI told

A news piece about the report in the Times Higher:
UK’s biomedical research funding ‘bubble’ is ‘about to burst’

Bad Innovation: learning from the Theranos debacle

Earlier this month, Elizabeth Holmes, founder of the medical diagnostics company Theranos, was indicted on fraud and conspiracy charges. Just four years ago, Theranos was valued at $9 billion, and Holmes was being celebrated as one of Silicon Valley’s most significant innovators: not only the founder of one of the mythical unicorns, but, through the public value of her technology, a benefactor of humanity. How this astonishing story unfolded is the subject of a tremendous book by the journalist who first exposed the scandal, John Carreyrou. “Bad Blood” is a compelling read – but it’s also a cautionary tale, with some broader lessons about the shortcomings of Silicon Valley’s approach to innovation.

The story of Theranos

The story begins in 2003. Holmes had finished her first year as a chemical engineering student at Stanford. She was particularly influenced by one of her professors, Channing Robertson; she took his seminar on drug delivery devices, and worked in his lab in the summer. Inspired by this, she was determined to apply the principles of micro- and nano-technology to medical diagnostics, and wrote a patent application for a patch which would sample a patient’s blood, analyse it, use the information to determine the appropriate response, and release a controlled amount of the right drug. This closed loop system would combine diagnostics with therapy – hence Theranos (from “theranostic”).

Holmes dropped out of Stanford in her second year to pursue her idea, encouraged by Robertson. By the end of 2004, the company she had incorporated with one of Robertson’s PhD students, Shaunak Roy, had raised $6 million from angels and venture capitalists.

The nascent company soon decided that the original theranostic patch idea was too ambitious, and focused on diagnostics. Holmes focused on the idea of doing blood tests on very small volumes – the droplets of blood you get from a finger prick, rather than the larger volumes you get by drawing blood with a needle and syringe. It’s a great pitch for those scared of needles – but the true promise of the technology was much wider than this. Automatic units could be placed in patients’ homes, cutting out all the delay and inconvenience of having to go to the clinic for the blood draw, and then waiting for the results to come back. The units could be deployed in field situations – with the US Army in Iraq and Afghanistan – or in places suffering from epidemics, like ebola or zika. They could be used in drug trials to continuously monitor patient reactions and pick up side-effects quickly.

The potential seemed huge, and so were the revenue projections. By 2010, Holmes was ready to start rolling out the technology. She negotiated a major partnership with the pharmacy chain Walgreens, and the supermarket Safeway loaned the company $30 million with a view to opening a chain of “wellness centres”, built around the Theranos technology, in its stores. The US Army – in the powerful figure of General James Mattis – was seriously interested.

In 2013, the Walgreen collaboration was ready to go live; the company had paid Theranos a $100 million “innovation fee” and a $40 million loan on the basis of a 2013 launch. The elite advertising agency Chiat\Day, famous for their work with Apple, were engaged to polish the image of the company – and of Elizabeth Holmes. Investors piled in to a new funding round, at the end of which Theranos was valued at $9 billion – and Holmes was a paper billionaire.

What could go wrong? There turned out to be two flies in the ointment. Firstly, Theranos’s technology couldn’t do even half of what Holmes had been promising; secondly, even on the tests it could do, its results were unacceptably inaccurate. Carreyrou’s book is at its most compelling as he gives his own account of how he broke the story, in the face of deception, threats, and some very expensive lawyers. None of this would have come out without some very brave whistleblowers.

At what point did the necessary optimism about a yet-to-be developed technology turn first into self-delusion, and then into fraud? To answer this, we need to look at the technological side of the story.

The technology

As is clear from Carreyrou’s account, Theranos had always taken secrecy about its technology to the point of paranoia – and it was this secrecy that enabled the deception to continue for so long. There was certainly no question that they would be publishing anything about their methods and results in the open literature. But, from the insiders’ accounts in the book, we can trace the evolution of Theranos’s technical approach.

To go back to the beginning, we can get a sense of what was in Holmes’s mind at the outset from her first patent, originally filed in 2003. This patent – “Medical device for analyte monitoring and drug delivery” – is hugely broad, at times reading like a digest of everything that anybody at the time was thinking about in nanotechnology and diagnostics. But one can see the central claim – an array of silicon microneedles would penetrate the skin to extract the blood painlessly, this would be pumped through 100 µm wide microfluidic channels, combined with reagent solutions, and then tested for a variety of analytes through detecting their binding to molecules attached to surfaces. In Holmes’s original patent, the idea was that this information would be processed, and then used to initiate the injection of a drug back into the body. One example quoted was the antibiotic vancomycin, which has rather a narrow window of effectiveness before side effects become severe – the idea would be that the blood was continuously monitored for vancomycin levels, which would then be automatically topped up when necessary.

Holmes and Roy, having decided that the complete closed loop theranostic device was too ambitious, began work to develop a microfluidic device to take a very small sample of blood from a finger prick, route it through a network of tiny pipes, and subject it to a battery of scaled-down biochemical tests. This all seems doable in principle, but fraught with practical difficulties. After three years making some progress, Holmes seems to have decided that this approach wasn’t going to work in time, so in 2007 the company switched direction away from microfluidics, and Shaunak Roy parted from it amicably.

The new approach was based around a commercial robot they’d acquired, designed for the automatic dispensing of adhesives. The idea of basing their diagnostic technology on this “gluebot” is less odd than it might seem. There’s nothing wrong with borrowing bits of technology from other areas, and reliably glueing things together depends on precise, automated fluid handling, just as diagnostic analysis does. But what this did mean was that Theranos no longer aspired to be a microfluidics/nanotech firm, but instead was in the business of automating conventional laboratory testing. This is a fine thing to do, of course, but it’s an area with much more competition from existing firms, like Siemens. No longer could Theranos honestly claim to be developing a wholly new, disruptive technology. What’s not clear is whether its financial backers, or its board, were told enough or had enough technical background to understand this.

The resulting prototype was called Edison 1.0 – and it sort-of worked. It could only do one class of tests – immunoassays – it couldn’t run many of them at the same time, and its results were not reproducible or accurate enough for clinical use. To fill in the gaps between what they promised their proprietary technology could do and its actual capabilities, Theranos resorted to modifying a commercial analysis machine – the Siemens Advia 1800 – to be able to analyse smaller samples. This was essential to fulfil Theranos’s claimed USP of being able to analyse the drops of blood from pin-pricks, rather than the larger volumes taken for standard blood tests with a needle and syringe from a vein.

But these modifications presented their own difficulties. What they amounted to was simply diluting the small blood sample to make it go further – but of course this reduces the concentration of the molecules the analyses are looking for, often to below the range of sensitivity of the commercial instruments. And there remained a bigger question, one that hangs over the viability of the whole enterprise – can one take blood from a pin-prick that isn’t contaminated to an unknown degree by tissue fluid, cell debris and the like? Whatever the cause, it became clear that the test results Theranos were providing – to real patients, by this stage – were erratic and unreliable.
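The dilution problem is easy to see with a toy calculation (all the numbers here are invented for illustration): diluting the sample raises the effective limit of detection by the dilution factor, so an analyte that is comfortably measurable in whole blood can fall below the instrument’s range once diluted.

```python
# Toy arithmetic, with invented numbers: dilution raises the effective limit of
# detection of the downstream instrument by the dilution factor.
analyte_in_blood = 2.0   # hypothetical concentration in the undiluted sample (ng/mL)
instrument_lod = 1.0     # hypothetical limit of detection of the commercial analyser (ng/mL)
dilution_factor = 3      # stretching a finger-prick sample to the volume the machine needs

diluted_concentration = analyte_in_blood / dilution_factor
print("measurable" if diluted_concentration >= instrument_lod else "below the detection limit")
```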

Theranos was working on a next generation analyser – the so-called miniLab – with the goal of miniaturising the existing lab testing methods to make a very versatile analyser. This project never came to fruition. Again, it was unquestionably an avenue worth pursuing. But Theranos wasn’t alone in this venture, and it’s difficult to see what special capabilities they brought that rivals with more experience and a longer track record in this area didn’t have already. Other portable analysers exist already (for example, the Piccolo Xpress), and the miniaturised technologies they would use were already in the market-place (for example, Theranos were studying the excellent miniaturised IR and UV spectrophotometers made by Ocean Optics – used in my own research group). In any case, events had overtaken Theranos before they could make progress with this new device.

Counting the cost and learning the lessons

What was the cost of this debacle? There was a human cost, not fully quantified, in terms of patients being given unreliable test results, which surely led to wrong diagnoses and missed or inappropriate treatments. And there is the opportunity cost – Theranos spent around $900 million, some of this on technology development, but rather too much on fees for lawyers and advertising agencies. But I suspect the biggest cost was the effect Theranos had in slowing down and squeezing out innovation in an area that genuinely did have the potential to make a big difference to healthcare.

It’s difficult to read this story without starting to think that something is very wrong with intellectual property law in the United States. The original Theranos patent was astonishingly broad, and given the amount of money they spent on lawyers, there can be no doubt that other potential innovators were dissuaded from entering this field. IP law distinguishes between the conception of a new invention and its necessary “reduction to practice”. Reduction to practice can be by the testing of a prototype, but it can also be by the description of the invention in enough detail that it can be reproduced by another worker “skilled in the art”. Interpretation of “reduction to practice” seems to have become far too loose. Rather than giving the right to an inventor to benefit from a time-limited monopoly on an invention they’ve already got to work, patent law currently seems to allow the well-lawyered to carve out entire areas of potential innovation for their exclusive investigation.

I’m also struck, from Carreyrou’s account, by the importance of personal contacts in the establishment of Theranos. We might think that Silicon Valley is the epitome of American meritocracy, but key steps in funding were enabled by who was friends with whom and by family relationships. It’s obvious that far too much was taken on trust, and far too little actual technical due diligence was carried out.

Carreyrou rightly stresses just how wrong it was to apply the Silicon Valley “fake it till you make it” philosophy to a medical technology company, where what follows from the fakery isn’t just irritation at buggy software, but life-and-death decisions about people’s health. I’d add to this a lesson I’ve written about before – doing innovation in the physical and biological realms is fundamentally more difficult, expensive and time-consuming than innovating in the digital world of pure information, and if you rely on experience in the digital world to form your expectations about innovation in the physical world, you’re likely to come unstuck.

Above all, Theranos was built on gullibility and credulousness – optimism about the inevitability of technological progress, faith in the eminence of the famous former statesmen who formed the Theranos board, and a cult of personality around Elizabeth Holmes – a cult that was carefully, deliberately and expensively fostered by Holmes herself. Magazine covers and TED talks don’t by themselves make a great innovator.

But in one important sense, Holmes was convincing. The availability of cheap, accessible, and reliable diagnostic tests would make a big difference to health outcomes across the world. The biggest tragedy is that her actions have set back that cause by many years.

Theresa May on Science and Industrial Strategy

It’s not often that a UK Prime Minister devotes a whole speech to science, so Theresa May’s speech on Monday – PM speech on science and modern Industrial Strategy – was a significant signal by itself. It’s obvious that Brexit consumes a huge amount of political and government bandwidth at the moment, so it’s interesting that the Prime Minister wants to associate herself with the science and industrial strategy agenda, perhaps to emphasise that Brexit is not completely all-consuming, and there is some space left for domestic policy initiatives.

Beyond the signal, though, I thought there was quite a lot of substance as well. The speech comes at an important moment – the UK government’s science and innovation funding agencies have just been through a major reorganisation, with seven discipline-based research councils, the innovation agency Innovate UK, and a body responsible for the research environment in English universities, coming together in a single organisation, UK Research and Innovation. UKRI began life on 1 April, and it launched its first major strategy document on 14th May. This document isn’t a strategy, though – it’s a “Strategic Prospectus” – a statement of some fundamental principles, together with a commitment to develop a strategy over the months to come. The PM’s speech didn’t mention UKRI at all – somewhat curiously, I thought. Nonetheless, the speech is an important statement of the direction UKRI will be expected to take.

Perhaps the most important commitment was the PM’s stress on the 2.4% R&D intensity target, together with a recognition that this needs a substantial increase in private sector funding, catalysed by state investment. I’ve already written about how stretching this target will be. My estimate is that it will need a £7 billion increase in government spending by 2027 – going well beyond the £2.2 billion increase to 2021 already announced – and a £14 billion increase in private sector spending. These are big numbers; to achieve them will require a significant shifting of the UK’s economic landscape (for the better, I believe). My suspicion is that the attempt to achieve them will have a very big influence on the way UKRI operates, perhaps a bigger influence than people yet realise.
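A rough, back-of-envelope version of that arithmetic is below; the GDP level, the assumed growth to 2027 and the one-third public to two-thirds private split are my own assumptions, but they show how figures of roughly £7 billion public and £14 billion private emerge from the 2.4% target.

```python
# Back-of-envelope version of the 2.4% arithmetic. The GDP figures, assumed growth
# and the one-third public / two-thirds private split are my assumptions, not
# official projections.
gdp_now = 2050             # UK GDP, GBP billion, approximate
rd_intensity_now = 0.017
rd_intensity_target = 0.024
gdp_2027 = gdp_now * 1.15  # assumed modest growth by 2027

extra_rd = rd_intensity_target * gdp_2027 - rd_intensity_now * gdp_now
public_share = 1 / 3       # public : private roughly 1 : 2
print(f"extra R&D needed: about GBP {extra_rd:.0f}bn a year "
      f"(public ~{extra_rd * public_share:.0f}bn, private ~{extra_rd * (1 - public_share):.0f}bn)")
```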

Another important departure from science and innovation policy up to now was the insistence that everywhere in the UK should benefit from it – “backing businesses and building infrastructure not just in London and the South East but across every part of our country”. This needs to include those cities that were pioneers of innovation in the 19th century, but which since have suffered the effects of deindustrialisation. This of course speaks to the profound regional imbalances in R&D expenditure that I highlighted in my recent blogpost Making UKRI work for the whole UK.

Can old cities find new economic tricks? The PM’s speech pointed to some examples – from jute to video games in Dundee, fish to offshore wind in Hull, coal to compound semiconductors in Cardiff. Cities and regions do need to specialise. The PM’s speech didn’t mention any specific mechanisms for encouraging and supporting this, but on the same day UKRI announced a new “place-based” funding competition – the “Strength in Places Fund”, which represents a valuable first step. The aim is to develop interventions on a scale to make a material difference to local and regional economies, and it’s great news that the very distinguished economist Dame Kate Barker – who chaired the Industrial Strategy Commission – has been persuaded to chair the assessment panel.

One interesting section of the speech went some way to sharpening up thinking about “Grand Challenges” and “Missions” as organising principles for research. Last November’s Industrial Strategy White Paper introduced four “Grand Challenges” for the UK – on AI and data, the future of mobility, clean growth, and the ageing society. Within these “Grand Challenges”, the intention is to define more specific “missions”. These are more concrete than the Grand Challenges, with some hard targets. Politicians like to announce targets, because they make good headlines – artificial intelligence is “to help prevent 22,000 cancer deaths a year by 2033”, according to the BBC’s (somewhat inaccurate) trail of the speech. I think targets are helpful because they focus policy-makers’ minds on scale. It’s been a besetting sin of UK innovation policy to identify the right things to do, but then to execute them on a scale wholly inadequate to the task.

The PM’s speech announced four such “missions”, and I thought these examples were quite good ones. Two of these missions – around better diagnostics, and support for independent living for older people – are good examples of putting innovation for health and social care at the centre of industrial strategy, as the Industrial Strategy Commission recommended in its final report. Some questions remain.

One simple question that there needs to be an answer for when we talk about “mission-led innovation” is this – who will be the customer for the products that the innovation produces? The government – or the NHS – is the obvious answer to this question in the context of health and social care – but obvious doesn’t mean straightforward. We’ve seen many years of people (including me) saying we should use government procurement much more to drive innovation, to depressingly little effect. What this highlights is that the problems and barriers are as much organisational and cultural as technological. I’m entirely prepared to believe that machine learning techniques could potentially be very helpful in speeding up cancer diagnoses, but realising their benefits will take changes in organisation and working practices.

The other outstanding issue is how these “Grand Challenges” and “missions” are chosen. Is it going to be on the basis of which businessman last caught the ear of a minister, or what a SPAD read in the back of the Economist this week? On the contrary, one would hope – these decisions would be made by aggregating the collective intelligence of a very diverse range of people. This should include scientists, technologists and people from industry, but also the people who see the problems that need to be overcome in their everyday working lives. The UKRI Strategic Prospectus talks warmly about the need for public engagement, including the need to “listen and respond to a diverse range of views and aspirations about what people want research and innovation to do for them”. This needs to be converted into real mechanisms for including these voices in the formulation of these grand challenges and missions.

The Prime Minister may have wanted to give a speech about science to get away from Brexit for a few minutes, but of course there’s no escape, and the consequences of Brexit for our science and innovation system are uncertain and likely to be serious. So it was important and welcome that the PM – particularly this Prime Minister, a former Home Secretary – spelt out the importance of international collaboration, the huge contribution of the many overseas scientists who have chosen to base their careers and lives in the UK, and the value that overseas students bring to our universities. But warm words are not enough, and there needs to be a change in climate and culture in the Home Office on this issue.

One place where we have heard many warm words, but as yet little concrete action, has been in the question of the future relationship between the UK and the EU’s research and innovation programmes. Here it was enormously welcome to hear the Prime Minister state unequivocally that the UK wishes to be fully associated with the successor to Horizon 2020, and is prepared to pay for that. Stating a wish doesn’t make it happen, of course, but it is huge progress to hear that this is now a negotiating goal for the UK government. This is one goal that should be politically unproblematic, and should benefit both sides, so we must hope it can be realised.

The Second Coming of UK Industrial Strategy

I have a longish piece in the Winter issue of the US science policy journal “Issues in Science and Technology”, which aims to place our current debates about industrial strategy in the UK in a longer historical context. This is now online here: The Second Coming of UK Industrial Strategy. Here is the introduction to the article:

The United Kingdom dismantled industrial policies in the 1980s; today it must rebuild them to create a social-industrial complex.

Industrial strategy, as a strand of economic management, was killed forever by the turn to market liberalism in the 1980s. At least, that’s how it seemed in the United Kingdom, where the government of Margaret Thatcher regarded industrial strategy as a central part of the failed post-war consensus that its mission was to overturn. The rhetoric was about uncompetitive industries producing poor-quality products, kept afloat by oceans of taxpayers’ cash. The British automobile industry was the leading exhibit, not at all implausibly, for those of us who remember those dreadful vehicles, perhaps most notoriously exemplified by the Austin Allegro.

Meanwhile, such things as the Anglo-French supersonic passenger aircraft Concorde and the Advanced Gas-cooled Reactor program (the flagship of the state-controlled and -owned civil nuclear industry) were subjected to serious academic critique and deemed technical successes but economic disasters. They exemplified, it was argued, the outcomes of technical overreach in the absence of market discipline. With these grim examples in mind, over the next three decades the British state consciously withdrew from direct sponsorship of technological innovation.

In this new consensus, which coincided with a rapid shift in the shape of the British economy away from manufacturing and toward services, technological innovation was to be left to the market. The role of the state was to support “basic science,” carried out largely in academic contexts. Rather than an industrial strategy, there was a science policy. This focused on the supply side—given a strong academic research base, a supply of trained people, and some support for technology transfer, good science, it was thought, would translate automatically into economic growth and prosperity.

And yet today, the term industrial strategy has once again become speakable. The current Conservative government has published a white paper—a major policy statement—on industrial strategy, and the opposition Labour Party presses it to go further and faster.

This new mood has been a while developing. It began with the 2007-8 financial crisis. The economic recovery following that crisis has been the slowest in a century; a decade on, with historically low productivity growth, stagnant wage growth, and no change to profound regional economic inequalities, coupled with souring politics and the dislocation of the United Kingdom’s withdrawal from the European Union, many people now sense that the UK economic model is broken.

Given this picture, several questions are worth asking. How did we get here? How have views about industrial strategy and science and innovation policy changed, and to what effect? Going forward, what might a modern UK industrial strategy look like? And what might other industrialized nations experiencing similar political and economic challenges learn from these experiences?

Read the rest of the article here.

Making UK Research and Innovation work for the whole UK

The first of April saw the formal launch of UK Research and Innovation, the new body which will be responsible for the bulk of public science and innovation funding in the UK. All seven research councils, the innovation funding agency InnovateUK, and the research arm of the body formerly responsible for university funding in England, to be renamed Research England, have been folded into this single body, with a budget of more than £6 billion a year.

Expectations for this new body are very high. Its formation has been linked to the government’s decision to increase research funding substantially, with extra funding rising to £2.3 billion by 2021/2. This new money is explicitly linked to the need to increase the productivity of the UK economy. The government has also committed to a 10-year target of raising the overall R&D intensity of the economy from its current 1.7% to 2.4% of GDP. As I’ve discussed earlier, this is a very challenging target that will require a major change in behaviour from the UK’s private sector, as well as substantial increases in public sector R&D. UKRI has the task of ensuring that the extra public sector investment is made in ways that maximise increases in private sector R&D. All of this, of course, takes place against the background of Brexit, and the need for the UK to rebuild its business model.

There’s one factor that, unless urgently addressed, will hold UKRI back from its mission of making a significant difference to the UK’s overall productivity problems and raising the economy’s R&D intensity. That is the extraordinary and unhealthy concentration of publicly funded R&D in a relatively small part of the country – London, the Southeast, and East Anglia. Entirely uncoincidentally, these are the parts of the country with the most productive economies already. As the Industrial Strategy Commission (of which I was a member) stressed in its final report last year, unless the UK fixes its gross regional economic disparities it will never be able to prosper.

No-one has done more to bring attention to the UK’s unbalanced R&D geography than Tom Forth; everyone should read his recent article on how we should use the increase in funding to redress the balance. One key point that Tom has stressed is the geographical relationship – or lack of it – between public and private sector research. Industry spends roughly twice as much as the government on research, so reaching the 2.4% R&D intensity target will not be possible without major increases in private sector spending – roughly £14 billion a year, by my estimate. Yet classical economics tells us that firms will always underinvest in R&D, because they are unable to capture the full economic benefit of their spending, much of which “spills over” to benefit the rest of the economy. That’s the logic which convinces even HM Treasury that the state ought to support R&D.

Yet Tom Forth’s work – especially his plots of public against private R&D spending – shows how badly government spending on R&D is matched to the demand of industry, as measured by where industry actually invests its own money.

R&D funding in the business and non-business sectors (government, higher education and charity), by NUTS2 regions. 2014 figures, by sector of performance, from Eurostat.

My first graph shows how unbalanced the UK’s research landscape is. This shows R&D spending – both private and public – broken down sub-regionally. The first thing that’s obvious is the dominance of three sub-regions – Oxford and its environs, Cambridge and its sub-region, and (part of) London – inner West London. These three subregions – out of a total of 40 in the UK – account for 31% of total UK R&D spending, and an even higher fraction of spending in the government, HE and charity sectors – 41%.

There is a striking difference between these three sub-regions, though, and that is their split between public and private R&D (taking public R&D here to mean R&D carried out in government-owned laboratories, universities and the non-profit sector). Overall in the UK, the value of business R&D stands at 1.89 times the value of public R&D. East Anglia – dominated by Cambridge – does even better than this, with private sector R&D coming in at more than twice public sector R&D. This is a science-based cluster that works, with high levels of public R&D being rewarded at above average rates by private sector R&D.

The Oxford, Berks and Bucks sub-region does slightly less well in converting its very large public investment into private R&D, with a multiplier a little below the national average at 1.72.

The real anomaly, though, is Inner London (West). This single region receives by far the largest amount of public R&D spending of any single region – nearly 20% of the entire public funding for R&D. Yet the rate of return on this, in terms of private sector R&D, is only 0.46, far below the national average.

Working down the list, we find 9 sub-regions with respectable levels of total R&D. These include Bristol, Hampshire, Derby, Bedford, Surrey and the West Midlands, Worcestershire and Cheshire. With the exception of East Scotland, all these sub-regions are characterised by above average ratios of private to public sector R&D. Two sub-regions stand out for significant private sector R&D and almost no public sector activity – Cheshire, with its historic concentration of chemical and pharmaceutical industries, and Warwickshire, Herefordshire and Worcestershire.

Then we come to the long tail, with much lower investment in R&D, either public or private. All of Wales, Northern Ireland, the North of England, Southwest England beyond Bristol, outer east and southeast London and Kent, Lincolnshire – this is pretty much a map of left-behind Britain.

This brings us to the question: why should we worry about regional disparities in R&D? Let’s put aside for the moment any question of fairness – what it comes back to is productivity. If we plot R&D intensity against regional GDP per capita, we find quite a respectable correlation. (Slightly to my surprise, the correlation is strongest if you plot total R&D – both public and private – rather than just business R&D. I’ve omitted London entirely as it is such an outlier on both measures.)

Sub-regional GDP per capita against total R&D (public and private) per capita. 2014 data, from Eurostat.
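For anyone wanting to reproduce this kind of plot, the calculation is straightforward; the sketch below uses hypothetical file and column names standing in for an extract of the Eurostat NUTS2 series shown above.

```python
# Sketch of the calculation behind the plot. The file and column names are
# hypothetical stand-ins for an extract of the Eurostat NUTS2 data referred to above.
import pandas as pd

df = pd.read_csv("nuts2_rd_and_gdp_2014.csv")      # hypothetical extract, one row per sub-region
df = df[df["region"] != "Inner London - West"]     # omit the London outlier, as in the text

total_rd_per_head = df["business_rd_per_head"] + df["public_rd_per_head"]
print("correlation with GDP per head:", round(total_rd_per_head.corr(df["gdp_per_head"]), 2))
```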

Of course, the relationship is not a straightforward one: higher R&D intensities are associated with the presence of high-productivity firms at the technology frontier, there may be a more general effect of higher skill levels associated with regions with higher R&D, and there may well be other factors, direct and indirect, at play too. We can also see outliers. NE Scotland has very high productivity but mid-level R&D investment, no doubt because of the importance of the oil industry, while East Anglia has relatively low productivity given the very high levels of R&D. I suspect the latter is associated with the relatively concentrated nature of the Cambridge cluster, whose effect doesn’t really penetrate far into a large and relatively poor rural and coastal hinterland.

As the Industrial Strategy Commission argued, just as a matter of arithmetic averages, we cannot expect the productivity of the country as a whole to grow if such a large fraction is structurally lagging. So what should we do? Different places demand different approaches, and Tom Forth has some suggestions.

On London, I agree with Tom – the over-investment of public R&D in London is a grotesque misallocation of public resources, but the realities of political economy mean it’s likely to stay that way. At the least, we certainly need to stop building new institutions there. One thing the data does highlight, though, is how concentrated even within London the investment is, so initiatives like UCL East, aiming to spread the benefits of the investment to less favoured parts of the capital, are to be welcomed.

The eight sub-regions with high private sector R&D and public underinvestment offer a clear rationale for further public investment. One needs to be aware of the arbitrary nature of these sub-regional boundaries – the strength of the Cheshire cluster is a very good reason to have a strong Chemistry department in the University of Liverpool, for example – but these regions are likely to provide some very strong investment cases for following the private sector money to support existing clusters.

I somewhat disagree with Tom on the case of Oxford and Cambridge (and here of course my biases may be showing). Tom believes that funding should be frozen in these places until they agree to allow more growth. As far as Cambridge is concerned, I’m not sure this is quite right – there seems to be a huge amount of growth happening at the moment, with significant numbers of new-build apartments going up in the city, and a string of new suburbs like Eddington being built around its fringes, complete with new schools and supermarkets. The issue here is the completely inadequate transport infrastructure to get into and around the city; it’s these infrastructure problems that are stopping the further growth of what is the UK’s most successful science-based cluster, and are stopping the spread of its benefits to its less prosperous hinterland.

But this leaves the bigger problem – how can public R&D investments be used to raise productivity and economic growth in what is the majority of the country, where levels of R&D investment, both public and private, are far too small? It’s easy to imagine arbitrary and ill-thought through investments imposed on regions with no understanding of the potential their economic history and current industrial base will support. But for a counter-example, in my own city, Sheffield, I think there has been a well thought through policy based on promoting advanced manufacturing through investments in translational research and skills, which is bearing fruit through the twin routes of the attraction of inward investment from firms at the technological frontier and improving the performance of the existing business base.

Where does this leave the new organisation, UK Research and Innovation, whose job should be to rectify these issues? It’s not going to be easy, given that the cultures of the organisations UKRI is being built from have been positively opposed to place-based policy. The research councils have focused on “research excellence” as the sole criterion for funding, while the policy of InnovateUK has been to be led by industry. But “place-blind” policies inevitably lead to research concentration, through the well-known Matthew effect. I know that EPSRC at least has been seriously grappling with these issues in the last couple of years, while there is real expertise in Research England on the economic potential of universities in their cities and regions, so there is something to build on.

The early actions of the new organisation are not encouraging. The move back from Swindon to London is a retrograde step, suggesting that UKRI’s first priority is keeping ministers happy and keeping ahead of Whitehall office politics. When the Technology Strategy Board (the predecessor organisation to InnovateUK) was first set up as a free-standing funding agency, it was moved out of London to Swindon to emphasise that it served business, not Whitehall, and it was the better for it.

Moreover, the main board of UKRI is conspicuous for its lack of geographical diversity – out of 16 members, just one – Aberdeen’s Ian Diamond – is from outside London and the southeast. Allowing this situation to arise was a telling and worrying oversight.

One urgent concrete step UKRI should take is to create a high-level advisory board to hold its feet to the fire on a plan to rebalance R&D expenditure across the country. UKRI is to produce a strategy within the next month or two, and it is to be hoped that addressing the regional balance issue forms a central part of this strategy. This board should include senior representatives from the Devolved Administrations, and economic development leads from the metro mayors’ offices and combined authorities in the regions of England. At an operational level, UKRI needs to get its staff out of their London home, perhaps with regional specialists seconded to economic development units in the regions and nations.

The stakes here are high. UKRI has been set up with great expectations – the substantial injection of extra research spending and the 2.4% R&D target are signals that the government expects UKRI to deliver. If it does not produce tangible, positive effects on the wider economy – across the whole of the UK – UKRI will rightfully be judged to have failed – it will have failed the country, and it will have failed UK science. Let’s hope it can rise to the challenge.