This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”
I want to focus on the idea of scientific productivity: how it is defined, how we can measure it, whether it is declining – and, if it is, what we can do about it.
The output of science increases exponentially, by some measures…
…but what do we get back from that? What is the productivity of the scientific enterprise – some measure of the output of science per unit of input?
It depends on what we think the output of science is, of course.
We could be talking of some measure of the new science being produced and its impact within the scientific community.
But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?
There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists, that we have seen the best of economic growth and are living in a new age of stagnation.
Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling, which brought us exponential growth in computing power in the 1980s and 1990s, started to level off around 2004 and has since slowed to a crawl, despite continuing growth in the resources devoted to it.
If we measure the productivity of R&D in the pharmaceutical industry as the number of new drugs produced per billion dollars of expenditure, that productivity has been falling exponentially for decades.
What of the impact of R&D expenditure on the wider economy? We expect more R&D to lead to more innovation, and more innovation to lead to economic growth. But across the developed world – and worse in the UK than in its competitors (apart from Italy) – total factor productivity, regarded as a measure of innovation in its widest sense, has stalled since the global financial crisis. Stalling productivity growth has led to stalling wage growth; this feeds directly into people’s living standards, almost certainly contributing to the sour political times we live in.
We don’t just expect science and technology to make us richer – we hope that it helps us to lead longer and healthier lives. And here too progress has been stalling.
So I think there is a case that the productivity of science at the most macro level – if we measure it in terms of economic outcomes and some measures of well-being – is faltering.
How can the choices that funders, institutions, and individual scientists make affect this? That, I would argue, needs to be a central theme of research on research.
The first thing to recognise is that collectively, choices are being made, even if there’s no individual mastermind in charge of the whole enterprise.
Here are two examples of choices that the UK has made.
Firstly, we’ve seen a growing primacy of health as the goal of publicly funded research. As we’ve seen, it’s not obvious that this emphasis has yielded the desired results in better health outcomes.
Secondly, we have chosen to concentrate research geographically – in London, Oxford and Cambridge. These are the most productive parts of a country which is enormously regionally unbalanced in terms of its economic performance. At the very least we can say that this concentration of research doesn’t help the left-behind regions catch up economically. I’d go further and say that the lack of diversity in where publicly funded science is done is actually a challenge both to its legitimacy and its effectiveness.
I think the question of what science is for becomes, in difficult times, more and more challenging.
The discourse of “grand challenges” and “missions” becomes more pressing. And who could disagree with the idea that decarbonising our energy systems, allowing everyone to have long and healthy lives, and spreading the economic benefits of science as widely as possible – both between nations and within them – should be, if not the only reason, then at least a big part of the reason why we – as funding bodies and as society more widely – support and fund science?
But we do need to make sure that the priorities we select and the choices we make are the ones that lead most effectively to the delivery of these missions.
In the piece of work James Wilsdon and I did last year – “The Biomedical Bubble” – we began to ask some questions about how effective we are at setting research priorities which translate into the best outcomes in one particular area – health related research. This was undoubtedly a preliminary effort, but for us it exemplified what should be an important strand of the Research on Research agenda.
How do we make choices in a way that helps the scientific enterprise most effectively meet the expectations society puts onto it? I believe the Research on Research agenda needs to embrace these wider questions. We need to make explicit what outcomes we expect from science, we need to make explicit the choices we make, and we need to define the many dimensions of scientific productivity so we can do our best to improve them.