The UK’s post-financial-crisis productivity stagnation finally hit the headlines this month. Before the financial crisis, productivity grew at a steady 2.2% a year; since 2009, growth has averaged only 0.3%. The Office for Budget Responsibility, in common with other economic forecasters, has confidently predicted the return of 2.2% growth every year since 2010, and every year it has been disappointed. This year, the OBR has finally faced up to reality – in its 2017 Forecast Evaluation Report, it highlights the failure of productivity growth to recover. The political consequences are severe – lower forecast growth means that there is less scope to relax austerity in public spending, and there is little hope that the current unprecedented stagnation in wages will end.
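To get a sense of the scale involved, we can compound the two growth rates quoted above. The rates (2.2% and 0.3%) are from the text; the eight-year window, 2009 to 2017, is my assumption about the period being compared:

```python
# Rough sketch of the compounded productivity shortfall: the pre-crisis
# trend (2.2% a year) versus what was actually achieved (0.3% a year),
# compounded over an assumed eight-year window (2009-2017).
trend_rate, actual_rate, years = 0.022, 0.003, 8

trend = (1 + trend_rate) ** years    # where the pre-crisis trend pointed
actual = (1 + actual_rate) ** years  # roughly where productivity ended up

shortfall = trend / actual - 1
print(f"Productivity is ~{shortfall:.0%} below the pre-crisis trend line")
```

On these assumptions, productivity ends up roughly 16% below where the pre-crisis trend line pointed – which is why repeatedly forecasting a return to trend, and repeatedly being wrong, matters so much for the public finances.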
Are the economists to blame for not seeing this coming? Aditya Chakrabortty thinks so, writing in a recent Guardian article: “A few days ago, the officials paid by the British public to make sure the chancellor’s maths add up admitted they had got their sums badly wrong…. The OBR assumed that post-crash Britain would return to normal and that normal meant Britain’s bubble economy in the mid-2000s. This belief has been rife across our economic establishment.”
The Oxford economist Simon Wren-Lewis has come to the defence of his profession. Explaining the OBR’s position, he writes “Until the GFC, macro forecasters in the UK had not had to think about technical progress and how it became embodied in improvements in labour productivity, because the trend seemed remarkably stable. So when UK productivity growth appeared to come to a halt after the GFC, forecasters were largely in the dark.”
I think this is enormously unconvincing. Economists are unanimous about the importance of productivity growth as the key driver of the economy, and agree that technological progress (sufficiently widely defined) is the key source of that productivity growth. Why, then, should macro forecasters not feel the need to think about technical progress? As a general point, I think that (many) economists should pay much more attention both to the institutions in which innovation takes place (for example, see my critique of Robert Gordon’s book) and to the particular features of the defining technologies of the moment (for example, the post-2004 slowdown in the rate of growth of computer power).
The specific argument here is that the steadiness of the productivity growth trend before the crisis justified the assumption that this trend would be resumed. But this assumption only holds if there was no reason to think anything fundamental in the economy had changed. It should have been clear, though, that the UK economy had indeed changed in the years running up to 2007, and that these changes were in a direction that should have at least raised questions about the sustainability of the pre-crisis productivity trend.
These changes in the economy – summed up as a move to greater financialisation – were what caused the crisis in the first place. But, together with broader ideological shifts connected with the turn to market liberalism, they also undermined the capacity of the economy to innovate.
Our current productivity stagnation undoubtedly has more than one cause. Simon Wren-Lewis, in his discussion of the problem, has focused on the effect of bad macroeconomic policy. It seems entirely plausible that bad policy has made the short-term hit to growth worse than it needed to be. But a decade on from the crisis, we’re not looking at a short-term hit anymore – stagnation is the new normal. My 2016 paper “Innovation, research and the UK’s productivity crisis” discusses in detail the deeper causes of the problem.
One important aspect is the declining research and development intensity of the UK economy, which fell from more than 2% of GDP in the early 1980s to a low point of 1.55% in 2004. This was at a time when other countries – particularly the fast-developing countries of the Far East – were significantly increasing their R&D intensities. The decline was particularly striking in business R&D and the applied research carried out in government laboratories; for details see my own 2013 paper “The UK’s innovation deficit and how to repair it”.
What should have made this change particularly obvious is that it was, at least in part, the result of conscious policy. The historian of science Jon Agar wrote about Margaret Thatcher’s science policy in a recent article, “The curious history of curiosity driven research”. Thatcher and her advisors believed that the government should not be in the business of funding near-market research, and that if the state stepped back from these activities, private industry would step up and fill the gap: “The critical point was that Guise [Thatcher’s science policy advisor] and Thatcher regarded state intervention as deeply undesirable, and this included public funding for near-market research. The ideological desire to remove the state’s role from funding much applied research was the obverse of the new enthusiasm for ‘curiosity-driven research’.”
But the state’s withdrawal from applied research coincided with a new emphasis on “shareholder value” in public companies, which led industry to cut back on long-term investments with uncertain returns, such as R&D.
Much of this outcome was foreseeable from economic theory, which predicts that private sector actors will underinvest in R&D because they cannot capture all of its benefits. Economists’ understanding of innovation and technological change is not yet good enough to quantify the effects of these developments. But, given that, as a result of policy changes, the UK had dismantled a good part of its infrastructure for innovation, a permanent decrease in its potential for productivity growth should not have been entirely unexpected.
The Office for Budget Responsibility’s Chart of Despond. From the press conference slides for the October 2017 Forecast Evaluation Report.