The white heat of technology vs the cronut economy: two views on the productivity slowdown

A review of two books on innovation:

  • Windows of Opportunity: how nations create wealth, by David Sainsbury
  • Fully Grown: why a stagnant economy is a sign of success, by Dietrich Vollrath

    As I write, the world economy is in a medically induced coma, as governments struggle to deal with the effects of the Covid-19 pandemic. But not everything was rosy in the developed world’s economies before the pandemic; the long-term picture was one of declining labour productivity growth leading to stagnating living standards. Even after the pandemic has passed, these problems will remain. These two books highlight the problem of falling productivity, but take diametrically opposing views about what’s caused it, and indeed on whether it is a problem at all.

    Where does productivity growth come from? An obvious answer is the development of new technologies. The late medieval invention of the blast furnace increased the amount of iron a man could produce a day by about a factor of 10. In the 18th century Richard Arkwright invented the water frame, and a single machine in his factory could do the work of tens or hundreds of spinners of yarn working at home. More recently, we’ve seen the work of scores of clerks, calculators and typists being replaced by inexpensive computers.

    But Dietrich Vollrath cautions us against equating productivity growth with technology: “From the perspective of economic growth, the word technology doesn’t mean anything. There is productivity growth, and that’s it.” At the centre of Vollrath’s book is an eloquent exposition of what’s become the mainstream economic theory of growth, originating with the work of Robert Solow, leading to the counterintuitive, but essentially comforting, conclusion that the slowdown in productivity we are living through is a sign of success, not failure.

    Vollrath’s book is a pleasure to read. It contains the clearest explanations I’ve ever read of the central concepts of growth accounting, such as what’s meant by “constant returns to scale”, and the significance of the Solow residual. His highlighting of the effect of demographic changes on productivity growth in the USA is illuminating and convincing (though of course this is USA-centred and other countries will have different experiences). Yet I think the book is too quick to dismiss the possibility that the slowdown in productivity growth we’ve seen in developed countries across the world is related to a real slowdown in the rate of technological progress.
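The growth-accounting framework Vollrath explains so clearly can be stated compactly. This is the standard textbook form (my summary, not a quotation from the book): assume a Cobb-Douglas production function with constant returns to scale, so that doubling both capital and labour exactly doubles output.

```latex
% Cobb-Douglas production with constant returns to scale:
% Y is output, K capital, L labour, A total factor productivity.
Y = A\,K^{\alpha} L^{1-\alpha}

% Taking logs and differentiating with respect to time gives the
% growth-accounting decomposition:
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}

% The Solow residual is total factor productivity growth -- whatever is
% left over once measured capital and labour growth are accounted for:
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L}
```

The residual is measured, not observed directly, which is why Vollrath can insist that it is simply “productivity growth” rather than “technology”: anything that raises output for given inputs ends up in it.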

    David Sainsbury, unlike Dietrich Vollrath, is not an academic economist. As a former UK Science Minister, he looks to economic theory as a guide to policy, and he doesn’t like what he sees. To Sainsbury, the Solow theory, and its later elaborations, are bound to fail, because they do not appreciate the complexity and heterogeneity of production in the modern world – in these theories, “it doesn’t matter whether a firm is producing potato chips or microchips”. The aim of Sainsbury’s book is to “look more closely at why neoclassical growth theory has proved such a poor guide to policy makers seeking to increase the growth rates of their countries, and why it is of so little use in explaining the growth performance of countries”.

    For Sainsbury, the key to economic growth is to be found at the level of firms – “a nation’s standard of living depends on the ability of its firms to attain a high and rising level of value-added per capita in the industries in which they compete”. Firms can do this by innovating to develop process improvements which drive up their productivity compared to their rivals. Or they can identify new market opportunities that open up as a result of technological developments.

    These technological opportunities are uneven – at any given time, one industry may be seeing dramatic technological change (for example the ICT industry in the second half of the twentieth century), while other sectors remain relatively stagnant. The crucial trick is to identify those sectors where technological capabilities, together with matching market opportunities, open up the “windows of opportunity” of the book’s title.

    For Paul Romer and subsequent economists, what’s important for innovation is market power. As Vollrath discusses, market power is required for a firm to be able to innovate, because without market power the firm cannot charge the mark-ups it needs to compensate for the costs of innovation. “Without mark-ups there is no incentive to invest in R&D… Without R&D there are no non-rival innovations. And without non-rival innovations, there is no productivity growth.”

    In Vollrath’s account, market power can arise from government intervention, particularly through the assignment of intellectual property rights – the time-limited legal monopoly granted to companies to profit from their inventions. It can also arise through the difficulty of reproducing manufacturing processes, because of the tacit knowledge inherent in them. But too much market power can limit innovation, too. As patent law in the USA has changed, more and more trivial innovations have become patentable, while the existence of “patent troll” firms, whose entire business model consists of suing firms for infringing their patent portfolio, demonstrates that too-lax intellectual property rights can lead to unproductive rent-seeking as well as innovation. For Vollrath, permissive patenting and a weakening of competition law have probably pushed the USA beyond the point at which too much market power leads to diminishing returns.

    What about the role of the government? For Vollrath, the government’s main role is to tax and regulate, and in a rather unexciting chapter he concludes that there’s no real evidence that over-taxation or over-regulation has had a material effect on productivity growth either way. The role of the government in driving innovation is entirely omitted.

    But governments have a crucial role here. The US government spent $121 billion on R&D in 2017 – and that wasn’t just academic research in universities; $24 billion worth of R&D carried out in companies was directly paid for by the federal government. I’ve discussed before (in my post “The semiconductor industry and economic growth theory”) how crucial government investment was in creating the semiconductor industry.

    Unsurprisingly, Sainsbury, as a former science minister, has a lot more to say about the way government spending on R&D can underpin a wider innovation system, identifying the fall in federal research funding as a share of GDP as one factor underlying the USA’s declining innovation performance. The sections in his book on sectoral, national, regional and city innovation systems carry both the positives and negatives of being written by a policy insider – very well informed, but with an occasional sense of defending the writer’s record in office. Sainsbury’s chapter on skills, though, is outstanding, reflecting the attention he and his foundation have given this important topic since leaving his government role.

    The neglect of government’s role in R&D in Vollrath’s book is consistent with his wider tendency to downplay technological innovation as a source of productivity growth. Instead, at the centre of his argument, is the idea that the productivity slowdown has arisen largely as a result of an economic shift from manufacturing to services, and that this is a good thing. Manufacturing tends to have faster productivity growth than services, so if more of the economy moves towards services, then necessarily average productivity growth will fall. But, to Vollrath, this represents the outcomes of rational choices by consumers, the natural and positive outcome of a fully grown economy.

    To understand this switch, we need to look to the work of the economist William Baumol. As I discussed in a previous post (“A toy model of Baumol’s cost disease”), Baumol introduced the important (but misleadingly named) concept of “cost disease”. If an economy has two sectors, one with fast productivity growth (for example in manufacturing) and another with much slower or non-existent growth (typically in services), then the sector with slower productivity growth will become relatively more expensive. It’s plausible to suggest that people will respond to this, in the context of the general increase in prosperity resulting from higher productivity in manufacturing, by buying more services, despite their greater relative cost. Hence there’s a tendency for the economy to become more weighted (by the value of its output) towards services.
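The mechanics can be sketched in a few lines of code. This is my own toy illustration, not Vollrath’s exact model, and the growth rates are illustrative: wages are assumed equal across the two sectors, so each sector’s unit cost is proportional to the wage divided by its productivity, and the relative price of the slow-growing sector rises at roughly the difference of the two growth rates.

```python
# Toy two-sector sketch of Baumol's cost disease (illustrative numbers).
# Assumption: a common wage across sectors, so unit cost = wage / productivity.
# With manufacturing productivity growing faster than services productivity,
# the price of services *relative to goods* rises over time.

def relative_price_of_services(years, g_manuf=0.03, g_services=0.005):
    """Price of services relative to goods, normalised to 1 at year 0."""
    prod_manuf = (1 + g_manuf) ** years
    prod_services = (1 + g_services) ** years
    # relative price = (wage / prod_services) / (wage / prod_manuf)
    return prod_manuf / prod_services

for t in (0, 25, 50):
    print(f"after {t} years, services cost "
          f"{relative_price_of_services(t):.2f}x their initial relative price")
```

Nothing about services has got worse in this sketch – they are made exactly as before – yet after half a century they look several times more expensive relative to goods, purely because of progress elsewhere.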

    Of course, this process has been going on for centuries. Huge increases in the productivity with which we can produce food, simple manufactured goods like textiles and homewares, and successively more technologically complex goods like cars and consumer electronics, mean that their prices have collapsed relative to personal services. Vollrath’s argument is that this process reached some kind of critical point in the year 2000: “what changed in 2000 was that the share of economic activity [of services] had reached such a high level that the drag on productivity growth from this shift finally became tangible.” There doesn’t seem to be a lot of evidence to support this particular timing.

    But there’s one important feature of Baumol’s argument that doesn’t emerge clearly at all in Vollrath’s book: that’s the way in which Baumol’s mechanism effectively transfers value from sectors with high productivity growth to sectors with low productivity growth. To illustrate this, let’s look at Vollrath’s prime example of an innovation not dependent on high technology, that has nonetheless raised productivity – the Cronut. For those of us outside the USA, I need to explain that a Cronut is a new kind of bun invented in New York, consisting of a deep-fried torus of croissant dough (the estimable British bakery chain Greggs trialled a similar confection in the UK, but it didn’t catch on). “I don’t know if Cronuts count as technology, but I do know they raised productivity because they led people to put a higher value on a given set of raw inputs”.

    It’s worth thinking through where this higher value comes from. We need to begin by being precise about what we mean by productivity. A non-economist might think of productivity in terms of the number of cronuts a worker might produce a day. This is the kind of productivity that can be increased by automation. Croissant dough consists of a laminate of many layers of yeast-leavened bread dough separated by butter, quite labour-intensive to make by hand, but using a mechanical dough-sheeter would greatly increase a worker’s output. To an economist like Vollrath, it isn’t this kind of output productivity that’s being talked about, though. For an economist, productivity is measured in terms of the money value of the output. If you run a small bakery, and you increase your output tenfold by installing a dough-sheeter, as long as you have a market to sell your increased output at the same price, you have increased both types of productivity – you produce more cronuts, and you make more money.

    But in the long term, and over whole economies, output productivity and money productivity don’t behave in the same way, because of Baumol’s cost disease mechanism. Suppose that our New York artisanal cronut makers resist the lure of industrial dough-sheeters and the like, and rely on the same technologies as their nineteenth-century antecedents. Although the output productivity of their baked and deep-fried goods would be unchanged, the real money value of what they produce would be greater, just because of Baumol’s cost disease.

    To the extent that patisserie has seen low growth in its output productivity since the 19th century, while there have been order-of-magnitude increases in the number of motor cars or record players or washing machines produced by a single worker, the artisanal patisserie sector will have been affected by Baumol’s cost disease. This will have raised the relative price of cronuts compared to a basket of other manufactured products, whose sectors have seen much bigger productivity increases. Thus the reason that cronuts cost more in 21st century New York than they would have in 19th century Paris (where the technology to make them certainly existed) is because of the 20th century revolution in productivity in other sectors.

    So, one very effective way to increase money productivity in sectors with low output productivity growth is to increase the output productivity growth in some other sector. It’s not so much that a rising tide lifts all boats, but that the leading sectors pull everything else along behind them. For this reason, I think Vollrath underestimates the importance of sectors seeing rapid growth in output productivity – the very sectors that Sainsbury stresses one should support and emphasise in his book.

    It is, of course, unfortunate that Vollrath has written an essentially optimistic book about the economy that’s been released precisely at the moment of a historically unprecedented economic downturn. But there is a much more serious omission.

    There’s not a single mention in the book of the problem of climate change, or the challenge of transitioning a world economy that depends on fossil fuels to low carbon energy sources. In talking about the inputs to economic growth, Vollrath says “we could also consider the stocks of natural resources, but these are bit players in the story”. This comment is very telling.

    Energy is very much cheaper now, relative to other goods, than it was a few hundred years ago. The technology of extracting fossil fuels has allowed many more units of energy to be extracted for a given set of inputs – most recently, for example, in the fracking revolution that has transformed the USA’s energy economy. So, following Baumol’s principle, the relative price of energy has fallen.

    But this doesn’t mean the relative importance of energy has dropped with the price – as we will find out if we have to do without it. If we don’t find – through large-scale technological innovation – zero-carbon alternative sources of energy at lower cost than fossil fuels, we will either have to suffer the loss of living standards – and indeed loss of life – that will follow from unmitigated climate change, or we will have to accept that economic growth will go into reverse. Energy prices will increase and we will all be worse off.

    In fact, Vollrath doesn’t just underestimate the role of technological innovation in growth up to now, he’s actually positively sceptical about whether we need any more: “given our current life expectancy and living standards the risks inherent in any technology … may not be worth pursuing just to add a fraction of a percentage point to the growth rate”. On this, I think he could not be more wrong. We urgently need new technology, not to add a percentage point to the growth rate, but precisely so we can maintain our current life expectancy and living standards – not to mention allow the rest of the world to enjoy what we, in rich countries, take for granted.

    A toy model of Baumol’s cost disease

    I’ve recently read Dietrich Vollrath’s book “Fully Grown: why a stagnant economy is a sign of success”. It’s interesting and well-written, though I’m not entirely convinced by the conclusion that the sub-title summarises. I’ll write more about that later, but it did prompt me to think more about Baumol’s cost disease, something I’ve written about in an earlier post: How cheaper steel makes nights out more expensive (and why that’s a good thing).

    In this well-known phenomenon, a differential in productivity growth rate between goods and services leads both to the cost of services relative to goods increasing, and to services taking a larger share of the economy. It’s this shift of the economy from high-productivity-growth goods manufacturing into low-productivity-growth services that Vollrath ascribes part of our current growth slowdown to, and he thinks this is entirely positive.

    Vollrath introduces a simple toy model to think about Baumol’s cost disease. It’s simple enough to express this mathematically, but when you do this it produces some apparently paradoxical results. I think reflecting on these paradoxes can give some insight into the difficulties of measuring growth in an economy in which one sector advances much faster than another. As I’ve written before, this highly uneven technological progress is very characteristic of our economy, where, for example, we’ve seen orders-of-magnitude increases in computing power in the last century, while in other sectors, like construction, little has changed. For the mathematical details, see these notes (PDF) – here I summarise some of the main results. Continue reading “A toy model of Baumol’s cost disease”

    UK ARPA: An experiment in science policy?

    This essay was published yesterday as part of a collection called “Visions of ARPA”, by the think-tank Policy Exchange, in response to the commitment of the UK government to introduce a new science funding agency devoted to high risk, high return projects, modelled on the US agency DARPA (originally ARPA). All the essays are well worth reading; the other authors are William Bonvillian, Julia King (Baroness Brown), two former science ministers, David Willetts and Jo Johnson, Nancy Rothwell and Luke Georghiou, and Tim Bradshaw. My thanks to Iain Sinclair for editing.

    The UK’s research and innovation funding agency – UKRI – currently spends £7 billion a year supporting R&D in universities, public sector research establishments and private industry [1]. The Queen’s Speech in December set out an intention to increase substantially public funding for R&D, with the goal of raising the R&D intensity of the UK economy – including public and private spending – from its current level of 1.7% of GDP to a target of 2.4%. It’s in this context that we should judge the Government’s intention to introduce a new approach, providing “long term funding to support visionary high-risk, high-payoff scientific, engineering, and technology ideas”. What might this new approach – inevitably described as a British version of the legendary US funding agency DARPA – look like?

    If we want to support visionary research, whose applications may be 10-20 years away, we should be prepared to be innovative – even experimental – in the way we fund research. And just as we need to be prepared for research not to work out as planned, we should be prepared to take some risks in the way we support it, especially if the result is less bureaucracy. There are some lessons to take from the long (and, it needs to be stressed, not always successful) history of ARPA/DARPA. To start with its operating philosophy, an agency inspired by ARPA should be built around the vision of the programme managers. But the operating philosophy needs to be underpinned by an enduring mission and clarity about who the primary beneficiaries of the research should be. And finally, there needs to be a deep understanding of how the agency fits into a wider innovation landscape. Continue reading “UK ARPA: An experiment in science policy?”

    More reactions to “Resurgence of the Regions”

    The celebrity endorsement of my “Resurgence of the regions” paper has led to a certain amount of press interest, which I summarise here.

    The Times Higher naturally focuses on the research policy issues. I’m interviewed in the piece “Tory election victory sets scene for UK research funding battle”, which focuses on a perceived tension between a continuing emphasis on supporting “excellence” and disruptive innovation based on existing centres, and my agenda of boosting R&D in the regions to redress productivity imbalances.

    Peter Franklin asks, in UnHerd, “Is this the Tories’ real manifesto?”

    “Alas, no”, I expect is the answer to that question, but this article does a really great job of summarising the content of my paper. It also includes this hugely generous quotation from Stian Westlake: “The mini-storm over Dom Cummings citing @RichardALJones’s recent paper on innovation policy prompted me to re-read it, and *boy* is it good. I agree with more or less everything, and as a bonus it is delightfully written… On a couple of occasions I’ve been asked by a new science minister ‘what should I read on innovation?’, and it was always quite a hard question to answer. But now, I’d just say ‘read that’.”

    I suspect Franklin’s excellent article was instrumental in focusing some wider attention on my paper. The Sunday Times’s Economics Editor, David Smith, agreed that “A renewed focus on innovation can deliver a resurgence in the regions”, while Oliver Wright, in the Times, focused on the industrial strategy implications of the net zero greenhouse gas target, and in particular nuclear energy, in a piece entitled “Reinvigorate north with nuclear power stations”.

    It was left to Alan Lockey, writing in CapX, to point out the tension between the government activism I call for and more traditional laissez-faire Conservative attitudes, putting this tension at the centre of what he called “The coming battle for modern Conservatism”. On the one hand, Lockey described the arguments as being “a bit boring”, “comfort-zone industrial policy instincts of Ed Miliband-era social democracy” from “a hitherto politically obscure physicist”… but he also found it “as an object lesson in how to construct an expansive and data-rich case for systemic public policy change … pretty near faultless. The ideas too, I find to be entirely unproblematic”. As he later graciously put it on Twitter, “I was merely just trying to convey that it seemed less controversial perhaps to those of us who are, basically, boring social democrats who see nothing wrong with industrial activism!”

    On being endorsed by Dominic Cummings

    The former chief advisor to the Prime Minister, Dominic Cummings, wrote a blogpost yesterday about the need for leave voters to mobilise to make sure the Conservatives are elected on 12 December. At the end of the post, he writes “Ps. If you’re interested in ideas about how the new government could really change our economy for the better, making it more productive and fairer, you’ll find this paper interesting. It has many ideas about long-term productivity, science, technology, how to help regions outside the south-east and so on, by a professor of physics in Sheffield”. He’s referring to my paper “A Resurgence of the Regions: rebuilding innovation capacity across the whole UK”.

    As I said on Twitter,“Pleased (I think) to see my paper “Resurgence of the regions” has been endorsed in Dominic Cummings’s latest blog. Endorsement not necessarily reciprocated, but all parties need to be thinking about how to grow productivity & heal our national divides”.

    I provided a longer reaction to a Guardian journalist, which resulted in this story today: Academic praised by Cummings is remain-voting critic of Tory plans. Here are the comments I made to the journalist which formed the basis of the story:

    I’m pleased that Dominic Cummings has endorsed my paper “Resurgence of the regions”. I think the analysis of the UK’s current economic weaknesses is important and we should be talking more about it in the election campaign. I single out the terrible record of productivity growth since the financial crisis, the consequences of that in terms of flat-lining wages, the role of the weak economy in the fiscal difficulties the government has in balancing the books, and (as others have done) the profound regional disparities in economic performance across the country. I’d like to think that Cummings shares this analysis – the persistence of these problems, though, is hardly a great endorsement for the last 9.5 years of Conservative-led government.

    In response to these problems we’re going to need some radical changes in the way we run our economy. I think science and innovation is going to be important for this, and clearly Cummings thinks that too. I also offer some concrete suggestions for how the government needs to be more involved in driving innovation – especially in the urgent problem we have of decarbonising our energy supply to meet the target of net zero greenhouse gas emissions by 2050. It’s good that the Conservative Party has signed up to a 2050 Net Zero Greenhouse Gas target, but the measures it proposes are disappointingly timid in scale – as I explain in my paper, reaching this goal is going to take much more investment, and more direct state involvement in driving innovation to increase the scale, and drive down the cost, of low-carbon energy. This needs to be a central part of a wider industrial strategy.

    I welcome all three parties’ commitment to raise the overall R&D intensity of the economy (to 2.4% of GDP by 2027 for the Conservatives, 3% of GDP by 2030 for Labour, 2.4% by 2027 with a longer term aspiration for 3% for the Lib Dems). The UK’s poor record of R&D investment compared to other developed countries is surely a big contributing factor to our stagnating productivity. But this is also a stretching target – we’re currently at 1.7%. It’s going to need substantial increases in public spending, but even bigger increases in R&D investment from the private sector, and we’re going to need to see much more concrete plans for how the government might make this happen. Again, my paper has some suggestions, with a particular focus on building new capacity in those parts of the country where very little R&D gets done – and which, not coincidentally, have the worst economic performance (Wales, Northern Ireland, the North of England in particular).

    As for Cummings’s views on Brexit: I voted remain, not least because I thought that a “leave” vote would result in a period of very damaging political chaos for the UK. I can’t say that subsequent events have made me think I was wrong on that. I do think that it would be possible for the UK to do OK outside the EU, but to succeed post-Brexit we’ll need to stay close to Europe in matters such as scientific cooperation (preferably through associating with EU science programmes like the European Research Council), and in matters related to nuclear technology. We will need to be a country that welcomes talented people from overseas, and provides an attractive destination for overseas investment – particularly important for innovation, where more than half of the UK’s business R&D is done by overseas owned firms. The need to have a close relationship with our major trading partners will mean that we’ll need to stay in regulatory alignment with the EU (very important, for example, for the chemicals industry) and minimise frictions for industries, like the automotive industry where the UK is closely integrated into European supply chains, and in the high value knowledge based services which are so important for the UK economy. It doesn’t look like that’s the direction of travel the Conservatives are currently going down.

    Whatever happens in the next election, anyone who has any ambition to heal the economic and social divides in this country needs to be thinking about the issues I raise in my paper.

    Rock climbing and the economics of innovation

    The rock climber Alex Honnold’s free, solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

    The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

    Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks.

    When the modern sport of climbing began, more than a hundred years ago, people wore boots – nailed boots – on their feet (as they would do for pretty much any outdoor activity). There is a lost technology of the best types of nails and nailing patterns to use, but it’s true that, as harder climbs were done in the 1920s and 30s, the leading climbers of the day tended to use tennis shoes or plimsolls for the hardest climbs. But these were everyday footwear, in no way designed for climbing.

    I believe the first shoes designed specifically for rock climbing, of the kind that would be recognised as the ancestors of today’s shoes, came from France. These were designed by the alpinist Pierre Allain for use on the sandstone boulders of the Fontainebleau forest, a favoured training ground for the climbers of Paris. By the time I started climbing, in the 1970s, the descendants of these shoes – the EB Super Gratton – had an almost complete worldwide monopoly on climbing shoes. They were characterised by a tight fit, a treadless rubber sole and a wide rand, allowing precise footwork and good friction on dry rock.

    In 1982 the makers of EBs made a “New Coke”-like marketing blunder, introducing a new model with a moulded sole – probably cheaper to manufacture, but thicker and less precise than the original. This might not have mattered given their existing market position, but a then-unheard-of Spanish shoe company – Boreal – had recently introduced a model of their own, with a sole made of a new kind of high friction rubber.

    Rubber is a strange material, and the microscopic origins of friction in rubber are different to those in more conventional materials like metals. When a climber steps on a tiny foothold, the sole starts to slide against the rock, very slowly, usually imperceptibly. As the rubber slides past the asperities, the internal motions within the bulk of the rubber, of molecule against molecule, dissipate energy – and the greater the rate of energy dissipation, the higher the friction. This energy dissipation, though, is a very strongly peaked function of temperature – and as a consequence, a given rubber compound will have a temperature at which the friction is at a maximum.
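The consequence of that temperature-peaked dissipation can be shown with a toy calculation. This is a deliberately crude sketch: the Gaussian peak shape, the peak temperatures and all the numbers here are illustrative assumptions of mine, not measured properties of any real rubber compound.

```python
import math

# Toy model: treat a compound's friction coefficient as a single peaked
# function of temperature, with the peak temperature T_peak a property
# of the compound. (Gaussian shape and all parameters are illustrative.)

def friction(T, T_peak, mu_max=1.8, width=25.0):
    """Toy friction coefficient at temperature T (degrees C)."""
    return mu_max * math.exp(-((T - T_peak) / width) ** 2)

# Two hypothetical compounds: one whose dissipation peaks well below
# room temperature, one tuned so the peak sits near typical crag
# conditions of about 20 degrees C.
mu_cold_tuned = friction(20.0, T_peak=-10.0)
mu_room_tuned = friction(20.0, T_peak=18.0)
print(f"cold-tuned compound at 20 C: mu = {mu_cold_tuned:.2f}")
print(f"room-tuned compound at 20 C: mu = {mu_room_tuned:.2f}")
```

On this picture, the two compounds can be equally “sticky” at their respective peaks; what matters commercially is where the peak falls relative to the conditions in which climbers actually use the shoes.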

    Boreal, by accident or design, had found a rubber compound where the friction peaked much closer to room temperature than in EBs. Boreal’s new climbing boot – the “Firé” – swept the marketplace. The increased friction and the advantage this gave was obvious both to the leading climbers of the day and to much more average performers. I was in the latter category, and succumbed to the trend. The improvement in performance the new shoes made possible was immediately tangible, the only downside being that Firés were cripplingly uncomfortable. Soon US and Italian competitors started selling boots with comparably high friction rubber that were actually foot-shaped.

    Modern rock boots do make a difference, but this isn’t really the crucial technology that has enabled hard rock climbing. What’s made the biggest difference – both to the wider popularity of the sport and the achievements of its leading proponents – has been the development of technologies that allow one to fall off without dying.

    Hold on, you might say here – wasn’t Alex Honnold climbing solo, without ropes, in a situation in which if he fell he would most certainly die? Yes, indeed, but Honnold didn’t get to be a good climber by doing a lot of soloing, he got to be a good soloist by doing a lot of climbing. Most of that climbing – especially the climbing where he was pushing himself – was done roped. To get himself ready for his El Cap solo, he spent hundreds of hours on the route, roped, working out and memorising all the moves.

    When climbing started, every climb was effectively a solo, at least for the leader. Before the 2nd World War, climbing ropes were made of natural fibres – hemp or manila. They were strong – strong enough to hold a slip of a second on the rope. But they were brittle, and for the leader, any fall that would put a shock load on the rope was likely to break it. “The leader must not fall” was the stern instruction of books of the time. The knowledge that a fall would lead to death or serious injury was ever-present for a pre-war climber pioneering a new hard route, and it’s not difficult to imagine that this was a brake on progress.

    As in other areas of technology, the war changed things. The new artificial fibre nylon was put into mass production for parachute cord for aircrew and airborne troops; its strength, resilience and elasticity made the postwar surpluses of the fibre ideal for making climbing ropes. Together with the invention of metal snap-links, the new ropes made it possible to imagine a leader surviving a fall – the rope could be clipped to an anchor in the rock to make a “running belay”, limiting the length of the fall. In the USA and the European Alps, the anchors would usually be metal pegs hammered into cracks, while on the smaller crags of the UK a tradition developed of using jammed machine nuts threaded on loops of nylon.

    By the 1960’s and 70’s, the likelihood was that a leader would survive a fall, but you wouldn’t want to do it too often. The job of arresting the fall went to the second, who would pass the rope round their back and use the force of their grip and the friction of the rope around their body to hold the fall. You had to be attentive, quick and decisive to do this without getting a bad friction burn, or at worst letting the rope go entirely. The crudest mechanical friction devices were devised in the early 70’s, and have now been developed to the point that a second no longer needs strength or skill to hold the rope of a falling climber. Meanwhile the leader would be tied on to the rope with a simple knot round the waist, making a fall painful – and a prolonged period of dangling, after a fall from overhanging rock, potentially fatal through asphyxiation. Simple but effective harnesses were developed in the 60’s and 70’s, which spread the force of arresting a fall onto the buttocks and thighs, and made the sudden stop at the end of a leader fall bearable, if not entirely comfortable.
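The physics of the waist belay is the classic capstan relation: the force the belayer’s hands must supply falls off exponentially with the angle of rope wrapped around their body (or around a belay device). A minimal sketch, with assumed, illustrative values for the friction coefficient and the peak fall force:

```python
import math

# Capstan equation: T_hold = T_fall * exp(-mu * theta), where theta is the
# total wrap angle of the rope and mu the rope-on-body (or rope-on-device)
# friction coefficient.  All numbers below are illustrative assumptions.
def holding_force(fall_force_n, mu, wrap_angle_rad):
    """Force the belayer's hands must supply to arrest the fall (newtons)."""
    return fall_force_n * math.exp(-mu * wrap_angle_rad)

fall_force = 2500.0  # N -- a plausible peak force in a moderate leader fall

# Waist belay: roughly half a turn of rope around the body.
waist = holding_force(fall_force, mu=0.3, wrap_angle_rad=math.pi)
# Mechanical belay device: effectively several turns of frictional contact.
device = holding_force(fall_force, mu=0.3, wrap_angle_rad=4 * math.pi)

print(f"waist belay:  hands must hold ~{waist:.0f} N")
print(f"belay device: hands must hold ~{device:.0f} N")
```

The exponential is why a mechanical device, which in effect adds wrap angle and friction, lets a second hold a big fall with almost no grip strength – and why the old body belay demanded attentiveness and a strong grip.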

    In California, it was the particular character of the rock and the climbs, especially in Yosemite, that drove developments in the technology for anchoring the rope to the rock. Yvon Chouinard realised that the mild steel pegs used in the European Alps weren’t suitable for the hard granite of Yosemite, and he developed optimally shaped pegs from hard chrome-molybdenum alloy steel – the bongs, blades and leepers that I just about remember from my youth. But like other technological developments, this one had its downsides – the repeated placement and removal of these pegs from the cracks led to scarring and damage, which in the climate of heightened environmental awareness in the 60’s and 70’s led to some soul-searching by US climbers. A “clean-climbing” movement developed, with Chouinard himself one of its leaders. To replace steel pegs, the British tradition of jammed machine nuts was taken up and developed. Purpose-designed chocks and wedges were marketed, like Chouinard’s cunningly designed “hexcentrics”, which would cam under load to hold even in parallel-sided cracks.

    It was another Californian devotee of Yosemite who made the real breakthrough in clean climbing protection, though. Ray Jardine, an aerospace engineer, devised an ingenious spring-loaded camming device that was easily placed and would hold a fall even if placed in a parallel-sided or slightly flared crack. These were patented and commercialised as “Friends”. Many developments of this idea have since been put on the market, and these form the basis of the “rack” of anchoring equipment that climbers carry today.

    It’s this combination of strong and resilient nylon ropes, able to absorb the energy of a long fall, automatic braking gadgets to hold the rope when a fall happens, reliable devices for anchoring the rope to the rock, and harnesses that spread the load of a fall across the climber’s body, that has got us to where we are today, where climbers can practise harder and harder routes, (mostly) safe in the knowledge that a fall won’t be fatal, or even that uncomfortable.

    This is not to say that knowledge isn’t important, of course. All this equipment needs skill to use – and knowledge has helped in the sheer physical aspects of getting up steep rock. As well as the new technology, one of the causes of the big advances in rock climbing standards in the 1980’s was undoubtedly a change in attitude amongst leading climbers. Training was taken much more seriously than it had been before: training techniques were imported from athletics and gymnastics, artificial climbing walls were developed, and the discipline of trying out very hard moves close to the ground on boulders – pioneered by the American mathematician and gymnast John Gill – became popular.

    I think one kind of knowledge is particularly important in climbing – and maybe in other areas of human endeavour, too. That’s simply the knowledge that something has already been done – the existence proof that a feat is possible. Guidebooks record that a climb has been done and where it goes, though not usually how to do it. To know in advance the physical details of how a climb is done – what climbers call “beta” – is considered to lessen the achievement of a subsequent ascent. But simply to know that the climb is possible (and have some idea of how hard it is going to be) is an important piece of information in itself.

    How is knowledge transmitted? We have books – instructional books of technique, and guidebooks to particular climbing areas. And now we have the internet, so one can read and post questions on climbers’ internet forums. I’m not sure how much this has added to more traditional ways of conveying information – discussions on the most popular UK climbing forum seem mostly to consist of endless arguments about Brexit. But I do think there is one change that modern times have brought that makes a huge difference to knowledge transmission, and that is the advent of cheap air travel.

    My first overseas climbing trips (in 1981 and 1982) were to the French Alps. These were hugely important to my development as a climber, and undoubtedly some part of that came from interactions with climbers from other countries with different traditions and different techniques. Big climbing centres tended to have well known places where climbers from different countries stayed and mixed (the squalid informal campsite known as Snell’s Field in the case of Chamonix, the legendary Camp 4 for Yosemite). I climbed with a couple of outstanding Australian climbers from the campsite while I was there; we picked up tips on big wall climbing from a Yosemite habitué, and I came home with half a dozen beautiful titanium ice screws, light, thin walled, and sharp. Such things were unobtainable in the West at the time; I’d bartered them from some East European climbers, but they had undoubtedly been knocked off after hours in some Soviet aircraft factory.

    But getting to Chamonix had taken me nearly 24 hours on a bus. Nowadays climbers can take several holidays a year with easy and cheap air travel, to the sunshine in Spain or Greece or Thailand, the big mountains of the Himalayas or South America, desert climbing in Morocco, Jordan, or Oman, Nevada, Utah, or Arizona, to the subarctic conditions of Patagonia or Baffin Island, or to the more traditional centres like the Dolomites or Yosemite. This does lead to a rapid spread of attitudes and techniques. It’s a paradox, of course, that climbers, who love the wilderness and the world’s beautiful places, and are more environmentally conscious than most, make, through their flying, such an above average contribution to climate change. Can this go on?

    So if John Cochrane has learnt the wrong lesson from rock climbing, what better lessons should we take away from all this?

    Some economists love simple stories, especially when they support their ideological priors, but a bit of knowledge of history often reveals that the truth is somewhat more complicated. More importantly, perhaps, we should remember that technological innovation isn’t just about iPhones and artificial intelligence. All around us – in our homes, in everyday life, in our hobbies and pastimes – we can see, if we care to look, all kinds of technological innovation in products and the materials that make them, which collectively lead to overall economic growth. Technological innovation doesn’t have to be about giant leaps and moonshots – even mundane things like shoe soles and ropes tell a story of a whole series of incremental changes that together add up to progress.

    And to return to Alex Honnold, perhaps the most important lesson a free-market loving economist should draw is that sometimes people will do extraordinary things without the motivation of money.

    The challenge of deep decarbonisation

    This is roughly the talk I gave in the neighbouring village of Grindleford about a month ago, as part of a well-attended community event organised by Grindleford Climate Action.

    Thanks so much for inviting me to talk to you today. It’s great to see such an impressive degree of community engagement with what is perhaps the defining issue we face today – climate change. What I want to talk about today is the big picture of what we need to do to tackle the climate change crisis.

    The title of this event is “Without Hot Air” – I know this is inspired by the great book “Sustainable Energy – Without the Hot Air”, by the late David MacKay. David was a physicist at the University of Cambridge; he wrote this book – which is free to download – because of his frustration with the way the climate debate was being conducted. He became Chief Scientific Advisor to the Department of Energy and Climate Change in the last Labour government, but died, tragically young at 49, in 2016.

    His book is about how to make the sums add up. “Everyone says getting off fossil fuels is important”, he says, “and we’re all encouraged to ‘make a difference’, but many of the things that allegedly make a difference don’t add up.”

    It’s a book about being serious about climate change, putting into numbers the scale of the problem. As he says “if everyone does a little, we’ll achieve only a little.”

    But to tackle climate change we’re going to need to do a lot. As individuals, we’re going to need to change the way we live. But we’re going to need to do a lot collectively too, in our communities, but also nationally – and internationally – through government action.

    Net zero greenhouse gas emission by 2050?

    The Government has enshrined a goal of achieving net zero greenhouse gas emissions by 2050 in legislation. This is a very good idea – it’s a better target than a notional limit on the global temperature rise, because it’s the level of greenhouse gas emissions that we have direct control over.

    But there are a couple of problems.

    We’ve emitted a lot of greenhouse gases already, and even if we – we being the whole world here – reach the 2050 target, we’ll have emitted a lot more. So the target doesn’t stop climate change, it just limits it – perhaps to 1.5–2°C of warming or so.

    Even worse, the government just isn’t being serious about doing what would need to be done to reach the target. The trouble is that 2050 sounds a long way off for politicians who think in terms of 5 year election cycles – or, indeed, at the moment, just getting through the next week or two. But it’s not long in terms of rebuilding our economy and society.

    Just think how different the world is now from the world of 1990. In terms of the infrastructure of everyday life – the buildings, the railways, the roads – the answer is, not very. I’m not quite driving the same car, but the trains on the Hope Valley Line are the same ones – and they were obsolete then! Most importantly, our energy system is still dominated by hydrocarbons.

    I think on current trajectory there is very little chance of achieving net zero greenhouse gas emissions by 2050 – so we’re heading for 3 or 4 degrees of warming, a truly alarming and dangerous prospect.

    What do we mean by scientific productivity – and is it really falling?

    This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”

    I want to focus on the idea of scientific productivity – how it is defined, and how we can measure it – and whether it is declining – and if it is, what can we do about it?

    The output of science increases exponentially, by some measures…

    …but what do we get back from that? What is the productivity of the scientific enterprise – some measure of the output of science per unit of input?

    It depends on what we think the output of science is, of course.

    We could be talking of some measure of the new science being produced and its impact within the scientific community.

    But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?

    There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists, that we have seen the best of economic growth and are living in a new age of stagnation.

    Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling which brought us exponential growth in computing power in the 80’s and 90’s started to level off around 2004 and has since slowed to a crawl, despite continuing growth in resources devoted to it.

    It’s the Industrial that enables the Artisanal

    It’s come to this, even here. My village chippy has “teamed up” with a “craft brewery” in the next village to sell “artisanal ales” specially brewed to accompany one’s fish and chips. This prompts me to reflect – is this move from the industrial to the artisanal really a reversion to a previous, better world? I don’t think so – instead, craft beer is itself a product of modernity. It depends on capital equipment that is small scale, but dependent on high technology – on stainless steel, electrical heating and refrigeration, computer-powered process control. And its ingredients aren’t locally grown and processed – the different flavours introduced by new hop varieties are the outcome of world trade. What’s going on here is not a repudiation of industrialisation, but its miniaturisation, the outcome of new technologies which erode previous economies of scale.

    A craft beer from the Eyam Brewery, on sale at the Toll Bar Fish and Chip Shop, Stoney Middleton, Derbyshire.

    Beer was one of the first industrial foodstuffs. In Britain, the domestic scale of early beer making began to be replaced by factory scale breweries in the 18th century, as soon as transport improved enough to allow the distribution of their products beyond their immediate locality. Burton-on-Trent was an early centre, whose growth was catalysed by the opening up of the Trent navigation in 1712. This allowed beer to be transported by water via Hull to London and beyond. By the late 18th century some 2000 barrels a year of Burton beer were being shipped to Baltic ports like Danzig and St Petersburg.

    As in other process industries, this expansion was driven by fossil fuels. Coal from the nearby Staffordshire and Derbyshire coalfields provided process heat. The technological innovation of coking, which produced a purer carbon fuel which burnt without sulphur-containing fumes, was developed as early as 1640 in Derby, so coal could be used to dry malt without introducing off-flavours (this use of coke long predated its much more famous use as a replacement for charcoal in iron production).

    By the late 19th century, Burton-on-Trent had become a world centre of beer brewing, producing more than 500 million litres a year, for distribution by the railway network throughout the country and export across the world. This was an industry that was fossil fuel powered and scientifically managed. Coal-powered steam engines pumped the large volumes of liquid around, steam was used to provide controllable process heat, and most crucially the invention of refrigeration was the essential enabler of year-round brewing, allowing control of temperature in the fermentation process, by now scientifically understood by the cadre of formally trained chemists employed by the breweries. In a pint of Marston’s Pedigree or a bottle of Worthington White Shield, what one is tasting is the outcome of the best of 19th century food industrialisation, the mass production of high quality products at affordable prices.

    How much of the “craft beer revolution” is a departure from this industrial past? The difference is one of scale – steam engines are replaced by electric pumps, coal fired furnaces by heating elements, and master brewers by thermostatic control systems. Craft beer is not a return to a preindustrial, artisanal age – instead it’s based on industrial techniques, miniaturised with new technology, and souped up by the products of world trade. This is a specific example of a point more generally made in Rachel Laudan’s excellent book “Cuisine and Empire” – so-called artisanal food comes after industrial food, and is in fact enabled by it.

    What more general lessons can we learn from this example? The energy economy is another place where some people are talking about a transition from a system that is industrial and centralised to one that is small scale and decentralised – one might almost say “artisanal”. Should we be aiming for a new decentralised energy system – a world of windmills and solar cells and electric bikes and community energy trusts?

    To some extent, I think this is possible and indeed attractive, leading to a greater sense of control and involvement by citizens in the provision of energy. But we should be under no illusions – this artisanal also has to be enabled by the industrial.

    Carbon Capture and Storage: technically possible, but politically and economically a bad idea

    It’s excellent news that the UK government has accepted the Climate Change Committee’s recommendation to legislate for a goal of achieving net zero greenhouse emissions by 2050. As always, though, it’s not enough to will the end without attending to the means. My earlier blogpost stressed how hard this goal is going to be to reach in practice. The Climate Change Committee does provide scenarios for achieving net zero, and the bad news is that the central 2050 scenario relies to a huge extent on carbon capture and storage. In other words, it assumes that we will still be burning fossil fuels, but we will be mitigating the effect of this continued dependence on fossil fuels by capturing the carbon dioxide released when gas is burnt and storing it, into the indefinite future, underground. Some use of carbon capture and storage is probably inevitable, but in my view such large-scale reliance on it is, politically and economically, a bad idea.

    In the central 2050 net zero scenario, 645 TWh of electricity is generated a year – more than double the 2017 value of 300 TWh, reflecting the electrification of sectors like transport. The basic strategy for deep decarbonisation has to be, as a first approximation, to electrify everything, while simultaneously decarbonising power generation: so far, so good.

    But even with aggressive expansion of renewable electricity, this scenario still calls for 150 TWh to be generated from fossil fuels, in the form of gas power stations. To achieve zero carbon emissions from this fossil fuel powered electricity generation, the carbon dioxide released when the gas is burnt has to be captured at the power stations and pumped through a specially built infrastructure of pipes to disused gas fields in the North Sea, where it is injected underground for indefinite storage. This is certainly technically feasible – to produce 150 TWh of electricity from gas, around 176 million tonnes of carbon dioxide a year will be produced. For comparison, currently about 42 million tonnes of natural gas a year is taken out of the North Sea reservoirs, so reversing the process at four times the scale is undoubtedly doable.
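The “four times the scale” comparison can be checked with simple mass arithmetic, using the figures quoted above; the CO2-to-methane mass ratio of 44/16 follows from the combustion reaction CH4 + 2O2 → CO2 + 2H2O:

```python
# Figures from the text: 176 Mt/yr of CO2 to be captured and stored,
# versus ~42 Mt/yr of natural gas currently extracted from the North Sea.
co2_stored_mt = 176.0
gas_extracted_mt = 42.0

# Mass ratio of CO2 pumped down to gas pumped up:
scale = co2_stored_mt / gas_extracted_mt
print(f"CO2 injection vs gas extraction: {scale:.1f}x the mass flow")

# For reference, each tonne of methane burnt yields 44/16 = 2.75 tonnes
# of CO2, so burning 42 Mt of gas releases roughly:
co2_from_gas = gas_extracted_mt * (44.0 / 16.0)
print(f"CO2 from burning 42 Mt of methane: ~{co2_from_gas:.0f} Mt")
```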

    In fact, more carbon capture and storage will be needed than the 176 million tonnes from the power sector, because the net zero plan relies on it in four distinct ways. In addition to allowing us to carry on burning gas to make electricity, the plan envisages capturing carbon dioxide from biomass-fired power stations too. This should lead to a net lowering of the amount of carbon dioxide in the atmosphere – a so-called “negative emissions technology”. The idea is that one offsets the remaining positive emissions from hard-to-decarbonise sectors like aviation with these “negative emissions”, to achieve overall net zero emissions.

    Meanwhile the plan envisages the large-scale conversion of natural gas to hydrogen, to replace natural gas in industry and domestic heating. In steam reforming, one molecule of methane reacts with two of steam to produce four molecules of hydrogen – which can be burnt in domestic boilers without carbon emissions – and one molecule of carbon dioxide, which needs to be captured at the hydrogen plant and pumped away to the North Sea reservoirs. Finally, some carbon dioxide-producing industrial processes will remain – steel making and cement production – and carbon capture and storage will be needed to render these processes zero carbon. These latter uses are probably inevitable.
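A quick stoichiometric sketch of the mass balance for the hydrogen route – idealised, using the overall steam-reforming reaction CH4 + 2H2O → CO2 + 4H2 and ignoring process inefficiencies:

```python
# Overall steam methane reforming: CH4 + 2 H2O -> CO2 + 4 H2.
# Stoichiometric (best-case) mass balance from the molar masses:
M_CH4, M_CO2, M_H2 = 16.0, 44.0, 2.0   # g/mol

h2_per_t_ch4 = 4 * M_H2 / M_CH4        # tonnes of H2 per tonne of methane
co2_per_t_h2 = M_CO2 / (4 * M_H2)      # tonnes of CO2 per tonne of H2

print(f"1 t of methane -> {h2_per_t_ch4:.2f} t of hydrogen")
print(f"each tonne of hydrogen -> {co2_per_t_h2:.1f} t of CO2 to capture")
```

Even in this idealised accounting, every tonne of “clean” hydrogen made this way leaves several tonnes of carbon dioxide to be captured and stored indefinitely – which is why the hydrogen route leans so heavily on the CCS infrastructure.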

    But I want to focus on the principal envisaged use of carbon capture and storage – as a way of avoiding the need to move to entirely low carbon electricity, i.e. through renewables like wind and solar, and through nuclear power. We need to take a global perspective – if the UK achieves net zero greenhouse gas status by 2050, but the rest of the world carries on as normal, that helps no-one.

    In my opinion, the only way we can be sure that the whole world will decarbonise is if low carbon energy – primarily wind, solar and nuclear – comes in at a lower cost than fossil fuels, without subsidies or other intervention. The cost of these technologies will surely come down: for this to happen, we need both to deploy them in their current form, and to do research and development to improve them. We need both the “learning by doing” that comes from implementation, and the cost reductions that will come from R&D, whether that’s making incremental process improvements to the technologies as they currently stand, or developing radically new and better versions of these technologies.

    But we will never achieve these technological improvements and corresponding cost reductions for carbon capture and storage.

    It’s always tempting fate to say “never” for the potential for new technologies – but there’s one exception, and that’s when a putative new technology would need to break one of the laws of thermodynamics. No-one has ever come out ahead betting against these.

    Carbon capture and storage will always require additional expenditure over and above the cost of an unabated gas power station. It needs both:

  • up-front capital costs, for the plant to separate the carbon dioxide in the first place and for the infrastructure to pipe the carbon dioxide long distances and pump it underground;
  • lowered conversion efficiencies and higher running costs – i.e. more gas needs to be burnt to produce a given unit of electricity.

    The latter is an inescapable consequence of the second law of thermodynamics – carbon capture will always need a separation step. Either one takes air and separates out the pure oxygen, so that burning the gas produces a pure waste stream of carbon dioxide and water; or one takes the exhaust from burning the gas in air and pulls the carbon dioxide out of the waste. Either way, one needs to take a mixed gas and separate its components – and that always takes an energy input, to pay for the reduction in entropy that separating a mixture entails.
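That entropic cost can be put into numbers. For an ideal gas, the minimum reversible work to extract pure CO2 from a large reservoir of flue gas at mole fraction x is roughly RT ln(1/x) per mole of CO2; the sketch below assumes a typical ~4% CO2 fraction for gas-turbine exhaust. Real capture plants need several times this thermodynamic floor, but can never go below it:

```python
import math

# Minimum (reversible, ideal-gas) work to pull pure CO2 out of flue gas --
# a lower bound set by the second law, independent of how clever the
# separation technology gets.  Dilute-limit estimate:
#   w_min = R * T * ln(1 / x)   per mole of CO2 removed,
# where x is the CO2 mole fraction (the 4% figure is an assumed typical
# value for gas-turbine exhaust).
R = 8.314          # J / (mol K), molar gas constant
T = 298.0          # K, ambient temperature
x_co2 = 0.04       # CO2 mole fraction in the flue gas (assumption)

w_per_mol = R * T * math.log(1.0 / x_co2)     # J per mol of CO2
w_per_tonne = w_per_mol / 0.044 * 1000.0      # J per tonne (0.044 kg/mol)
kwh_per_tonne = w_per_tonne / 3.6e6           # kWh per tonne of CO2

print(f"thermodynamic minimum: {w_per_mol/1000:.1f} kJ/mol "
      f"= {kwh_per_tonne:.0f} kWh per tonne of CO2 captured")
```

Tens of kilowatt-hours per tonne is the absolute floor; practical amine-scrubbing processes consume a large multiple of it, which is why capture always means burning extra gas per unit of electricity delivered.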

    The key point, then, is that no matter how much better our technology gets, power produced by a gas power station with carbon capture and storage will always be more expensive than power from unabated gas. The capital cost of the plant will be greater, and so will the revenue cost per kWh. No amount of technological progress can ever change this.

    So there can only be a business case for carbon capture and storage through significant government interventions in the market, either through a subsidy, or through a carbon tax. Politically, this is an inherently unstable situation. Even after the capital cost of the carbon capture infrastructure has been written off, at any time the plant operator will be able to generate electricity more cheaply by releasing the carbon dioxide produced when the gas is burnt. Taking an international perspective, this leads to a massive free rider problem. Any country will be able to gain a competitive advantage at any time by turning the carbon capture off – there needs to be a fully enforced international agreement to impose carbon taxes at a high enough level to make the economics work. I’m not confident that such an agreement – which would have to cover every country making a significant contribution to carbon emissions to be effective – can be relied upon to hold on the scale of many decades.

    I do accept that some carbon capture and storage probably is essential, to capture emissions from cement and steel production. But carbon capture and storage from the power sector is a climate change solution for a world that does not exist any more – a world of multilateral agreements and transnational economic rationality. Any scenario that relies on carbon capture and storage is just a politically very risky way of persuading ourselves that fossil-fuelled business as usual is sustainable, and postponing the necessary large scale implementation and improvement through R&D of genuine low carbon energy technologies – renewables like wind and solar, and nuclear.