Physical limits and diminishing returns of innovation

Are ideas getting harder to find? This question is the title of a preprint by the economists Bloom, Jones, Van Reenen and Webb, who attempt to quantify decreasing research productivity, showing for a number of fields that it now takes more researchers to achieve the same rate of progress. The paper is discussed in blog posts by Diane Coyle, who notes sceptically that the same thing was being said in 1983, and by Alex Tabarrok, who is more depressed.

Given the slowdown in productivity growth in the developed nations, which has steadily fallen from about 2.9% a year in 1970 to about 1.2% a year now, the notion is certainly plausible. But the attempt in the paper to quantify the decline is, I think, so crude as to be pretty much worthless – except inasmuch as it demonstrates how much growth economists need to understand the nature of technological innovation at a higher level of detail and particularity than is reflected in their current model-building efforts.

The first example is the familiar one of Moore’s law in semiconductors, where over many decades we have seen exponential growth in the number of transistors on an integrated circuit. The authors argue that to achieve this, the total number of researchers has increased by a factor of 25 or so since 1970 (an estimate obtained by dividing the R&D expenditure of the major semiconductor companies by an average researcher wage). This is broadly consistent with a corollary of Moore’s law (sometimes called Rock’s law), which states that the capital cost of each new generation of semiconductor fab also grows exponentially, with a four-year doubling time; this cost is now in excess of $10 billion. A large part of this is actually the capitalised cost of the R&D that goes into developing the new tools and plant for each generation of ICs.
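
To put rough numbers on this, here is a small Python sketch. The four-year doubling time and the factor of 25 are the figures above; the 47-year span (1970 to the late 2010s) is my assumption:

```python
import math

# Compound growth under a constant doubling time. The 4-year doubling
# (Rock's law) and the ~25x researcher figure are from the text; the
# ~47-year span (1970 to the late 2010s) is an assumption.

def growth_factor(years: float, doubling_time: float) -> float:
    """Factor of increase after `years` at a constant doubling time."""
    return 2 ** (years / doubling_time)

print(growth_factor(20, 4))   # fab cost over 20 years: 2**5 = 32x
print(47 / math.log2(25))     # implied doubling time of researcher
                              # numbers: ~10 years
```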

This increasing expense simply reflects the increasing difficulty of creating intricate, accurate and reproducible structures on ever-decreasing length scales. The problem isn’t that ideas are harder to find; it’s that as these length scales approach the atomic, many more problems arise, which need more effort to solve. It’s the fundamental difficulty of the physics that leads to diminishing returns, and at some point a combination of the physical barriers and the economics will first slow and then stop further progress in miniaturising electronics using this technology.

For the second example, it isn’t so much physical barriers as biological ones that lead to diminishing returns, but the effect is the same. The green revolution – a period of big increases in the yields of key crops like wheat and maize – was driven by creating varieties able to use large amounts of artificial fertiliser and to channel much more of their growth into the useful parts of the plant. Modern wheat, for example, has very short stems – but there’s a limit to how short you can make them, and that limit has probably been reached now. So R&D efforts are likely to be focused on areas other than pure yield increases – on disease resistance and tolerance of poorer growing conditions (the latter likely to be more important as the climate changes, of course).

For their third example, the economists focus on medical progress. I’ve written before about the difficulties of the pharmaceutical industry, which has its own exponential law of progress. Unfortunately this one goes the wrong way, with the cost of developing new drugs increasing exponentially with time. The authors focus on cancer, and try to quantify declining returns by correlating research effort, as measured by papers published, with improvements in the five-year cancer survival rate.

Again, I think the basic notion of diminishing returns is plausible, but this attempt to quantify it makes no sense at all. One obvious problem is that there are very long and variable lag times between when research is done, through the time it takes to test drugs and get them approved, to when they are in wide clinical use. To give one example, the ovarian cancer drug Lynparza was approved in December 2014, so it is conceivable that its effects might start to show up in five-year survival rates some time after 2020. But the research it was based on was published in 2005. So the hope that there is any kind of simple “production function” linking an “input” of researchers’ time with an “output” of improved health (or faster computers, or increased productivity) is a non-starter*.

The heart of the paper is the argument that an increasing number of researchers is producing fewer “ideas”. But what can they mean by “ideas”? As we all know, there are good ideas and bad ideas, profound ideas and trivial ideas, ideas that really do change the world, and ideas that make no difference to anyone. The “representative idea” assumed by the economists really isn’t helpful here, and rather than clarifying the concept in the first place, they redefine it to fit their equation, stating, with some circularity, that “ideas are proportional improvements in productivity”.

Most importantly, the value of an idea depends on the wider technological context in which it is developed. People claim that Leonardo da Vinci invented the helicopter, but even if he’d drawn an accurate blueprint of a Chinook, it would have had no value without all the supporting scientific understanding and technological innovations that were needed to make building a helicopter a practical proposition.

Clearly, at any given time there will be many ideas. Most of these will be unfruitful, but every now and again a combination of ideas will come together with a pre-existing technical infrastructure and a market demand to make a viable technology. For example, integrated circuits emerged in the 1960s, when developments in materials science and manufacturing technology (especially photolithography and the planar process) made it possible to realise monolithic electronic circuits. Driven by customers with deep pockets and demanding requirements – the US defense industry – many refinements and innovations led to the first microprocessor in 1971.

Given a working technology and a strong market demand for better versions of that technology, we can expect a period of incremental improvement, often very rapid. A constant rate of fractional improvement leads, of course, to exponential growth in quality, and that’s what we’ve seen over many decades for integrated circuits, giving us Moore’s law. The regularity of this improvement shouldn’t make us think it is automatic, though – it represents many brilliant innovations. These innovations are coordinated and orchestrated so that, in combination, the overall rate of improvement is maintained. In a sense, the rate of innovation is set by the market, and the resources devoted to innovation are increased to maintain that rate.
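
A minimal sketch of that compounding, with an improvement rate I’ve assumed so as to match a two-year doubling:

```python
# A constant fractional improvement per year compounds into exponential
# growth. The ~41% rate is an assumption, chosen to give the two-year
# doubling often quoted for Moore's law.

rate = 2 ** 0.5 - 1            # ~0.414, i.e. ~41% a year
quality = 1.0
for year in range(10):
    quality *= 1 + rate
print(quality)                 # ~32: five doublings in ten years
```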

But exponential growth can never be sustained in a physical (or biological) system – some limit must always be reached. From about 1750 to 1850, the efficiency of steam engines increased exponentially, but despite many further technical improvements, this rate of progress slowed in the second half of the 19th century – the second law of thermodynamics, through Carnot’s law, puts a fundamental upper limit on efficiency, and as that limit is approached, diminishing returns set in. Likewise, the atomic scale of matter puts fundamental limits on how far the CMOS technology of our current integrated circuits can be pushed to smaller and smaller dimensions, and as those limits are approached, we expect to see the same sort of diminishing returns.
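
Carnot’s law is easy to state; here it is as a two-line sketch, with reservoir temperatures that are illustrative assumptions rather than historical data:

```python
# Carnot's law: a heat engine running between a hot reservoir at T_hot
# and a cold one at T_cold (both in kelvin) can never beat this bound.

def carnot_limit(t_hot: float, t_cold: float) -> float:
    return 1 - t_cold / t_hot

# E.g. a boiler at ~450 K exhausting to ~300 K (assumed round numbers)
# is capped at about 33%, however ingenious the engineering:
print(carnot_limit(450, 300))   # ~0.33
```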

Economic growth didn’t come to an end in 1850 when the exponential rise in steam engine efficiencies started to level out, though. Entirely new technologies were developed – electricity, the chemical industry, the internal combustion engine powered motor car – which went through the same cycle of incremental improvement and eventual saturation.

The question we should be asking now is not whether the technologies that have driven economic growth in recent years have reached the point of diminishing returns – if they have, that is entirely natural and to be expected. It is whether enough entirely new technologies are now entering infancy, from which they can take off with the sustained incremental improvement that has driven the economy in previous technology waves. Perhaps solar energy is in that state now; quantum computing probably hasn’t got there yet, as it isn’t clear how the basic idea can be implemented, or whether there is a market to drive it.

What we do know is that growth is slowing, and has been doing so for some years. To this extent, this paper highlights a real problem. But a correct diagnosis of the ailment, and the design of useful policy prescriptions, are going to demand a much more realistic understanding of how innovation works.

* If one insists on trying to build a model, the “production function” would need to be not a simple function but a functional, integrating functions representing different types of research and development effort over long periods of time.
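
As a minimal sketch of what such a functional might look like: here today’s improvement integrates past research effort through a lag kernel. The ten-year mean lag echoes the Lynparza timeline above, but the kernel shape and every number are my assumptions:

```python
import numpy as np

# Toy "production functional": the improvement arriving at time t
# integrates past research effort R(s) weighted by a lag kernel
# K(t - s). The gamma-shaped kernel with a 10-year mean lag echoes the
# Lynparza timeline (research 2005, approval 2014) but is an assumption.

years = np.arange(1990, 2031)
effort = np.ones_like(years, dtype=float)      # flat research effort

def lag_kernel(lag):
    return lag * np.exp(-lag / 5.0) / 25.0 if lag >= 0 else 0.0

improvement = [
    sum(effort[j] * lag_kernel(t - s) for j, s in enumerate(years))
    for t in years
]
print([round(improvement[i], 2) for i in (0, 10, 40)])
# ~[0.0, 0.62, 1.0]: output only ramps up a decade or more after effort
# begins, so correlating same-year inputs and outcomes must mislead.
```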

5 thoughts on “Physical limits and diminishing returns of innovation”

  1. Interesting piece, thanks. The new technologies of the last 30 years emerged in large part from taxpayer-funded research, often in support of military aims as you say. The continued support of research by the taxpayer is easily defended on the basis of that experience alone. What else can government policy do? Easier to say what governments should avoid, like picking winners, or being captured by industry pressure groups. Easier said than done, of course.
    As an aside, economists seem, at the moment, to hold two diametrically opposed positions on productivity. One school frets about the current slowdown in productivity growth, resulting in slow growth and stagnation.
    Another school worries about the near term arrival of a massive growth in productivity as driverless cars, automated factories and artificial intelligence destroy jobs by the million in western economies in the next few years.
    Mark Carney has given speeches about both scenarios without blushing.
    Can they both be right? I can’t see how.

  2. I should add that I hope my remarks don’t come across as being critical.
    I think you are quite right in concentrating on the existing problem of low productivity growth, rather than hypothetical leaps in productivity that might or might not occur. I am sure we can burn those bridges when we come to them!

  3. I agree, it’s a puzzle that two quite contradictory views should be current, sometimes coming from the same person. The only vaguely plausible way of reconciling them is to point to the long delay, in the late 19th and early 20th centuries, between the introduction of electric power and its large-scale impact on economic growth, as an analogy for what might happen as automation and digitisation become widespread. What makes that argument doubtful, I think, is that we’re already quite a long way down the path of automation. Time will tell, but for the moment, as you say, we should address the actually existing problem rather than hypothetical problems in the future.

  4. Not having good intuitions about exponential growth or logistic growth is a terrible burden. I have a handful of Python functions to help me with this; it’s still difficult, but at least I know not to trust my intuition. Moore’s Law doubles “productivity” every 1.5 years, Rock’s Law doubles cost every 4 years, so constant-cost compute power doubles roughly every 2.4 years. This is a problem?
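
    For illustration, helpers along these lines (a sketch, with the doubling times taken from the two laws):

    ```python
    import math

    def exponential(t, x0=1.0, doubling_time=1.5):
        """Quantity at time t, doubling every `doubling_time`."""
        return x0 * 2 ** (t / doubling_time)

    def logistic(t, ceiling=1.0, midpoint=0.0, rate=1.0):
        """Grows exponentially at first, then saturates at `ceiling`."""
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # Moore (1.5-year doubling) against Rock (4-year cost doubling)
    # gives constant-cost compute doubling every 1/(1/1.5 - 1/4) years:
    print(1 / (1 / 1.5 - 1 / 4))   # ~2.4
    ```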

    One intuition breakdown for computing follows from the collapse of the $500 price barrier for PCs. I bought my latest PC for $65 (a Raspberry Pi 3) and it’s very nearly as powerful as the $500 one it replaces, although with a somewhat less mature OS. Smartphone prices have collapsed the same way, but the collapse isn’t visible to most pundits because prices are propped up in the US by quasi-monopolistic carrier pricing. In the form of the “internet of things”, this is leading to a transformation of computing as far-reaching as the transformation of manufacturing caused by the replacement of centralized, steam-driven power by decentralized fractional-horsepower electric motors.

    Solar and wind power are on the threshold of a similar transition, as the unsubsidized cost of renewable power falls below the cost of fossil-fueled electricity of any kind in many parts of the country and the world. The implications of decentralized electricity and wireless communications in less-developed regions seem radically unpredictable over decadal timespans. Economic measurements don’t appear to be equipped to capture this kind of transformation.

    A much deeper problem is embodied in the slogan “What Intel gives, Microsoft takes away”. That is, many problems involve searching through many combinations of configurations in order to find the best answer, or even any answer at all, and the number of combinations grows exponentially with the size of the problem. Molecular medicine is one of these. Problems involving social change and human organizations are another. Command economies, as military organizations discovered millennia ago and corporations more recently, use hierarchical structures to tame the complexity of coordination problems, but these structures are not so effective when bottom-up, creative problem solving is required. The same coordination costs that lead companies to slow or stop their growth when they reach a certain level of complexity also apply to entire nations. Unfortunately the creative destruction of spinoffs, mergers, and bankruptcies seems to involve wars when it occurs at national scale.
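
    A toy illustration of that combinatorial blow-up (the scoring function is arbitrary):

    ```python
    from itertools import product

    # Brute-force search over n yes/no configuration choices: each
    # extra choice doubles the space, so the effort grows as 2**n.
    def best_config(n, score):
        return max(product([0, 1], repeat=n), key=score)

    print(best_config(4, sum))              # (1, 1, 1, 1), 16 evaluations
    print([2 ** n for n in (10, 20, 30)])   # 1024, 1048576, 1073741824
    ```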

  5. Sorry about the moderating hold-up, I’ve been a bit under the weather.

    The issue with Rock’s law is that it isn’t describing the cost per transistor or the cost per IC; it’s the lumped capital cost of plant, so your calculation of the evolution of constant-cost compute power isn’t quite right. The cost per IC is surely dominated by the amortisation of the fab’s capital cost over its lifetime. If Moore’s law stopped tomorrow, this wouldn’t mean that constant-cost compute power stopped changing – in fact it would fall dramatically as fabs were kept in service longer. But in this sense I agree with you about the transformative power of the “internet of things” – ICs may not get much more powerful, but they will become dirt cheap.
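
    As a toy amortisation sketch (every figure here is assumed for illustration):

    ```python
    # Cost per chip when the fab's capital cost dominates and is
    # amortised over its service life. A $10bn fab producing 1bn chips
    # a year is an assumed round figure.
    def cost_per_chip(fab_cost, chips_per_year, service_years):
        return fab_cost / (chips_per_year * service_years)

    print(cost_per_chip(10e9, 1e9, 4))    # $2.50: fab replaced every 4 years
    print(cost_per_chip(10e9, 1e9, 20))   # $0.50: same fab run for 20 years
    ```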
