Technological innovation in the linear age

We’re living in an age where technology is accelerating exponentially, but people’s habits of thought are stuck in an age where progress was only linear. This is the conventional wisdom of the futurists and the Davos-going classes – but it’s wrong. It may have been useful to say this 30 years ago: then we were just starting on an astonishing quarter century of year-on-year exponential increases in computing power. In fact, the conventional wisdom is doubly wrong – now that that exponential growth in computing power has come to an end, the people who lived through that atypical period are perhaps the least well equipped to deal with what comes next. The exponential age of computing power that the combination of Moore’s law and Dennard scaling gave us came to an end in the mid-2000s, but technological progress will continue. The character of that progress will be different, though – dare I say it, less exponential, more linear. If you need more computing power now, you can’t simply wait a year or two for Moore’s law to do its work; you’re much more likely to add another core to your CPU, or another server to your data centre. This transition will have big implications for business and our economy, and I don’t see these being taken very seriously yet.

Just how much faster have computers got? According to the standard textbook on computer architecture, a high-end microprocessor today has nearly 50,000 times the performance of a 1978 mini-computer, at perhaps 0.25% of the cost. But the rate of increase in computing power hasn’t been uniform. A remarkable plot in this book – Computer Architecture: A Quantitative Approach (6th edn) by John Hennessy & David Patterson – makes this clear.
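
Taking those two numbers at face value, the improvement in price-performance they imply is worth spelling out:

```latex
\frac{(\text{performance}/\text{cost})_{\text{today}}}{(\text{performance}/\text{cost})_{1978}}
  \;\approx\; \frac{50\,000}{0.0025} \;=\; 2\times 10^{7}
```

That is roughly twenty million times more computing per unit cost.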

In the early stages of the microprocessor revolution, between 1978 and 1986, computing power was increasing at a very healthy 25% a year – a doubling time of 3 years. It was around 1986 that the rate of change really took off – between 1986 and 2003 computer power increased at an astonishing 52% a year, a doubling time of just a year and a half.

This pace of advance was checked in 2004. The rapid advance had come from the combination of two mutually reinforcing factors. The well-known Moore’s law dictated the pace at which the transistors in microprocessors were miniaturised; more transistors per chip give you more computing power. But there was a less well-known factor reinforcing this – Dennard scaling – which says that as transistors are made smaller they can be switched faster, without any increase in the power dissipated per unit area of chip, so clock speeds could keep on rising. It was this second factor, Dennard scaling, that broke down around 2004, as I discussed in a post last year.
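
For concreteness, the textbook statement of Dennard scaling goes roughly like this (a sketch, not the post’s own formulation; the exact exponents depend on the details of how the scaling is done): shrink a transistor’s linear dimensions and its supply voltage by a factor $1/\kappa$, and then

```latex
\text{gate delay} \propto \tfrac{1}{\kappa}
  \;\Rightarrow\; \text{clock frequency} \propto \kappa, \qquad
\text{power per transistor} \propto \tfrac{1}{\kappa^{2}}, \qquad
\text{area per transistor} \propto \tfrac{1}{\kappa^{2}},
```

so the power dissipated per unit area of silicon stays constant even as the chip gets faster. Roughly speaking, it was the inability to keep scaling the voltage down (leakage currents become too large) that broke this happy arrangement in the mid-2000s.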

With Moore’s law still in operation, but Dennard scaling at an end, computer power continued to grow between 2003 and 2011, but at the slower rate of 23% a year – back to a 3-year doubling time. After 2011, according to Hennessy and Patterson, the growth rate slowed further – down to 3.5% a year since 2015. In principle, this corresponds to a doubling time of 20 years – but, as we’ll see, even that is unlikely to happen.
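
The doubling times quoted in the last few paragraphs follow directly from the annual growth rates; a few lines of Python (a minimal sketch, using only the figures quoted above from Hennessy & Patterson) make the arithmetic explicit:

```python
import math

# Annual growth rates in single-processor performance, as quoted by
# Hennessy & Patterson for each era of the microprocessor age.
eras = {
    "1978-1986": 0.25,   # 25% a year
    "1986-2003": 0.52,   # 52% a year
    "2003-2011": 0.23,   # 23% a year
    "2015-now":  0.035,  # 3.5% a year
}

for era, rate in eras.items():
    doubling_time = math.log(2) / math.log(1 + rate)
    print(f"{era}: {rate:.1%} a year -> doubling time {doubling_time:.1f} years")

# 1978-1986: 25.0% a year -> doubling time 3.1 years
# 1986-2003: 52.0% a year -> doubling time 1.7 years
# 2003-2011: 23.0% a year -> doubling time 3.3 years
# 2015-now: 3.5% a year -> doubling time 20.1 years
```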

This is a generational change in the environment for technological innovation, and as I discussed in my previous post, I’m surprised that its economic implications aren’t being discussed more. There have been signs of this stagnation in everyday life – I think people are much more likely to think twice about replacing their four-year-old laptop, say, than they were a decade ago, as the benefits of upgrading get less obvious. But the stagnation has also been disguised by the growth of cloud computing.

The impressive feats of pattern recognition that allow applications like Alexa and Siri to recognise and respond to voice commands provide a good example of the way personal computing devices give the user the impression of great computing power, when in fact the intensive computation that these applications rely on takes place not in the user’s device, but “in the cloud”. What “in the cloud” means, of course, is that the computation is carried out by the warehouse-scale computers that make up the cloud providers’ server farms.

The end of the era of exponential growth in computing power does not, of course, mean the end of innovation in computing. Rather than relying on single, general-purpose CPUs to carry out many different tasks, we’ll see many more integrated circuits built with bespoke architectures optimised for specific purposes. Graphics processing units are one example: originally developed to drive higher-quality video displays, these very powerful chips have proved well suited to the highly parallel computations of machine learning. And without automatic speed gains from progress in hardware, there’ll need to be much more attention given to software optimisation.

What will the economic implications of moving into this new era be? The economics of producing microprocessors will change. The cost of CPUs at the moment is dominated by the amortisation of the huge capital cost of the plants needed to make them. Older plants, whose capital costs have already been written off, will find their lives prolonged, so the cost of CPUs a generation or two behind the leading edge will plummet. This collapse in the price of CPUs will be a big driver for the “internet of things”. And it will lead to the final end of Moore’s law, as the cost of each new generation becomes prohibitive, squeezed between the collapsing price of less advanced processors and diminishing returns in performance at the leading edge.

In considering the applications of computers, habits learnt in earlier times will need to be rethought. In the golden age of technological acceleration, between 1986 and 2003, if one had a business plan that looked plausible in principle but that relied on more computer speed than was currently available, one could argue that another few cycles of Moore’s law would soon sort out that difficulty. At the rates of technological progress in computing prevailing then, you’d only need to wait five years or so for the available computing power to increase by a factor of ten.
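
To spell out the arithmetic behind that “five years or so”: at 52% a year, the wait for a tenfold increase in computing power is

```latex
t_{\times 10} \;=\; \frac{\ln 10}{\ln 1.52} \;\approx\; 5.5 \ \text{years},
```

whereas at 3.5% a year the same factor of ten would take $\ln 10 / \ln 1.035 \approx 67$ years.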

That’s not going to be the case now. A technology that is limited by the availability of local computing power – as opposed to computer power in the cloud – will only be able to surmount that hurdle by adding more processors, or by waiting for essentially linear growth in computer power. One example of an emerging technology that might fall into this category would be truly autonomous self-driving vehicles, though I don’t know myself whether this is the case.
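
One standard way of seeing why “just add more processors” is not a full substitute for faster processors – not something discussed above, but it fits the argument – is Amdahl’s law: the speed-up from parallel hardware is capped by whatever fraction of the work remains stubbornly serial. A minimal sketch in Python, with a made-up workload for illustration:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Maximum speed-up when only `parallel_fraction` of the work
    can be spread across `n_processors` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Illustrative only: a workload that is 90% parallelisable.
for n in (2, 4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.90, n), 1))

# 2 1.8
# 4 3.1
# 16 6.4
# 64 8.8
# 1024 9.9
```

However many processors you add, the serial 10% in this example caps the speed-up at a factor of ten – one reason the shift to multicore was so disruptive for software.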

The more general macroeconomic implications are even less certain. One might be tempted to associate the marked slowing in productivity growth that the developed world saw in the mid-2000s with the breakdown in Dennard scaling and the end of the fastest period of growth in computer power, but I’m not confident that this stacks up, given that the widespread roll-out of existing technology, coupled with much greater connectivity through broadband and mobile, was still happening at that time. That roll-out, of course, has still got further to go.

This paper – by Neil Thompson – does attempt to quantify the productivity cost to ICT-using firms of the end of Dennard scaling in 2004, finding a permanent hit to total factor productivity of between 0.5 and 0.7 percentage points for those firms that were unable to adapt their software to the new multicore architectures introduced at the time.

What of the future? It seems inconceivable that the end of the biggest driving force in technological progress over the last forty years would not have some significant macroeconomic impact, but I have seen little or no discussion of this from economists (if any readers know different, I would be very interested to hear about it). This seems to be a significant oversight.

Of course, it is the nature of all periods of exponential growth in particular technologies to come to an end, when they run up against physical or economic limits. What guarantees continued economic growth is the appearance of entirely new technologies. Steam power grew in efficiency exponentially through much of the 19th century, and when that growth levelled out (as it ran up against the thermodynamic limits set by Carnot’s theorem) new technologies – the internal combustion engine and electric motors – came into play to drive growth further. So what new technologies might take over from silicon CMOS based integrated circuits to drive growth from here?

To restrict the discussion to computing, there are at least two ways of trying to look to the future. We can look at those areas where the laws of physics permit further progress, and where the economic demand to drive that progress is present. One obvious deficiency of our current computing technology is its energy efficiency – or lack of it. There is a fundamental physical limit on the energy consumption of computing – the Landauer limit – and we’re currently orders of magnitude away from it. So there’s plenty of room at the bottom here, as it were – and, as I discussed in my earlier post, if we are to increase the world’s available computing power simply by building more data centres using today’s technology, before long this will consume a significant fraction of the world’s energy supply. So much lower-power computing is both physically possible and economically (and environmentally) needed.
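
To put a number on “orders of magnitude”: the Landauer limit is kT ln 2 of dissipated energy per bit erased, a few zeptojoules at room temperature. The figure used below for today’s hardware is only my own order-of-magnitude guess, for illustration – real numbers vary widely with technology and with how you count an “operation”:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

landauer = k_B * T * math.log(2)          # ~2.9e-21 J per bit erased

# Rough, illustrative figure for a current CMOS logic operation;
# real numbers vary widely with technology and how you count.
energy_per_op_today = 1e-15               # ~1 femtojoule (assumption)

print(f"Landauer limit at 300 K: {landauer:.2e} J per bit")
print(f"Gap to today's (assumed) ~1 fJ/op: about {energy_per_op_today / landauer:.0e}x")

# Landauer limit at 300 K: 2.87e-21 J per bit
# Gap to today's (assumed) ~1 fJ/op: about 3e+05x
```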

We can also look at those technologies that currently exist only in the laboratory, but which look like they have a fighting chance of moving to commercial scale sometime soon. Here the obvious candidate is quantum computing; there really does seem to be a groundswell of informed opinion that quantum computing’s time has come. In physics labs around the world there’s a real wave of excitement at the point where condensed matter physics meets nanotechnology – in the superconducting properties of nanowires, for example. Experimentalists are chasing the predicted existence of a whole zoo of quasi-particles (that is, quantised collective excitations) with interesting properties, with topics such as topological insulators and Majorana fermion states now enormously fashionable. The fact that companies such as Google and Microsoft have been hoovering up the world’s leading research groups in this area gives further cause to suspect that something might be going on.

The consensus among the experts on quantum computing that I’ve spoken to is that it isn’t going to lead soon to new platforms for general-purpose computing (not least because the leading candidate technologies still need liquid helium temperatures), but that it may give users a competitive edge in specialised uses such as large database searches and cryptography. We shall see (though one might want to hesitate before making big long-term bets which rely on current methods of cryptography remaining unbreakable – some cryptocurrencies, for example).
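
For what it’s worth, those two specialised uses correspond to the two best-known quantum algorithms, which is presumably why they get singled out: Grover’s algorithm for unstructured search, and Shor’s algorithm for the factoring problem that underpins much current public-key cryptography. Their advertised scalings are

```latex
\text{Grover search: } O(\sqrt{N}) \text{ queries, vs. } O(N) \text{ classically;} \qquad
\text{Shor factoring: polynomial in } \log N \text{, vs. sub-exponential classically.}
```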

Finally, one should not forget that information and computing aren’t the only places where innovation happens – a huge amount of economic growth was driven by technological change before computers were invented, and perhaps innovation outside information technology will drive another future wave of economic growth.

For now, what we can say is that the age of exponential growth of computer power is over. It gave us an extraordinary 40 years, but in our world all exponentials come to an end, and we’re now firmly in the final stage of the s-curve. So, until the next thing comes along, welcome to the linear age of innovation.

3 thoughts on “Technological innovation in the linear age”

  1. > The exponential age of computing power that the combination
    > of Moore’s law and Dennard scaling gave us, came to an end
    > in the mid-2000’s. . .
    >
    > Of course, it is the nature of all periods of exponential growth
    > in particular technologies to come to an end. . . What guarantees
    > continued economic growth is the appearance of entirely new
    > technologies. . .
    >
    > Here the obvious candidate is quantum computing. . .

    Yes, if you believe in the sort of “generalized Moore’s Law”
    posited by Kurzweil et al. as the path to the technological Singularity —
    the shifts in the basis of computation from the abacus to the
    mechanical calculator, to electromechanical relays, to vacuum
    tubes, to discrete transistors, to integrated circuits (SSI, MSI,
    LSI, VLSI and ULSI) — then now would be a good time for the
    next “paradigm shift” to emerge. Something, you know, highly
    parallel, 3D space-filling, and energy-efficient. Maybe a
    tad more analog than digital, like those “brain” things the
    neuroscientists keep nattering on about.

    “And now, my beauty, something with Nano in it, I think.
    Something with Nano in it, but attractive to the eye, and soothing
    to the smell. . .”

    ;->

  2. Hi Richard,
    I think that the basic economic problem of the second half of the 20th century was energy prices! When energy prices fluctuate, this has been one of the causes of worldwide recessions, as you can see from searching the internet.

    This means that if renewables are able to keep on falling in price (see Saudi Arabia’s record of 1.79 cents/kWh in 2017!), we would be in a new age with stable energy prices forever! This would allow the world economy to integrate the Third World in the next 50 years at most. No need for exponential technologies! Hurray for Solar Nanotech!

    Also, in chip design, do not give up yet. 3D chips could solve their cooling problems in the next 20 years, leading to an effective 1000-fold increase in computing power. This would mean that chips would have the power of a cat’s brain, and supercomputers human-level computing!

    After that, if AI is to be believed, this would lead to a revolution in how Science is done! Who knows after that!

    Thanks for your blog

  3. “So, until the next thing comes along, welcome to the linear age of innovation.”
    Lots of good things were accomplished in previous linear ages. If “linear age” means going back to the Moon to build a permanent presence, and then onward to the planets, here’s to linearity.
