The fourth industrial revolution – this time it’s exponential!

The World Economic Forum at Davos provides a reliable barometer of conventional wisdom amongst the globalised elite, so it’s interesting this year that, amidst all the sage thoughts on refugee crises, collapsing commodity prices and world stock market gyrations, there’s concern about the economic potential, and the possible dislocations, of the fourth industrial revolution we are – it seems widely agreed – currently at the cusp of. This revolution is believed to arise from the coupling of the digital and material worlds, through robotics, the “Internet of Things”, 3D printing, and so on, together with the development of artificial intelligence to the point where it can replace the skill and judgement of highly educated and trained workers.

A report from the FT’s Izabella Kaminska of one session – Davos: Historians dream of fourth industrial revolutions – captures the flavour nicely. I’m struck by her summary of the views of the historian Niall Ferguson – “The fourth industrial revolution, Harvard’s Niall Ferguson notes, is distinctive because of its exponential rather than linear pace, not only changing what and how we do things but also potentially who we are.”

This succinctly summarises conventional wisdom, but almost every word of this statement is questionable or wrong.

Why do we talk of a fourth industrial revolution? I do think it is helpful to admit that there was more than one industrial revolution. For example, I think it is useful to stress the importance of the revolution in the second half of the 19th century (sometimes called the second industrial revolution). This brought the development of the chemical industry, electrical technology, cars and aeroplanes, all of which were supported by social innovations like technical training, research universities and industrial R&D. This phase of development is distinct from the developments in the late 18th century, of factories, especially for the spinning and weaving of textiles (the so-called first industrial revolution). This distinctiveness is neglected in many British accounts of history, perhaps because Britain didn’t unquestionably lead this “second industrial revolution”, with Germany and the USA setting the pace in many areas.

But there were other industrial revolutions (see for example this discussion and alternative periodisation from Anton Howes – “How many industrial revolutions”). I’ve discussed here the importance of an earlier revolution, starting in England in the mid-17th century, of process industries like glass making, pottery, lime making, and metallurgy, fueled by the ready availability of coal. And, expanding the definition of technology, one could talk of an earlier technological revolution from the mid-15th century, with the new financial technologies of banking and accountancy driving an expansion of trade and capital investment.

So much for history, where discussions of how many industrial revolutions there were may be inconclusive, but are at least interesting prompts for us to go back to the evidence and argue about it. But for the present and future, announcing the onset of a new industrial revolution doesn’t constitute analysis; it can only be marketing. And indeed, we’ve seen such “new industrial revolutions” announced before – with nanotechnology, then synthetic biology, to give just two examples. Marketing is useful for those with something to sell, but is not necessarily a good basis for policy-making.

If we were to accept that we were seeing a new industrial revolution, though, what is the basis for Ferguson’s assertion that the current revolution is exponential, while previous ones were linear? In fact, the earlier revolutions were exponential too. One needs to remember that exponential growth is not something magical, it occurs any time a process has a constant fractional growth rate. John Lienhard demonstrates (in The Rate of Technological Improvement before and after the 1830s) exponential growth in a number of early technologies – steam engine efficiencies between about 1750 and 1850, accuracy of mechanical clocks between 1400 and 1900, and refrigeration from 1860 to 1940, to give three examples.

The economic outcomes of these technological revolutions were exponential too – they resulted in periods of roughly exponential economic growth. Exponential economic growth is, of course, the natural consequence of a constant positive percentage growth rate – compounding. Niall Ferguson presumably once knew this, given that he used to be a historian of finance of some repute.
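To spell out the arithmetic (a textbook identity, nothing specific to any particular dataset): a constant fractional growth rate just is exponential growth, and compounding at a constant percentage rate is the discrete-time version of the same thing.

```latex
\frac{dx}{dt} = kx
\quad\Longrightarrow\quad
x(t) = x_0 e^{kt},
\qquad
t_{\text{double}} = \frac{\ln 2}{k}
```

In discrete time, $x_n = x_0(1+r)^n = x_0 e^{n\ln(1+r)}$: a steady 2% a year, for instance, doubles output in about 35 years, since $\ln 2 / \ln 1.02 \approx 35$.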

So what’s exponential about the current revolution? The most important exponential in recent economic history has been Moore’s law – the constant (and very high) rate of fractional improvement in computer power which resulted from the continual miniaturisation of the components of microelectronic circuits. Transistors are still getting smaller, and computers are still getting faster, but it’s important to realise that the exponential phase of Moore’s law has now come to an end, as exponential growth in the physical world always does.
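For a sense of scale (taking the commonly quoted figure of a roughly two-year doubling time for transistor counts – an assumption for illustration, not a number from this post): a doubling time $T$ implies an annual fractional improvement of

```latex
r = 2^{1/T} - 1 \approx 0.41
\qquad \text{for } T = 2~\text{years}
```

That is roughly 40% a year, against the 1–2% a year typical of whole-economy productivity growth – which is why this one exponential has loomed so large in recent economic history.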

Meanwhile economic growth in the developed countries, in the countries at the technology frontier, has conspicuously stopped being exponential. Growth rates are slowing, as the productivity growth that arises from technological innovation, and which is the fundamental driving force behind economic growth, has declined across the world in the last decade.

At this point, optimists about technology begin to question whether economic measures, like GDP, can capture all the benefits that current digital technology brings. In effect, the hard-nosed capitalists of the tech industry morph into dopey hippies to argue that really, money isn’t everything. What about the value consumers gain from being able, at no marginal cost, to do free searches through all the world’s information, or to call up music and entertainment on demand?

In this, of course, they are right – GDP doesn’t measure everything that matters. But why should we think this is any different to the great innovations of the past, which themselves brought huge benefits not measured in money? This argument is forcefully made by the FT’s chief economist, Martin Wolf, in a recent article: “Same as it ever was: why the Techno-optimists are wrong”.

Technology has always brought about benefits that weren’t fully captured in GDP. Think of the non-monetary value of there not being a 4-5% chance of mothers dying in childbirth, as there was in Britain before around 1930, and weigh that up against the free entertainment possibilities of the web.

As for the new technologies being different because they change, not just what we do, but who we are, all that this illustrates is the bleeding of transhumanist rhetoric into the mainstream that I criticise in my ebook Against Transhumanism: the delusion of technological transcendence. It’s a wish that some people have, that technologies will allow them to transcend the limitations of their human nature (and most notably, the limitation of mortality). What is yet to be proven is that the new technologies are any more capable of fulfilling that (admittedly powerful) wish than the previous ones.

Yet I remain optimistic about the potential of technology. The technological developments that underlie this “fourth industrial revolution” excitement are real, though they are sometimes not as new as is being portrayed, and their effect on the broader economy so far remains disappointing. I don’t entirely accept the pessimistic case made by Tyler Cowen and others that slow technological progress is inevitable because we’ve already taken the “low hanging fruit”. The problem is that fast progress in some areas (the combination of mobile communication with large databases that constitutes the core of the so-called “tech sector”, especially) obscures, but doesn’t counteract, the glacially slow progress we are making in other areas – including areas that matter to us a great deal, like the development of sustainable, scalable energy technologies.

There is no fourth industrial revolution. Technological progress continues, in some areas it moves fast, in other areas it moves much more slowly, despite our society’s most pressing needs. Which technologies move fast, and which we neglect and allow to stagnate, are the results of the political and social choices we make, often tacitly. We might make better choices if our discussions of technology weren’t conducted entirely in terms of tired clichés.

Science, productivity and the spending review

This post appears on the blog of the Campaign for Science and Engineering, as part of a series in the run-up to the UK government’s comprehensive spending review arguing for the value of science spending. As with my earlier pieces, supporting statistics and references can be found in my submission to the BIS select committee productivity inquiry, Innovation, research, and the UK’s productivity crisis (PDF).

Stagnating productivity is one of the biggest problems the UK faces, and it’s the most compelling reason why, despite a tight fiscal climate, the science and innovation budget should be preserved (and ideally, increased).

It’s clear that the government regards deficit reduction as its highest priority, and in pursuit of that, all areas of public spending, including the science budget, are under huge pressure. But the biggest threat to the government’s commitments on deficit reduction may not be the difficulty in achieving departmental spending cuts – it is the possibility that the current slowdown in productivity growth, unprecedented in recent history, continues.

Over many decades, labour productivity in the UK (the amount of GDP produced per hour of labour input) has increased at a steady rate. After the financial crisis in 2008, that steady increase came to an abrupt halt, since when it has flat-lined, and is now at least 15% below the pre-crisis trend. The UK’s productivity performance was already weaker than competitors like the USA, and since the crisis this gap with competitors has opened up yet further. If productivity growth does not improve, the government will miss all its fiscal targets and living standards will continue to stagnate.
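To make “below the pre-crisis trend” concrete, here is a minimal sketch of the usual calculation: fit a log-linear trend to the pre-2008 data, extrapolate it, and measure the shortfall of the latest observation. The series below is an illustrative stand-in, not the ONS data:

```python
import numpy as np

def gap_vs_trend(years, productivity, break_year=2008):
    """Percentage shortfall of the latest observation against a
    log-linear trend fitted to the pre-break_year data."""
    years = np.asarray(years, dtype=float)
    log_p = np.log(np.asarray(productivity, dtype=float))
    pre = years < break_year
    slope, intercept = np.polyfit(years[pre], log_p[pre], 1)
    trend_log = slope * years[-1] + intercept  # extrapolated trend, in logs
    return 100 * (1 - np.exp(log_p[-1] - trend_log))

# Stand-in series: steady ~2%/year growth to 2008, flat thereafter.
yrs = list(range(1997, 2016))
prod = [100 * 1.02 ** min(y - 1997, 11) for y in yrs]
print(f"Gap vs pre-crisis trend: {gap_vs_trend(yrs, prod):.0f}%")  # ~13%
```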

Productivity growth, fundamentally, arises from innovation in its broadest sense. The technological innovation that arises from research and development is a part of this, so in searching for the reasons for our weak productivity growth we should look at the UK’s weak R&D investment. This is not the only contributory factor – in recent years, the decline in North Sea oil and a decline in productivity in the financial services sector following the financial crisis provide a headwind that’s not going to go away. This means that we’ll have to boost innovation in other sectors of the economy – like manufacturing and ICT – even more, just to get back to where we were. R&D isn’t the only source of innovation in these and other sectors, but it’s the area in which the UK, compared to its competitors, has been the weakest.

The UK has, for many years, underinvested in R&D – both in the public and the private sectors – compared to traditional competitors like France, Germany and the USA. In recent years, Korea has emerged as the most R&D intensive economy in the world, while China overtook the UK in R&D intensity a few years ago. This is a global race that we’re not merely lagging behind in; we’re running in the wrong direction.

In the short term of an election cycle, it’s probably private sector R&D that has the most direct impact on productivity growth. The UK’s private sector R&D base is not only proportionately smaller than our competitors’; more than half of it is done by overseas owned companies – a uniquely high proportion for such a large economy. It is a very positive sign of the perceived strength of the UK’s research base that overseas companies are so willing to invest in R&D here. But such R&D is footloose.

Much evidence shows that public sector R&D spending “crowds in” substantial further private sector R&D. The other side of that coin is that continuing – or accelerating – the erosion of public investment in R&D that we’ve seen in recent years will lead to a loss of private sector R&D, further undermining our productivity performance. The timescale over which these changes could unfold could be uncomfortably fast.

These are the short-term consequences of the neglect of research, but the long-term effects are potentially even more important, and this is something that politicians concerned about their legacy might want to reflect on. The big problems that society faces, and that future governments will have to grapple with – running a health service with a rapidly ageing population, ensuring an affordable supply of sustainable, low carbon energy, to give just two examples – will need all the creativity and ingenuity that science, research and innovation can bring to bear.

The future is unpredictable, so there’ll undoubtedly be new problems to face, and new possibilities to exploit. A strong and diverse science base will give our society the resilience to handle these. So a government that was serious about building the long-term foundations for the continuing health and prosperity of the nation would be careful to ensure the health of the research base that will underpin those necessary innovations.

One way of thinking of our current predicament is that we’re doing an experiment to see what happens if you try to run a large economy at the technology frontier with an R&D intensity about a third smaller than key competitors. The outcome of that experiment seems to be clear. We have seen a slowdown in productivity growth that has persisted far longer than economists and the government expected, and this in turn has led to stagnating living standards and disappointing public finances. Our weak R&D performance isn’t the only cause of these problems, but it is perhaps one of the easiest factors to put right. This is an experiment we should stop now.

Innovation, research and the UK’s productivity crisis (the shorter version)

I have a much shorter version of my earlier three-part series (PDF version here) on the connection between the UK’s weak and worsening R&D performance and its current productivity standstill on HEFCE’s blog: Innovation, research and the UK’s productivity crisis.

The same piece has also been published on the blog of the Sheffield Political Economy Research Institute: Continuing on our current path of stagnating productivity and stagnating innovation isn’t inevitable: it’s a political choice, and it also appears on the web-based economics magazine Pieria.

The longer and more detailed post also formed the basis for my written evidence to the House of Commons Business Innovation and Skills Select Committee, which is currently inquiring into the productivity problem: On productivity and the government’s productivity plan (PDF).

Finally, here’s another graphical representation of the productivity problem in historical context, using the latest version of the Bank of England’s historical dataset “Three centuries of macroeconomic data”. It shows the total growth in hourly labour productivity over the preceding seven years; on this measure the current productivity slow-down is worse than that associated with two world wars and a great depression.


Seven year growth in hourly labour productivity. Data from Hills, S, Thomas, R and Dimsdale, N (2015) “Three Centuries of Data – Version 2.2”, Bank of England.
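For anyone wanting to reproduce this measure from an annual hourly-productivity series, the calculation is just total growth over the preceding seven years. A minimal sketch with stand-in numbers (the real series would come from the Bank of England spreadsheet cited above):

```python
import pandas as pd

def seven_year_growth(productivity: pd.Series) -> pd.Series:
    """Total percentage growth over the preceding seven years."""
    return 100 * (productivity / productivity.shift(7) - 1)

# Stand-in data: an annual index of hourly labour productivity.
index_values = [100, 102, 104, 106, 109, 111, 113, 115, 115, 115, 115]
prod = pd.Series(index_values, index=range(2005, 2016))
print(seven_year_growth(prod).dropna().round(1))
```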

The wrong direction

How does the UK compare with other leading research intensive economies, and how has its relative position changed in recent years? The graph below is an attempt to answer both questions graphically, separating out the contributions of both the public sector and the private sector to the overall R&D intensity of the economy as a proportion of GDP, and illustrating the trajectories of this expenditure since 2008. The UK stands out as having begun the period with a weak R&D performance, and since then it has gone in the wrong direction.

Figure: government and industry contributions to gross expenditure on R&D (GERD) over time.

Plotting both the private sector and public sector contributions to national R&D efforts stresses that there is a positive correlation between the two – public sector R&D tends to “crowd in” private R&D spending (1). Across the OECD on average, the private sector spends roughly twice as much on R&D as does the public sector, though in East Asian countries the private sector does more. The UK is substantially less R&D intensive than major competitors, and both public and private sectors contribute to this weak performance.

We can see some different trajectories in recent years. China and Korea stand out by their large increases in both private sector and public sector R&D intensity.

Did the government build the iPhone? Would the iPhone have happened without governments?

The iPhone must be one of the most instantly recognisable symbols of the modern “tech economy”. So, it was an astute choice by Mariana Mazzucato to put it at the centre of her argument about the importance of governments in driving the development of technology. Mazzucato’s book – The Entrepreneurial State – argues that technologies like the iPhone depended on the ability and willingness of governments to take on technological risks that the private sector is not prepared to assume. She notes also that it is that same private sector which captures the rewards of the government’s risk taking. The argument is a powerful corrective to the libertarian tendencies and the glorification of the free market that is particularly associated with Silicon Valley.

Her argument could, though, be caricatured as saying that the government built the iPhone. But to put it this way would be taking the argument much too far – the contributions, not just of Apple, but of many other companies in a worldwide supply chain that have developed the technologies that the iPhone integrates, are enormous. The iPhone was made possible by the power of private sector R&D, the majority of it not in fact done by Apple, but by many companies around the world, companies that most people have probably not even heard of.

And yet, this private sector R&D was indeed encouraged, driven, and indeed sometimes funded outright, by government (in fact, more than one government – although the USA has had a major role, other governments have played their parts too in creating Apple’s global supply chain). It drew on many results from publicly funded research, in Universities and public research institutes around the world.

So, while it isn’t true to say the government built the iPhone, what is true is to say that the iPhone would not have happened without governments. We need to understand better the ways government and the private sector interact to drive innovation forward, not just to get a truer picture of where the iPhone came from, but in order to make sure we continue to get the technological innovations we want and need.

Integrating technologies is important, but innovation in manufacturing matters too

The iPhone (and the modern smartphone more generally) is, truly, an awe-inspiring integration of many different technologies. It’s a powerful computer, with an elegant and easy to use interface, it’s a mobile phone which connects to the sophisticated, computer driven infrastructure that constitutes the worldwide cellular telephone system, and through that wireless data infrastructure it provides an interface to powerful computers and databases worldwide. Many of the new applications of smartphones (as enablers, for example, of the so-called “sharing economy”) depend on the package of powerful sensors they carry – to infer its location (the GPS unit), to determine what is happening to it physically (the accelerometers), and to record images of its surroundings (the camera sensor).

Mazzucato’s book traces back the origins of some of the technologies behind the iPod, like the hard drive and the touch screen, to government funded work. This is all helpful and salutary to remember, though I think there are two points that are underplayed in this argument.

Firstly, I do think that the role of Apple itself (and its competitors), in integrating many technologies into a coherent design supported by usable software, shouldn’t be underestimated – though it’s clear that Apple in particular has been enormously successful in finding the position that extracts maximum value from physical technologies that have been developed by others.

Secondly, when it comes to those physical technologies, one mustn’t underestimate the effort that needs to go in to turn an initial discovery into a manufacturable product. A physical technology – like a device to store or display information – is not truly a technology until it can be manufactured. To take an initial concept from an academic discovery or a foundational patent to the point at which one has a working, scalable manufacturing process involves a huge amount of further innovation. This process is expensive and risky, and the private sector has often proved unwilling to bear these costs and risks without support from the state, in one form or another. The history of some of the many technologies that are integrated in devices like the iPhone illustrates the complexities of developing technologies to the point of mass manufacture, and shows how the roles of governments and the private sector have been closely intertwined.

For example, the ultraminiaturised hard disk drive that made the original iPod possible (now largely superseded by cheaper, bigger, flash memory chips) did indeed, as pointed out by Mazzucato, depend on the Nobel prize-winning discovery by Albert Fert and Peter Grünberg of the phenomenon of giant magnetoresistance. This is a fascinating and elegant piece of physics, which suggested a new way of detecting magnetic fields with great sensitivity. But to take this piece of physics and devise a way of using it in practice to create smaller, higher capacity hard disk drives, as Stuart Parkin’s group at IBM’s Almaden Laboratory did, was arguably just as significant a contribution.

How liquid crystal displays were developed

The story of the liquid crystal display is even more complicated.

Does radical innovation best get done by big firms or little ones?

A recent blogpost by the economist Diane Coyle quoted JK Galbraith as saying in 1952: “The modern industry of a few large firms is an excellent instrument for inducing technical change. It is admirably equipped for financing technical development and for putting it into use. The competition of the competitive world, by contrast, almost completely precludes technical development.” Coyle describes this as “complete nonsense”: “big firms tend to do incremental innovation, while radical innovation tends to come from small entrants.” This is certainly conventional wisdom now – but it needs to be challenged.

As a point of historical fact, what Galbraith wrote in 1952 was correct – the great, world-changing innovations of the postwar years were indeed the products, not of lone entrepreneurs, but of the giant R&D departments of big corporations. What is true is that in recent years we’ve seen radical innovations in IT which have arisen from small entrants, of which Google’s search algorithm is the best known example.

But we must remember two things. Digital innovations like these don’t exist in isolation – they only have an impact because they can operate on a technological substrate which isn’t digital, but physical. The fast, small and powerful computers, and the world-wide communications infrastructure, that digital innovations rely on were developed, not in small start-ups, but in large, capital intensive firms. And many of the innovations we urgently need – in areas like affordable low carbon energy, grid-scale energy storage, and healthcare for ageing populations – will not be wholly digital in character.

Technologies don’t all proceed at the same pace (as I discussed in an earlier post – Accelerating change or innovation stagnation). In focusing on the digital domain, in which small entrants can indeed achieve radical innovations (as well as some rather trivial ones), we’re in danger of failing to support the innovation in the material and biological domains, which needs the long-term, well-resourced development efforts that only big organisations can mobilise. The outcome will be a further slowing of economic growth in the developed world, as innovation slows down and productivity growth stalls.

So what were the innovations that the sluggish big corporations of the post-war world delivered? Jet aircraft, antibiotics, oral contraceptives, transistors, microprocessors, Unix, optical fibre communications and mobile phones are just a few examples.

Growth, technological innovation, and the British productivity crisis

The biggest current issue in the UK’s economic situation is the continuing slump in productivity. It’s this poor productivity performance that underlies slow or no real wage growth, and that also contributes to disappointing government revenues and consequent slow progress reducing the government deficit. Yet the causes of this poor productivity performance are barely discussed, let alone understood. In the long-term, productivity growth is associated with innovation and technological progress – have we stopped being able to innovate? The ONS has recently released a set of statistics which potentially throw some light on the issue. These estimates of total factor productivity – productivity controlled for inputs of labour and capital – make clear the seriousness of the problem.

Total factor productivity relative to 1994, whole economy, ONS estimates.

Here are the figures for the whole economy. They show that, up to 2008, total factor productivity grew steadily at around 1% a year. Then it precipitously fell, losing more than a decade’s worth of growth, and it continues to fall. This means that each year since the financial crisis, on average we have had to work harder or put in more capital to achieve the same level of economic output. A simple-minded interpretation of this would be that, rather than seeing technological progress being reflected in economic growth, we’re going backwards, we’re technologically regressing, and the only economic growth we’re seeing is because we have a larger population working longer hours.

Of course, things are more complicated than this. Many different sectors contribute to the economy – in some, we see substantial innovation and technological progress, while in others the situation is not so good. It’s the overall shape of the economy, the balance between growing and stagnating sectors, that contributes to the whole picture. The ONS figures do begin to break down total factor productivity growth into different sectors, and this begins to give some real insight into what’s wrong with the UK’s economy and what needs to be done to right it. Before I come to those details, I need to say something more about what’s being estimated here.
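In outline, what is being estimated is the “Solow residual” of standard growth accounting: the part of output growth left over once measured growth in labour and capital inputs has been accounted for. In stylised form (the ONS implementation is more elaborate, with quality-adjusted inputs, but this is the core identity):

```latex
\Delta \ln A \;=\; \Delta \ln Y \;-\; \alpha\,\Delta \ln K \;-\; (1-\alpha)\,\Delta \ln L
```

where $Y$ is output, $K$ capital input, $L$ labour input, $\alpha$ is capital’s share of income, and $A$ is the total factor productivity residual.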

Where does sustainable, long term economic growth come from?

Science, Politics, and the Haldane Principle

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has, not just a right, but a duty, to set strategic directions and priorities for the science the government funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, for what are, by definition, the uncertain outcomes of research.

The key point to recognise about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition.

Lecture on responsible innovation and the irresponsibility of not innovating

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation within a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost, but in the meantime here is a PDF of my slides.