Innovation, research, and the UK’s productivity crisis – part 3

August 27th, 2015

This is the third and final post in a series of three. The first part is here, and this follows on directly from part 2.

(Added 2/9/2015: For those who dislike the 3-part blog format, the whole article can be downloaded as a PDF here: Innovation, research and development, and the UK’s productivity crisis).

Quantifying the productivity benefits of research and development

The UK’s productivity problem is an innovation problem. This conclusion follows from the analysis of Goodridge, Haskel and Wallis, at least if one equates the economist’s construction of total factor productivity with innovation. This needs some qualification, because when economists talk about innovation in this context they mean anything that allows one to produce more economic output with the same inputs of labour and capital. So this can result from the development of new high value products or new, better processes to make existing products. Such developments are often, but not always, the result of formal research and development.

But there are many other types of innovation. People continually work out better ways of doing things, either as a result of formal training or simply by learning from experience; they act on suggestions from users, they copy better practices from competitors, they see new technologies in action in other sectors and apply them in their own, they work out more effective ways of organising and distributing their work; all these lead to total factor productivity growth and count as innovation in this sense.

There has been a tendency to underplay the importance of formal research and development in recent thinking about innovation, particularly in the UK.

Innovation, research, and the UK’s productivity crisis – part 2

August 25th, 2015

This is the second post in a series of three, and it continues directly from part 1.

Analysing the UK’s productivity slow-down

There are many theories of why the UK’s productivity growth has stalled, and in the absence of proper analysis it’s all too easy to choose a favoured hypothesis on the basis of anecdotes or a single data point, picked out to fit one’s ideological predilections. Indeed, I could be accused of doing just that, by drawing attention to the UK’s weak R&D record; others might immediately start looking at a lack of competitiveness in the economy, or insufficient deregulation, as the root of the issue. But it would be surprising if such a striking occurrence had just a single cause, so a more careful analysis should help us not just by ruling possible causes in or out, but by ascribing different weights to multiple causes.

A better analysis needs both to consider what we mean by productivity and its different causes in more detail, and to look at the economy on a finer scale, looking both at the productivity performance of different sectors and the balance in the economy between those different sectors.

Innovation, research and development, and the UK’s productivity crisis – part 1

August 24th, 2015

This is the first of a series of three posts in which I bring together some thinking and reading I’ve been doing about the UK’s current productivity problem, and its relationship to innovation and to research and development.

(Added 2/9/2015: For those who dislike the 3-part blog format, the whole article can be downloaded as a PDF here: Innovation, research and development, and the UK’s productivity crisis)

In part 1, here, I take stock of the scale of the UK’s productivity problem and discuss why it matters so much, both economically and politically. Then I’ll set the context for the following discussion with a provocative association between productivity growth and R&D intensity.

In part 2, I’ll review what can be said with more careful analyses of productivity, looking at the performance of individual sectors and testing some more detailed explanations of the productivity slowdown. I’ll pick out the role of declining North Sea oil and gas and the end of the financial services bubble in the UK’s poor recent performance; these don’t explain the whole problem, but they will provide a headwind that the economy will have to overcome over the coming years.

Finally in part 3 I’ll return to a more detailed discussion of innovation in general and the particular role of R&D, finishing with some thoughts about what should be done about the problem.

The scale of the UK’s productivity problem

The current stalling of productivity growth is probably the UK’s most serious economic problem. In terms of output per hour, the last five years’ productivity performance has been by far the worst period in the last 45 years. Many other developed economies have had disappointing productivity growth in recent years, but the UK’s record is particularly bad. Amongst other developed economies, only Luxembourg and Greece have done worse since 2007, according to a recent OECD report on the future of productivity (see table A2, p83).

Labour productivity since 1970. The fit is an exponential corresponding to constant growth of 2.3% a year. ONS data.

My plot shows the UK’s labour productivity – defined as the GDP generated per hour worked – since 1971.
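The trend line in the plot corresponds to constant growth of 2.3% a year. A minimal sketch of what that trend implies (an illustration of the arithmetic only, not the ONS calculation or data):

```python
import math

# A productivity index growing at a constant 2.3% a year, as in the fitted trend.
g = 0.023
index = [100 * (1 + g) ** t for t in range(46)]  # 46 years of growth, base year = 100

# At this rate, output per hour roughly doubles every three decades.
doubling_time = math.log(2) / math.log(1 + g)  # about 30.5 years
print(f"index after 45 years: {index[-1]:.1f}, doubling time: {doubling_time:.1f} years")
```

On this trend, productivity almost triples over the 45-year span of the plot; it is the post-2008 departure from that trend which makes the recent data so striking.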

I chose to graduate

August 20th, 2015

I’m sure there are some people who, very early on in their lives, work out what they want to do and then set out single-mindedly to achieve their aims. For the rest of us, choices are made and paths are set without us really being conscious of those junctions, so we look back and wonder how it was that our lives unfolded in this way and not in another. And yet, looking back, we sometimes can see moments, or short periods, that were decisive in setting us down one path and cutting off other possibilities. For me, the summer of 1982 was the time that determined that I was going to end up being a scientist, and to some extent what sort of scientist I would end up being, though I don’t suppose that at the time I had any idea this was the case.


The East Face of Mont Blanc du Tacul, a 4,248 m peak in the French Alps

It began on the bus to Chamonix, in the French Alps, in the summer vacation after my second year at University.

Did the government build the iPhone? Would the iPhone have happened without governments?

July 3rd, 2015

The iPhone must be one of the most instantly recognisable symbols of the modern “tech economy”. So, it was an astute choice by Mariana Mazzucato to put it at the centre of her argument about the importance of governments in driving the development of technology. Mazzucato’s book – The Entrepreneurial State – argues that technologies like the iPhone depended on the ability and willingness of governments to take on technological risks that the private sector is not prepared to assume. She notes also that it is that same private sector which captures the rewards of the government’s risk taking. The argument is a powerful corrective to the libertarian tendencies and the glorification of the free market that are particularly associated with Silicon Valley.

Her argument could, though, be caricatured as saying that the government built the iPhone. But to put it this way would be taking the argument much too far – the contributions, not just of Apple, but of many other companies in a worldwide supply chain that have developed the technologies that the iPhone integrates, are enormous. The iPhone was made possible by the power of private sector R&D, the majority of it not in fact done by Apple, but by many companies around the world, companies that most people have probably not even heard of.

And yet, this private sector R&D was indeed encouraged, driven, and sometimes funded outright, by government (in fact, more than one government – although the USA has had a major role, other governments have played their parts too in creating Apple’s global supply chain). It drew on many results from publicly funded research, in universities and public research institutes around the world.

So, while it isn’t true to say the government built the iPhone, what is true is to say that the iPhone would not have happened without governments. We need to understand better the ways government and the private sector interact to drive innovation forward, not just to get a truer picture of where the iPhone came from, but in order to make sure we continue to get the technological innovations we want and need.

Integrating technologies is important, but innovation in manufacturing matters too

The iPhone (and the modern smartphone more generally) is, truly, an awe-inspiring integration of many different technologies. It’s a powerful computer, with an elegant and easy to use interface, it’s a mobile phone which connects to the sophisticated, computer driven infrastructure that constitutes the worldwide cellular telephone system, and through that wireless data infrastructure it provides an interface to powerful computers and databases worldwide. Many of the new applications of smartphones (as enablers, for example, of the so-called “sharing economy”) depend on the package of powerful sensors they carry – to infer its location (the GPS unit), to determine what is happening to it physically (the accelerometers), and to record images of its surroundings (the camera sensor).

Mazzucato’s book traces back the origins of some of the technologies behind the iPod, like the hard drive and the touch screen, to government funded work. This is all helpful and salutary to remember, though I think there are two points that are underplayed in this argument.

Firstly, I do think that the role of Apple itself (and its competitors), in integrating many technologies into a coherent design supported by usable software, shouldn’t be underestimated – though it’s clear that Apple in particular has been enormously successful in finding the position that extracts maximum value from physical technologies that have been developed by others.

Secondly, when it comes to those physical technologies, one mustn’t underestimate the effort that needs to go in to turn an initial discovery into a manufacturable product. A physical technology – like a device to store or display information – is not truly a technology until it can be manufactured. To take an initial concept from an academic discovery or a foundational patent to the point at which one has a working, scalable manufacturing process involves a huge amount of further innovation. This process is expensive and risky, and the private sector has often proved unwilling to bear these costs and risks without support from the state, in one form or another. The history of some of the many technologies that are integrated in devices like the iPhone illustrates the complexities of developing technologies to the point of mass manufacture, and shows how the roles of governments and the private sector have been closely intertwined.

For example, the ultraminiaturised hard disk drive that made the original iPod possible (now largely superseded by cheaper, bigger, flash memory chips) did indeed, as pointed out by Mazzucato, depend on the Nobel prize-winning discovery by Albert Fert and Peter Grünberg of the phenomenon of giant magnetoresistance. This is a fascinating and elegant piece of physics, which suggested a new way of detecting magnetic fields with great sensitivity. But to take this piece of physics and devise a way of using it in practice to create smaller, higher capacity hard disk drives, as Stuart Parkin’s group at IBM’s Almaden Laboratory did, was arguably just as significant a contribution.

How liquid crystal displays were developed

The story of the liquid crystal display is even more complicated.

On Singularities, mathematical and metaphorical

June 20th, 2015

Transhumanists look forward to a technological singularity, which we should expect to take place on or around 2045, if Ray Kurzweil is to be relied on. The technological singularity is described as something akin to an event horizon, a date at which technological growth becomes so rapid that to look beyond it becomes quite unknowable to us mere cis-humans. In some versions this is correlated with the time when, due to the inexorable advance of Moore’s Law, machine intelligence surpasses human intelligence and goes into a recursive cycle of self-improvement.

The original idea of the technological singularity is usually credited to the science fiction writer Vernor Vinge, though earlier antecedents can be found, for example in the writing of the British Marxist scientist J.D. Bernal. Even amongst transhumanists and singularitarianists there are different views about what might be meant by the singularity, but I don’t want to explore those here. Instead, I note this – when we talk of the technological singularity we’re using a metaphor, a metaphor borrowed from mathematics and physics. It’s the Singularity as a metaphor that I want to probe in this post.

A real singularity happens in a mathematical function, where for some value of the argument the result of the function is undefined.
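To make the literal version of the metaphor concrete: near a singularity a function’s values grow without bound, and at the singular point itself the function is undefined. A minimal illustration of my own, using the simple pole of 1/(1 − x) at x = 1:

```python
# f(x) = 1 / (1 - x) has a singularity at x = 1: the value is undefined there,
# and the function grows without bound as x approaches 1 from below.
def f(x):
    return 1.0 / (1.0 - x)

values = [f(1 - 10 ** -k) for k in range(1, 5)]  # x = 0.9, 0.99, 0.999, 0.9999
print(values)  # roughly 10, 100, 1000, 10000
```

Each step closer to x = 1 multiplies the value by ten; no finite bound contains it, which is the property the transhumanist metaphor borrows.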

Does transhumanism matter?

April 7th, 2015

The political scientist Francis Fukuyama once identified transhumanism as “the world’s most dangerous idea”. Perhaps a handful of bioconservatives share this view, but I suspect few others do. After all, transhumanism is hardly part of the mainstream. It has a few high profile spokesmen, and it has its vociferous adherents on the internet, but that’s not unusual. The wealth, prominence, and technical credibility of some of its sympathisers – drawn from the elite of Silicon Valley – does, though, differentiate transhumanism from the general run of fringe movements. My own criticisms of transhumanism have focused on the technical shortcomings of some of the key elements of the belief package – especially molecular nanotechnology, and most recently the idea of mind uploading. I fear that my critique hasn’t achieved much purchase. To many observers with some sort of scientific background, even those who share some of my scepticism of the specifics, the worst one might say about transhumanism is that it is mostly harmless, perhaps over-exuberant in its claims and ambitions, but beneficial in that it promotes a positive image of science and technology.

But there is another critique of transhumanism, which emphasises not the distance between transhumanism’s claims and what is technologically plausible, as I have done, but the continuity between the way transhumanists talk about technology and the future and the way these issues are talked about in the mainstream. In this view, transhumanism matters, not so much for its strange ideological roots and shaky technical foundations, but because it illuminates some much more widely held, but pathological, beliefs about technology. The most persistent proponent of this critique is Dale Carrico, whose arguments are summarised in a recent article, Futurological Discourses and Posthuman Terrains (PDF). Although Carrico looks at transhumanism from a different perspective from me, the perspective of a rhetorician rather than an experimental scientist, I find his critique deserving of serious attention. For Carrico, transhumanism distorts the way we think about technology, it contaminates the way we consider possible futures, and rather than being radical it is actually profoundly conservative in the way in which it buttresses existing power structures.

Carrico’s starting point is to emphasise that there is no such thing as technology, and as such it makes no sense to talk about whether one is “for” or “against” technology. On this point, he is surely correct; as I’ve frequently written before, technology is not a single thing that is advancing at a single rate. There are many technologies, some are advancing fast, some are neglected and stagnating, some are going backwards. Nor does it make sense to say that technology is by itself good or bad; of the many technologies that exist or are possible, some are useful, some not. Or to be more precise, some technologies may be useful to some groups of people, they may be unhelpful to other groups of people, or their potential to be helpful to some people may not be realised because of the political and social circumstances we find ourselves in.

Does radical innovation best get done by big firms or little ones?

March 5th, 2015

A recent blogpost by the economist Diane Coyle quoted JK Galbraith as saying in 1952: “The modern industry of a few large firms is an excellent instrument for inducing technical change. It is admirably equipped for financing technical development and for putting it into use. The competition of the competitive world, by contrast, almost completely precludes technical development.” Coyle describes this as “complete nonsense” – “big firms tend to do incremental innovation, while radical innovation tends to come from small entrants.” This is certainly conventional wisdom now – but it needs to be challenged.

As a point of historical fact, what Galbraith wrote in 1952 was correct – the great, world-changing innovations of the postwar years were indeed the products, not of lone entrepreneurs, but of the giant R&D departments of big corporations. What is true is that in recent years we’ve seen radical innovations in IT which have arisen from small entrants, of which Google’s search algorithm is the best known example. But we must remember two things. Digital innovations like these don’t exist in isolation – they only have an impact because they can operate on a technological substrate which isn’t digital, but physical. The fast, small and powerful computers, the world-wide communications infrastructure that digital innovations rely on were developed, not in small start-ups, but in large, capital intensive firms. And many of the innovations we urgently need – in areas like affordable low carbon energy, grid-scale energy storage, and healthcare for ageing populations – will not be wholly digital in character. Technologies don’t all proceed at the same pace (as I discussed in an earlier post – Accelerating change or innovation stagnation). In focusing on the digital domain, in which small entrants can indeed achieve radical innovations (as well as some rather trivial ones), we’re in danger of failing to support the innovation in the material and biological domains, which needs the long-term, well-resourced development efforts that only big organisations can mobilise. The outcome will be a further slowing of economic growth in the developed world, as innovation slows down and productivity growth stalls.

So what were the innovations that the sluggish big corporations of the post-war world delivered? Jet aircraft, antibiotics, oral contraceptives, transistors, microprocessors, Unix, optical fibre communications and mobile phones are just a few examples.

Growth, technological innovation, and the British productivity crisis

January 28th, 2015

The biggest current issue in the UK’s economic situation is the continuing slump in productivity. It’s this poor productivity performance that underlies slow or no real wage growth, and that also contributes to disappointing government revenues and consequent slow progress reducing the government deficit. Yet the causes of this poor productivity performance are barely discussed, let alone understood. In the long-term, productivity growth is associated with innovation and technological progress – have we stopped being able to innovate? The ONS has recently released a set of statistics which potentially throw some light on the issue. These estimates of total factor productivity – productivity controlled for inputs of labour and capital – make clear the seriousness of the problem.


Total factor productivity relative to 1994, whole economy, ONS estimates


Here are the figures for the whole economy. They show that, up to 2008, total factor productivity grew steadily at around 1% a year. Then it precipitously fell, losing more than a decade’s worth of growth, and it continues to fall. This means that each year since the financial crisis, on average we have had to work harder or put in more capital to achieve the same level of economic output. A simple-minded interpretation of this would be that, rather than seeing technological progress being reflected in economic growth, we’re going backwards, we’re technologically regressing, and the only economic growth we’re seeing is because we have a larger population working longer hours.

Of course, things are more complicated than this. Many different sectors contribute to the economy – in some, we see substantial innovation and technological progress, while in others the situation is not so good. It’s the overall shape of the economy, the balance between growing and stagnating sectors, that contributes to the whole picture. The ONS figures do begin to break down total factor productivity growth into different sectors, and this begins to give some real insight into what’s wrong with the UK’s economy and what needs to be done to right it. Before I come to those details, I need to say something more about what’s being estimated here.
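In outline, total factor productivity growth is what is left of output growth after the share-weighted growth of capital and labour inputs has been subtracted – the Solow residual. A minimal sketch with made-up numbers, not the ONS’s estimates (alpha, the capital share of income, is an assumed value here):

```python
# Solow residual: TFP growth = output growth - alpha * capital growth - (1 - alpha) * labour growth
alpha = 0.35            # capital's share of income (assumed, illustrative)
output_growth = 0.020   # 2.0% growth in GDP (illustrative)
capital_growth = 0.015  # 1.5% growth in capital services (illustrative)
labour_growth = 0.010   # 1.0% growth in hours worked (illustrative)

tfp_growth = output_growth - alpha * capital_growth - (1 - alpha) * labour_growth
print(f"TFP growth: {tfp_growth:.3%}")  # the residual attributed to innovation
```

A fall in measured TFP, as in the ONS figures after 2008, means output grew by less than the weighted growth of inputs – the sense in which we have had to work harder, or invest more, for the same output.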

Where does sustainable, long-term economic growth come from?

Science, Politics, and the Haldane Principle

January 5th, 2015

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has, not just a right, but a duty, to set strategic directions and priorities for the science the government funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, for what by definition are the uncertain outcomes of research.

The key point to recognise about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition.