The iPhone must be one of the most instantly recognisable symbols of the modern “tech economy”. So, it was an astute choice by Mariana Mazzucato to put it at the centre of her argument about the importance of governments in driving the development of technology. Mazzucato’s book – The Entrepreneurial State – argues that technologies like the iPhone depended on the ability and willingness of governments to take on technological risks that the private sector is not prepared to assume. She notes also that it is that same private sector which captures the rewards of the government’s risk taking. The argument is a powerful corrective to the libertarian tendencies and the glorification of the free market that are particularly associated with Silicon Valley.
Her argument could, though, be caricatured as saying that the government built the iPhone. But to put it this way would be taking the argument much too far – the contributions, not just of Apple, but of many other companies in a worldwide supply chain that have developed the technologies that the iPhone integrates, are enormous. The iPhone was made possible by the power of private sector R&D, the majority of it not in fact done by Apple, but by many companies around the world, companies that most people have probably not even heard of.
And yet, this private sector R&D was encouraged, driven, and sometimes funded outright, by government (in fact, by more than one government – although the USA has had a major role, other governments have played their parts too in creating Apple’s global supply chain). It drew on many results from publicly funded research, in universities and public research institutes around the world.
So, while it isn’t true to say the government built the iPhone, what is true is to say that the iPhone would not have happened without governments. We need to understand better the ways government and the private sector interact to drive innovation forward, not just to get a truer picture of where the iPhone came from, but in order to make sure we continue to get the technological innovations we want and need.
Integrating technologies is important, but innovation in manufacturing matters too
The iPhone (and the modern smartphone more generally) is, truly, an awe-inspiring integration of many different technologies. It’s a powerful computer, with an elegant and easy-to-use interface; it’s a mobile phone which connects to the sophisticated, computer-driven infrastructure that constitutes the worldwide cellular telephone system; and through that wireless data infrastructure it provides an interface to powerful computers and databases worldwide. Many of the new applications of smartphones (as enablers, for example, of the so-called “sharing economy”) depend on the package of powerful sensors these devices carry – a GPS unit to infer location, accelerometers to determine what is happening to the device physically, and a camera sensor to record images of its surroundings.
Mazzucato’s book traces back the origins of some of the technologies behind the iPod, like the hard drive and the touch screen, to government-funded work. This is all helpful and salutary to remember, though I think there are two points that are underplayed in this argument.
Firstly, I do think that the role of Apple itself (and its competitors), in integrating many technologies into a coherent design supported by usable software, shouldn’t be underestimated – though it’s clear that Apple in particular has been enormously successful in finding the position that extracts maximum value from physical technologies that have been developed by others.
Secondly, when it comes to those physical technologies, one mustn’t underestimate the effort needed to turn an initial discovery into a manufacturable product. A physical technology – like a device to store or display information – is not truly a technology until it can be manufactured. To take an initial concept from an academic discovery or a foundational patent to the point at which one has a working, scalable manufacturing process involves a huge amount of further innovation. This process is expensive and risky, and the private sector has often proved unwilling to bear these costs and risks without support from the state, in one form or another. The histories of some of the many technologies integrated in devices like the iPhone illustrate the complexities of developing technologies to the point of mass manufacture, and show how the roles of governments and the private sector have been closely intertwined.
For example, the ultraminiaturised hard disk drive that made the original iPod possible (now largely superseded by cheaper, bigger, flash memory chips) did indeed, as pointed out by Mazzucato, depend on the Nobel prize-winning discovery by Albert Fert and Peter Grünberg of the phenomenon of giant magnetoresistance. This is a fascinating and elegant piece of physics, which suggested a new way of detecting magnetic fields with great sensitivity. But to take this piece of physics and devise a way of using it in practice to create smaller, higher-capacity hard disk drives, as Stuart Parkin’s group at IBM’s Almaden Laboratory did, was arguably just as significant a contribution.
How liquid crystal displays were developed
The story of the liquid crystal display is even more complicated. Early work in companies like RCA highlighted the possibility of making a display using the switching properties of liquid crystals, but I would identify three key developments that led to the modern LCD screen. The first was the basic idea of a twisted nematic display, which is the basis of the mechanisms used in all modern displays. This was the essentially simultaneous invention, around 1970, of James Fergason, from Kent State University, and of Martin Schadt and Wolfgang Helfrich, in the corporate laboratory of the Swiss pharmaceutical company Hoffmann-La Roche.
The second was the development of the actual chemicals – and the methods to make them – that showed the right liquid crystal behaviour. George Gray, at Hull University, made important contributions here – funding for that came from RSRE, the UK Government’s defence electronics laboratory, which was interested in finding replacements for bulky and expensive cathode ray tubes for applications such as radar. But much development work was done by chemical companies, especially Merck – good performance depends not just on finding the right chemical compound (or indeed the right mixture of compounds), but also on being able to produce it at extraordinarily high levels of purity.
Finally, the move from so-called passive matrix displays to active matrix displays, which is essential for high resolution, fast switching colour displays, needed the development of a way of creating large areas of glass patterned with transistors to drive each pixel. Early attempts to make thin film transistors (for example the early Westinghouse work described by Mazzucato) used cadmium selenide as the semiconductor, but mass production proved difficult. The situation changed with the development by Walter Spear and Peter LeComber of the University of Dundee of ways of preparing thin films of amorphous silicon in which the defects were passivated with hydrogen and which could be controllably doped. Once again, RSRE supported this research. Displays using amorphous silicon thin film transistors were demonstrated in 1979, and then there was a rush by many companies, including IBM, Toshiba, Sharp and Hitachi, to mass production, with each generation bringing higher resolution and larger areas. The industry soon became localised in Japan, subsequently moving to Taiwan and Korea, and now increasingly to China. Meanwhile, another display technology has emerged to compete with LCDs – organic light-emitting diode displays, as used in Samsung phones – but that’s another story.
GPS and accelerometers
In contrast, the origins of the GPS system that lies behind the location-sensing abilities of modern smartphones are much more straightforward. It is, of course, still fundamentally a US government-run infrastructure whose primary purpose remains military. The GPS chip in a smartphone integrates ultra-accurate timing, the ability to detect and process the very weak radio signals from the satellites, and the computing power to calculate a location anywhere on the earth’s surface to an accuracy of metres. The original customers for such chips were, of course, the military, whose demand drove the integration and miniaturisation of the original, cabinet-sized GPS receivers, to the point where GPS location abilities could be built into precision munitions, as well as GPS units for vehicles and individuals. Driven by these military markets, the prices and sizes of these devices fell to the point at which companies like Garmin were able to develop consumer markets for them.
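The calculation at the heart of that chip is, in essence, a least-squares trilateration: given the known positions of several satellites and the measured ranges to each, solve for the receiver’s position. A minimal sketch in Python – ignoring the receiver clock bias that a real solver must also estimate, and using illustrative made-up satellite coordinates rather than real ephemeris data:

```python
import numpy as np

# Hypothetical satellite positions (km, Earth-centred frame) --
# illustrative numbers at roughly GPS orbital radius, not real ephemeris.
SATS = np.array([
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])

def solve_position(sats, ranges, iters=20):
    """Estimate receiver position by Gauss-Newton least squares on the
    range equations |sat_i - x| = r_i (receiver clock bias ignored)."""
    x = np.zeros(3)                       # initial guess: Earth's centre
    for _ in range(iters):
        diffs = x - sats                  # vectors from each satellite to the guess
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - ranges        # mismatch between predicted and measured ranges
        J = diffs / dists[:, None]        # Jacobian of |sat_i - x| with respect to x
        dx, *_ = np.linalg.lstsq(J, -residuals, rcond=None)
        x += dx                           # Gauss-Newton update
    return x

# Synthetic test: a point near the Earth's surface and the exact
# (noise-free) ranges a receiver there would measure.
true_pos = np.array([6370.0, 100.0, 50.0])
ranges = np.linalg.norm(SATS - true_pos, axis=1)
print(solve_position(SATS, ranges))  # should recover a point close to true_pos
```

A real receiver solves a four-unknown version of this problem (three coordinates plus its own clock error), which is why at least four satellites must be in view.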
There was a similar trajectory from military to civil applications for the accelerometers in an iPhone (whose most obvious effect is to switch the orientation of the display when the device is rotated). The original markets were for missiles and inertial guidance systems, but for accelerometers the breakthrough to mass markets was the development of microelectromechanical systems (MEMS) that could be mass-manufactured with techniques derived from the microelectronics industry. An early civil mass market was provided by the sensors that trigger car airbags. Later they were used in laptops, to sense when the laptop had been dropped so that the hard drive could be parked before impact.
How digital cameras became ubiquitous – the CMOS image sensor
My final example is interesting in that it derives directly from US Government investment, but was not primarily driven by the military. The technology behind the cheap camera sensors used in smartphones derives instead from NASA. Prompted by the drive to make space science missions “faster, better, cheaper”, Eric Fossum at the Jet Propulsion Laboratory came up with a solid-state sensor design compatible with CMOS manufacturing processes, which could be made much more cheaply than the existing CCD technology. The technology was spun out of JPL in 1995 through Fossum’s own company, Photobit Corporation, and taken up by many other semiconductor manufacturers. Since digital photography was already growing fast by then, there was little doubt that CMOS sensors would find a market as the technology matured, given their potential cost advantage.
How governments have promoted technological innovation
These stories – and others like them – show the many ways in which the state can intervene to promote innovation. Here are some of those ways:
– The most obvious way is through government funding of basic research in universities and government research institutes. The classical economic arguments about the difficulty of private actors capturing the full monetary value of this kind of research mean that there is a widespread consensus in favour of this, even amongst the most neoliberally orthodox regimes. In this view, the outcomes of scientific research constitute a kind of commons that the private sector can pick up and turn into new products, with new markets being discovered and exploited through the process of experimentation that competition between companies leads to. The difficulty with this view is that it underestimates the difficulty, expense and time of the development process. To take an idea from the stage at which an academic researcher might leave it to the point where costs can be recouped with a mass-market product takes much money and time, and exposes the funders both to the technical risk that the development won’t succeed in producing a product at the right price, and to the market risk that, by the time the product is developed, some competing technology will have taken its market away. Government support of basic research is a necessary, but not sufficient, condition for technological innovation.
– At the opposite extreme, a government can directly commission and build an entire technological system, including all the necessary research and development, as happened – and continues to happen – with GPS (the US government currently spends a little more than a billion dollars a year on the GPS system). Here there is a decision that a technology or technological system is a direct strategic imperative for the government, which therefore does what it takes to make it happen. This doesn’t exclude the possibility, as happened with GPS, that the system is later opened up to wider private uses. Of course, the private sector may still be involved as contractors, but the key factor is that the government assumes essentially all the risk – both the technical risk, since the contractors will still be paid if the system doesn’t work, and the market risk, because the government itself is the customer.
– An intermediate position provides government funding for strategic R&D in support of specific government goals. This can take place in private sector laboratories – as happens in the USA, where an astonishing 63% of the R&D done by the US aerospace industry in 2010 was paid for directly by the federal government. Or it can take place in public sector laboratories, with strong private sector partnerships, like Germany’s Fraunhofer Institutes, or Taiwan’s ITRI, which was so important in creating such a strong ICT manufacturing sector there.
– Another powerful way in which governments can support the development of new technology is in providing a guaranteed market for its products. This has clearly been enormously powerful in the development of the US electronics industry, which has benefitted hugely from guaranteed military markets for the new technologies it has developed. To be effective, the implicit or explicit promise by government needs to be credible. Less effective examples of this approach can be seen in the guaranteed prices for low carbon energy we have seen recently, where that credibility has been eroded by abrupt policy changes.
– Finally, one can see a whole range of interventions that are characteristic of the East Asian development model (whose conceptual basis goes back to the policies of Alexander Hamilton and Friedrich List, applied successfully by the USA and Germany in the second half of the 19th century). These include preferential access to capital for state-owned or state-sponsored industries in favoured sectors, protection of infant industries, and mercantilist policies to promote exports. As Joe Studwell argues persuasively in his book “How Asia Works”, this formula has been successfully applied by Japan, Taiwan and Korea, and is now being applied on an even more massive scale by China.
Governments and risk-taking
The title of Mazzucato’s book – The Entrepreneurial State – is provocative, and some will think she has oversold her thesis. But the title focuses attention on the key issue here – risk: who takes it on, and who benefits from it. As the last discussion made clear, there are different types of risk involved in developing new technologies, including both technical risk and market risk. History shows us that the introduction of new technologies has involved those risks being shared by the state and the private sector in different ways and to different degrees. New technology is risky, and the private sector (in the Anglo-Saxon economies) seems increasingly unwilling to take those risks on (as I discussed in my paper “The UK’s innovation deficit and how to repair it”).
As Mazzucato caustically points out, western governments are willing to assume risks on behalf of the private sector, at very substantial cost to the taxpayer when those risks go wrong, but only in the financial services sector. I don’t think these can be described either as technical risk or as market risk – stupidity and greed risk seems a more appropriate description. Moreover, our governments, through inconsistent and incoherent policy making, are able to introduce entirely new and gratuitous forms of political and regulatory risks which further inhibit the development of new technologies.
The strategic goals of States
But why should a government be interested in developing new technology at all? That depends on what the long-term strategic goals of the state are. For the USA and its allies in the 1960s and 70s, the goal was absolutely clear – to maintain enough of a technological lead over the USSR to ensure military dominance. The development of the strong electronics and ICT sectors was a side-effect of this primary goal, which perhaps illustrates John Kay’s principle of obliquity. For Japan, followed by Korea and Taiwan, the strategic goal was to achieve rapid GDP growth and catch up with the technological frontier. For China now, the strategy combines both elements, seeking to develop technology to achieve both military and economic power. As for the UK, I now find it very difficult to discern any long-term strategic goals of the government at all. Perhaps this isn’t unconnected to the country’s current stagnation in economic growth and productivity, and its chronic inability to renew its infrastructure.
What should the strategic goals of governments be now? That ought to be a matter for a proper democratic debate, though there’s little sign of that happening. My own strong conviction is that one such major goal should be the development of new technologies to bring about the affordable decarbonisation of the energy economy. The history of state involvement in the development of the modern information technology industry suggests both some mechanisms by which this could be achieved, and the wider economic benefits that could be derived from the committed pursuit of such a goal.