Nuclear vs Solar

One slightly dispiriting feature of the current environmental movement is the sniping between “old” environmentalists, opposed to nuclear power, and “new” environmentalists who embrace it, about the relative merits of nuclear and solar as low carbon energy sources. Here’s a commentary on that dispute, in the form of a pair of graphs. In fact, it’s two versions of one graph, showing the world consumption of low carbon energy from solar, nuclear and wind over the last forty years or so, the data taken from the BP Statistical Review of World Energy 2013.

World consumption of low-carbon energy from nuclear, solar and wind, plotted on a linear scale. Data: BP Statistical Review of World Energy 2013.

The first graph is the case for nuclear. Only nuclear energy makes any dent at all in the world’s total energy consumption (about 22,500 TWh of electricity in total was generated in the world in 2012, with more energy consumed directly as oil and gas). Although nuclear generation has dropped off significantly in the last year or two following the Fukushima accident, the experience of the 1970s and 80s shows that it is possible to add significant capacity on a reasonable timescale. Nuclear provides the world with a significant amount of low-carbon energy that it’s foolish to imagine can be quickly replaced by renewables.

The same data plotted on a logarithmic scale. Data: BP Statistical Review of World Energy 2013.

The second graph is the case for solar. It is the same graph as the first one, but with a logarithmic axis (on this plot, constant fractional growth shows up as a rising straight line). This shows that world solar energy consumption is increasing at a faster than exponential rate. For the last five years, solar energy consumption has been growing at a rate of 66% a year compounded. (Wind power is also growing exponentially, but currently at a slower rate than solar.) Although in absolute terms solar energy is only now at the stage that nuclear was at in 1971, its growth rate is much higher than the maximum growth rate for nuclear in the period of its big build-out, which was 30% a year compounded in the five years to 1975. And even before Fukushima, the growth in nuclear energy was stagnating, as new nuclear build only just kept up with the decommissioning of the first generation of nuclear plants. Looking at this graph, solar overtaking nuclear by 2020 doesn’t seem an unreasonable extrapolation.
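As a rough check on that extrapolation, here is a minimal sketch of the arithmetic in Python. The 2012 starting values are round numbers of the order reported in the BP review (assumptions for illustration, not a restatement of the dataset), and nuclear output is held flat, in line with the stagnation described above.

```python
import math

# Illustrative 2012 starting points, of the order reported in the BP
# Statistical Review 2013 (round-number assumptions, not exact figures).
solar_2012_twh = 90.0       # world solar generation, TWh
nuclear_2012_twh = 2450.0   # world nuclear generation, TWh
solar_growth = 0.66         # the 66% a year compound growth rate quoted above

# Years for solar to catch a flat nuclear output at constant compound growth.
years_to_parity = math.log(nuclear_2012_twh / solar_2012_twh) / math.log(1 + solar_growth)
print(f"Solar matches flat nuclear after ~{years_to_parity:.1f} years, "
      f"i.e. around {2012 + math.ceil(years_to_parity)}")
```

On those assumptions the crossover lands at the end of the decade, which is why “by 2020” is not an unreasonable reading of the graph; sustaining 66% a year from an ever larger base is, of course, exactly what cannot be taken for granted.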

The case for pessimism is made by Roger Pielke, who points out, from the same data set, that the process of decarbonising the world’s energy supply is essentially stagnating, with the proportion of energy consumption from low carbon sources reaching a high point of 13.3% in 1999, from which it has very gently declined.

Of course, looking backwards at historical energy consumption figures can only take us so far in understanding what’s likely to happen next. For that, we need to look at likely future technical developments and at the economic environment. There is a lot of potential for improvement in both these technologies; not enough research and development has been done on any kind of energy technology in the last few years, as I discussed here before – We sold out our energy future.

On the economics, it has to be stressed that the progress we’ve seen with both nuclear and solar has been the result of large-scale state action. In the case of solar, subsidies in Europe have driven installations, while subsidised capital in China has allowed it rapidly to build up a large solar panel manufacturing industry. The nuclear industry has everywhere been closely tied up with the state, with fairly opaque finances.

But one thing sets nuclear and solar apart. The cost of solar power has been steadily falling, with the prospect of grid parity – the moment when solar-generated electricity is cheaper than electricity from the grid – imminent in favoured parts of the world, as discussed in a recent FT Analysis article (£). This provides some justification for the subsidies – usually, with any technology, the more you make of something, the cheaper it becomes; solar shows just such a positive learning curve.

For nuclear, on the other hand, the more we install, the costlier it seems to get. Even in France, widely perceived to have had the most effective nuclear building program, with widespread standardisation and big economies of scale, the learning curve turns out to be negative, according to this study by Grubler in Energy Policy (£).
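Since “learning curve” is doing a lot of work in the last two paragraphs, it is worth spelling out the standard quantitative form (Wright’s law: costs fall by a fixed fraction for every doubling of cumulative production). Here is a minimal sketch; the 20% learning rate is of the order often quoted for PV modules and the -10% rate is purely illustrative, so neither number is taken from the FT analysis or from Grubler’s study.

```python
def cost_after_doublings(initial_cost, learning_rate, doublings):
    """Wright's-law cost index after some doublings of cumulative production.

    A positive learning_rate means costs fall with experience; a negative
    one is a 'negative learning curve', with costs rising as more is built.
    """
    return initial_cost * (1 - learning_rate) ** doublings

# Positive learning curve, PV-style (illustrative 20% per doubling).
for d in range(6):
    print(f"solar-style, {d} doublings: cost index {cost_after_doublings(100.0, 0.20, d):.1f}")

# Negative learning curve, nuclear-style (illustrative -10% per doubling).
for d in range(4):
    print(f"nuclear-style, {d} doublings: cost index {cost_after_doublings(100.0, -0.10, d):.1f}")
```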

What is urgent now is to get the low-carbon fraction of our energy supply growing again. My own view is that this will require new nuclear build, even if only to replace the obsolete plants now being decommissioned. But for nuclear new build to happen at any scale we need to understand and reverse nuclear’s negative learning curve, and learn how to build nuclear plants cheaply and safely. And while the current growth rate of solar is impressive, we need to remember what a low base it is starting from, and continue to innovate, so that the growth rate can continue to the point at which solar is making a significant contribution.

Decelerating change in the pharmaceutical industry

Medical progress will have come to a complete halt by the year 2329. I reach this anti-Kurzweilian conclusion from a 2012 paper – Diagnosing the decline in pharmaceutical R&D efficiency – which demonstrates that, far from showing an accelerating rate of innovation, the pharmaceutical industry has for the last 60 years been seeing exponentially diminishing returns on its research and development effort. At the date of the anti-singularity, the cost of developing a single new drug will have exceeded the world’s total economic output. The extrapolation is ludicrous, of course, but the problem is not. By 2010 it took an average of $2.17 billion in R&D spending to introduce a single new drug, including the cost of all the failures. This cost per new drug has been following a kind of reverse Moore’s law, increasing exponentially in real terms at a rate of 7.6% a year since 1950, corresponding to a doubling time of a bit more than 9 years (see this plot from the paper cited above). This trend is puzzling – our knowledge of life sciences has been revolutionised during this period, while the opportunities provided by robotics and IT, allowing approaches like rapid throughput screening and large scale chemoinformatics, have been eagerly seized on by the industry. Despite all this new science and enabling technology, the anti-Moore’s law trend of diminishing R&D returns continues inexorably.
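For anyone who wants to see where the “bit more than 9 years” comes from, here is a minimal sketch of the arithmetic, using only the 7.6% annual real-terms increase and the $2.17 billion figure for 2010 quoted above; the projection simply assumes the historical trend continues unchanged.

```python
import math

annual_increase = 0.076   # real-terms annual growth in R&D cost per new drug
cost_2010 = 2.17          # $ billion per new drug in 2010, including failures

# Doubling time for a quantity growing at a constant compound rate.
doubling_time = math.log(2) / math.log(1 + annual_increase)
print(f"Doubling time: {doubling_time:.1f} years")   # roughly 9.5 years

# Naive extrapolation of the trend (illustrative only).
def cost_in(year):
    return cost_2010 * (1 + annual_increase) ** (year - 2010)

for year in (2020, 2050, 2100):
    print(f"{year}: ~${cost_in(year):,.1f} billion per new drug")
```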

This should worry us. The failure to find effective therapies for widespread and devastating conditions – Alzheimer’s, to take just one example – leads to enormous human suffering. The escalating cost of developing new drugs is ultimately passed on to society through their pricing, leading to strains on national healthcare systems that will become more acute as populations age. As a second-order effect, scientists should be concerned in case the drying up of medical innovation casts doubt on some of the justifications for government spending on fundamental life sciences research. And, of course, a healthy and innovative pharmaceutical industry is itself important for economic growth, particularly here in the UK, where it remains the one truly internationally competitive high technology sector of the economy. So what can be done to speed up innovation in this vital sector? Continue reading “Decelerating change in the pharmaceutical industry”

Innovation policy and long term economic growth in the UK – a story in four graphs

I have a post up on the blog of the Sheffield Political Economy Research Institute – The failures of supply side innovation policy – discussing the connection between recent innovation policy in the UK and our current crisis of economic growth. Rather than cross-posting it here, I tell the same story in four graphs.

1. The UK’s current growth crisis follows a sustained period of national disinvestment in R&D

GDP and GERD

Red, left axis. The percentage deviation of real GDP per person from the 1948-1979 trend line, corresponding to 2.57% annual growth. Sources: solid line, 2012 National Accounts. Dotted line, March 2013 estimates from the Office for Budget Responsibility.
Blue, right axis. Total R&D intensity, all sectors, as percentage of GDP. Data: Eurostat.

Continue reading “Innovation policy and long term economic growth in the UK – a story in four graphs”

Fulfilling the promises of emerging biotechnologies

At the end of last year, the Nuffield Council on Bioethics published a report on the ethics of emerging biotechnologies, called Emerging Biotechnologies: technology, choice and the public good. I was on the working party for that report, and this piece reflects a personal view about some of its findings. A shorter version was published in Research Fortnight (subscription required).

In a speech at the Royal Society last November George Osborne said that, as Chancellor of the Exchequer, it is his job “to focus on the economic benefits of scientific excellence”. He then listed eight key technologies that he challenged the scientific community in Britain to lead the world in, and for which he promised continuing financial support. Among these technologies were synthetic biology, regenerative medicine and agri-science, key examples of what a recent report from the Nuffield Council on Bioethics calls emerging biotechnologies. Picking technology winners is clearly high on the UK science policy agenda, and this kind of list will increasingly inform the science funding choices the government and its agencies, like the research councils, make. So the focus of the Council’s report, on how those choices are made and what kind of ethics should guide them, couldn’t be more timely.

These emerging technologies are not short of promises. According to Osborne, synthetic biology will have an £11 billion market by 2016 producing new medicines, biofuels and food – “they say that synthetic biology will heal us, heat and feed us.” Continue reading “Fulfilling the promises of emerging biotechnologies”

We sold out our energy future

Everyone should know that the industrial society we live in depends on access to plentiful, convenient, cheap energy – the last two hundred years of rapid economic growth has been underpinned by the large scale use of fossil fuels. And everyone should know that the effect of burning those fossil fuels has been to markedly increase the carbon dioxide content of the atmosphere, resulting in a changing climate, with potentially dangerous but still uncertain consequences. But a transition from fossil fuels to low carbon sources of energy isn’t going to take place quickly; existing low carbon energy sources are expensive and difficult to scale up. So rather than pushing on with the politically difficult, slow and expensive business of deploying current low carbon energy sources, why don’t we wait until technology brings us a new generation of cheaper and more scalable low carbon energy? Presumably, one might think, since we’ve known about these issues for some time, we’ve been spending the last twenty years energetically doing research into new energy technologies?

Alas, no. As my graph shows, the decade from 1980 saw a worldwide decline in the fraction of GDP that major industrial countries devoted to government-funded energy research, development, and demonstration, with only Japan sustaining anything like its earlier intensity of energy research into the 1990s. It was only in the second half of the decade after 2000 that we began to see a recovery, though in the UK and the USA a rapid upturn following the 2007 financial crisis has fallen away again. A rapid post-2000 growth of energy RD&D in Korea is an exception to the general picture. There’s a good discussion of the situation in the USA in a paper by Kammen and Nemet – Reversing the incredible shrinking energy R&D budget. But the largest fall by far was in the UK, where, at its low point in 2003, the fraction of national resources devoted to energy RD&D fell to an astonishing 0.2% of its value at the 1981 high point.

Government spending on energy research, development and demonstration
Government spending on energy research, development and demonstration. Data: International Energy Agency

Continue reading “We sold out our energy future”

Do materials even have genomes?

I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big-data-driven projects like the human genome project.

There’s no question that many big problems could be addressed by new materials. Continue reading “Do materials even have genomes?”

Geek power?

Mark Henderson’s book “The Geek Manifesto” was part of my holiday reading, and there’s a lot to like in it – there’s all too much stupidity in public life, and anything that skewers a few of the more egregious recent examples of this in such a well-written and well-informed way must be welcomed. There is a fundamental lack of seriousness in our public discourse, a lack of respect for evidence, a lack of critical thinking. But to set against many excellent points of detail, the book is built around one big idea, and it’s that idea that I’m less keen on. This is the argument – implicit in the title – that we should try to construct some kind of identity politics based around those of us who self-identify as being interested in and informed about science – the “geeks”. I’m not sure that this is possible, but even if it was, I think it would be bad for science and bad for politics. This isn’t to say that public life wouldn’t be better if more people with a scientific outlook had a higher profile. One very unwelcome feature of public debate is the prevalence of wishful thinking. Comfortable beliefs that fit into people’s broader world-views do need critical examination, and this often needs the insights of science, particularly the discipline that comes from seeing whether the numbers add up. But science isn’t the only source of the insights needed for critical thinking, and scientists can have some surprising blind-spots, not just about the political, social and economic realities of life, but also about technical issues outside their own fields of interest.

But first, who are these geeks who Henderson thinks should organise? Continue reading “Geek power?”

The UK’s thirty year experiment in innovation policy

In 1981 the UK was one of the world’s most research and development intensive economies, with large scale R&D efforts being carried out in government and corporate laboratories in many sectors. Over the thirty years between then and now, this situation has dramatically changed. A graph of the R&D intensity of the national economy, measured as the fraction of GDP spent on research and development, shows a long decline through the 1980s and 1990s, with some levelling off from 2000 or so. During this period the R&D intensity of other advanced economies, like Japan, Germany, the USA and France, has increased, while in fast developing countries like South Korea and China the growth in R&D intensity has been dramatic. The changes in the UK were in part driven by deliberate government policy, and in part have been the side-effects of the particular model of capitalism that the UK has adopted. Thirty years on, we should be asking what the effects of this have been on our wider economy, and what we should do about it.

A comparison of gross research and development expenditure of various countries from 1981 to 2010
Gross expenditure on research and development as a % of GDP from 1981 to 2010. Data from Eurostat.

The second graph breaks down where R&D takes place. The largest fractional fall has been in research in government establishments, which has dropped by more than 60%. The largest part of this fall took place in the early part of the period, under a series of Conservative governments. This reflects a general drive towards a smaller state, a run-down of defence research, and the privatisation of major, previously research-intensive sectors such as energy. However, it is clear that privatisation didn’t lead to a transfer of the associated R&D to the business sector. It is in the business sector that the largest absolute drop in R&D intensity has taken place – from 1.48% of GDP to 1.08%. Cutting government R&D didn’t lead to increases in private sector R&D, contrary to the expectations of free marketeers who think the state “crowds out” private spending. Instead the business climate of the time, with a drive to unlock “shareholder value” in the short term, squeezed out longer term investments in R&D. Some seek to explain this drop in R&D intensity in terms of a change in the sectoral balance of the UK economy, away from manufacturing and towards financial services, and this is clearly part of the picture. However, I wonder whether this should be thought of not so much as an explanation, but more as a symptom. I’ve discussed in an earlier post the suggestion that “bad capitalism” – for example, speculation in financial and property markets, with the downside risk being shouldered by the tax-payer – squeezes out genuine innovation.

UK R&D as % of GDP by sector of performance from 1981 to 2010
UK R&D as % of GDP by sector of performance from 1981 to 2010. Data from Eurostat.

The Labour government that came to power in 1997 did worry about the declining R&D intensity of the UK economy, and, in its Science Investment Framework 2004-2014 (PDF), set about trying to reverse the trend. This long-term policy set a target of reaching an overall R&D intensity of 2.5% of GDP by 2014, with business-sector R&D intensity rising to 1.7%. The mechanisms put in place to achieve this included a period of real-terms increases in R&D spending by government, tax incentives for business R&D, and a new agency for nearer-term research in collaboration with business, the Technology Strategy Board. In the event, the increases in government spending on R&D did lead to some increase in the UK’s overall research intensity, but the hoped-for increase in business R&D simply did not happen.

This isn’t predominantly a story about academic science, but it provides a context that’s important to appreciate for some current issues in science policy. Over the last thirty years, the research intensity of the UK’s university sector has increased, from 0.32% of GDP to 0.48% of GDP. This reflects, to some extent, real-terms increases in government science budgets, together with the growing success of universities in raising research funds from non-UK-government sources. The resulting R&D intensity of the UK HE sector is at the high end of international comparisons (the corresponding figures for Germany, Japan, Korea and the USA are 0.45%, 0.4%, 0.37% and 0.36%). But where the UK is very much an outlier is in the proportion of the country’s research that takes place in universities. This proportion now stands at 26%, much higher than in international competitors (again, comparing with Germany, Japan, Korea and the USA, where the proportions are 17%, 12%, 11% and 13%), and much higher than it has been historically (in 1981 it was 14%). So one way of interpreting the pressure on universities to demonstrate the “impact” of their research, which is such a prominent part of the discourse in UK science policy at the moment, is as a symptom of the disproportionate importance of university research in the overall national R&D picture. But the high proportion of UK R&D carried out in universities is as much a measure of the weakness of the government and corporate applied and strategic research sectors as of the strength of the HE research enterprise. The worry, of course, has to be that, given the hollowed-out state of the business and government R&D sectors, where in the past the more applied research needed to convert ideas into new products and services was done, universities won’t be able to meet the expectations being placed on them.
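A quick consistency check on those figures: dividing each country’s HE-sector R&D intensity by its universities’ share of national R&D should recover the overall national R&D intensity. Here is a minimal sketch, using the rounded percentages quoted above (so the implied totals are only approximate).

```python
# HE-sector R&D intensity (% of GDP) and HE share of national R&D,
# both as the rounded figures quoted in the text.
countries = {
    "UK":      (0.48, 0.26),
    "Germany": (0.45, 0.17),
    "Japan":   (0.40, 0.12),
    "Korea":   (0.37, 0.11),
    "USA":     (0.36, 0.13),
}

for name, (he_intensity, he_share) in countries.items():
    implied_gerd = he_intensity / he_share   # overall R&D intensity implied by the two figures
    print(f"{name}: implied national R&D intensity ~{implied_gerd:.1f}% of GDP")
```

The UK’s implied overall intensity comes out lowest of the five, which is the point: the large university share says less about the scale of university research than about how little R&D the rest of the system now does.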

To return to the big picture, I’ve seen surprisingly little discussion of the effects on the UK economy of this dramatic and sustained decrease in research intensity. Aside from the obvious fact that we’re four years into an economic slump with no apparent prospect of rapid recovery, we know that the UK’s productivity growth has been unimpressive, and the lack of new, high-tech companies that grow fast to a large scale is frequently commented on – where, people ask, is the UK’s Google? We also know that there are urgent unmet needs that only new innovation can fulfil – in healthcare and in clean energy, for example. Surely now is the time to examine the outcomes of the UK’s thirty year experiment in innovation policy.

Finally, I think it’s worth looking at these statistics again, because they contradict the stories we tell about ourselves as a country. We think of our postwar history as characterised by brilliant invention let down by poor exploitation, whereas the truth is that the UK, in the thirty post-war years, had a substantial and successful applied research and development enterprise. We imagine now that we can make our way in the world as a “knowledge economy”, based on innovation and brain-power. I know that innovation isn’t always the same as research and development, but it seems odd that we should think that innovation can be the speciality of a nation which is substantially less intensive in research and development than its competitors. We should worry instead that we’re in danger of condemning ourselves to being a low innovation, low productivity, low growth economy.

When technologies can’t evolve

In what way, and on what basis, should we attempt to steer the development of technology? This is the fundamental question that underlies at least two discussions that I keep coming back to here – how to do industrial policy and how to democratise science. But some would simply deny the premise of these discussions, and argue that technology can’t be steered, and that the market is the only effective way of incorporating public preferences into decisions about technology development. This is a hugely influential point of view which goes with the grain of the currently hegemonic neo-liberal, free market dominated world-view. It originates in the arguments of Friedrich Hayek against the 1940s vogue for scientific planning, it incorporates Michael Polanyi’s vision of an “independent republic of science”, and it fits the view of technology as an autonomous agent which unfolds with a logic akin to that of Darwinian evolution – what one might call the “Wired” view of the world, eloquently expressed in Kevin Kelly’s recent book “What Technology Wants”. It’s a coherent, even seductive, package of beliefs; although I think it’s fatally flawed, it deserves serious examination.

Hayek’s argument against planning (his 1945 article The Use of Knowledge in Society makes the case very clearly) rests on two insights. Firstly, he insists that the relevant knowledge that would underpin the rational planning of an economy or a society isn’t limited to scientific knowledge, and must include the tacit, unorganised knowledge of people who aren’t experts in the conventional sense of the word. This kind of knowledge, then, can’t rest solely with experts, but must be dispersed throughout society. Secondly, he claims that the most effective – perhaps the only – way in which this distributed knowledge can be aggregated and used is through the mechanism of the market. If we apply this kind of thinking to the development of technology, we’re led to the idea that technological development would happen in the most effective way if we simply allow many creative entrepreneurs to try different ways of combining different technologies and to develop new ones, on the basis of existing scientific knowledge and whatever developments of that knowledge they are able to make. When the resulting innovations are presented to the market, the ones that survive will, by definition, be the ones that best meet human needs. Stated this way, the connection with Darwinian evolution is obvious.

One objection to this viewpoint is essentially moral in character. The market certainly aggregates the preferences and knowledge of many people, but it necessarily gives more weight to the views of people with more money, and the distribution of money doesn’t necessarily coincide with the distribution of wisdom or virtue. Some free market enthusiasts simply assert the contrary, following Ayn Rand. There are, though, some much less risible moral arguments in favour of free markets which emphasise the positive virtues of pluralism, and even those opponents of libertarianism who point to the naivety of believing that this pluralism can be maintained in the face of highly concentrated economic and political power need to answer important questions about how pluralism can be maintained in any alternative system.

What should be less contentious than these moral arguments is an examination of the recent history of technological innovation. This shows that the technologies that made the modern world – in all their positive and negative aspects – are largely the result of the exercise of state power, rather than of the free enterprise of technological entrepreneurs. New technologies were largely driven by large scale interventions by the Warfare States that dominated the twentieth century. The military-industrial complexes of these states began long before Eisenhower popularised this name, and existed not just in the USA, but in Wilhelmine and Nazi Germany, in the USSR, and in the UK (David Edgerton’s “Warfare State: Britain 1920-1970” gives a compelling reinterpretation of modern British history in these terms). At the beginning of the century, for example, the Haber-Bosch process for fixing nitrogen was rapidly industrialised by the German chemical company BASF. It’s difficult to think of a more world-changing innovation – more than half the world’s population wouldn’t now be here if it hadn’t been for the huge growth in agricultural productivity that artificial fertilisers made possible. However, the importance of this process for producing the raw materials for explosives ensured that the German state took much more than a spectator’s role. Vaclav Smil, in his book Enriching the Earth, quotes an estimate for the development cost of the Haber-Bosch process of US$100 million at 1919 prices (roughly US$1 billion in current money, equating to about $19 billion in terms of its share of the economy at the time), of which about half came from the government. Many more recent examples of state involvement in innovation are cited in Mariana Mazzucato’s pamphlet The Entrepreneurial State. Perhaps one of the most important stories is the role of state spending in creating the modern IT industry; computing, the semiconductor industry and the internet are all largely the outcome of US military spending.

Of course, the historical fact that the transformative, general purpose technologies that were so important in driving economic growth in the twentieth century emerged as a result of state sponsorship doesn’t by itself invalidate the Hayekian thesis that innovation is best left to the free market. To understand the limitations of this picture, we need to return to Hayek’s basic arguments. Under what circumstances does the free market fail to aggregate information in an optimal way? People are not always rational economic actors – they know what they want and need now, but they aren’t always good at anticipating what they might want if things they can’t imagine become available, or what they might need if conditions change rapidly. There’s a natural cognitive bias to give more weight to the present, and less to an unknowable future. Just like natural selection, the optimisation process that the market carries out is necessarily local, not global.

So when does the Hayekian argument for leaving innovation to the market not apply? The free market works well for evolutionary innovation – local optimisation is good at solving present problems with the tools at hand now. But it cannot mobilise resources on a large scale for big problems whose solution will take more than a few years. So we’d expect market-driven innovation to fail to deliver whenever timescales for development are too long, or the expense of development too great. Because capital markets are now short-term to the point of irrationality (as demonstrated by this study (PDF) from the Bank of England by Andrew Haldane), the private sector rejects long-term investments in infrastructure and R&D, even if the net present value of those investments would be significantly positive. In the energy sector, for example, we saw widespread liberalisation of markets across the world in the 1990s. One predictable consequence of this has been a collapse of private sector R&D in the energy sector (illustrated for the case of the USA by Dan Kammen here – The Incredible Shrinking Energy R&D Budget (PDF)).
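To make the short-termism point concrete, here is an illustrative sketch; the cash flows and discount rates are invented for the example, not taken from Haldane’s study. A long-horizon R&D project with a comfortably positive net present value at a sensible discount rate turns negative once an excessive, myopic rate is applied.

```python
# Illustrative project: spend 100 a year for 5 years of development,
# then receive 60 a year for the following 25 years (arbitrary units).
cash_flows = [-100.0] * 5 + [60.0] * 25

def npv(rate, flows):
    """Net present value of end-of-year cash flows at a constant discount rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows))

print(f"NPV at a 5% discount rate:  {npv(0.05, cash_flows):7.1f}")   # ~ +230: worth doing
print(f"NPV at a 15% discount rate: {npv(0.15, cash_flows):7.1f}")   # ~ -140: rejected
```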

The contrast is clear if we compare two different cases of innovation – the development of new apps for the iPhone, and the development of innovative new passenger aircraft, like the composite-based Boeing Dreamliner and Airbus A350. The world of app development is one in which tens or hundreds of thousands of people can and do try out all sorts of ideas, a few of which have turned out to fulfil an important and widely appreciated need and have made their developers rich. This is a world that’s well described by the Hayekian picture of experimentation and evolution – the low barriers to entry and the ease of widespread distribution of the products reward experimentation. Making a new airliner, in contrast, involves years of development and outlays of tens of billions of dollars in development cost before any products are sold. Unsurprisingly, the only players are two huge companies – essentially a world duopoly – each of which is in receipt of substantial state aid of one form or another. The lesson is that technological innovation doesn’t just come in one form. Some innovation – with low barriers to entry, often building on existing technological platforms – can be done by individuals or small companies, and can be understood well in terms of the Hayekian picture. But innovation on a larger scale, the more radical innovation that leads to new general purpose technologies, needs either a large company with a protected income stream or outright state action. In the past the companies able to carry out innovation on this scale would typically have been a state sponsored “national champion”, supported perhaps by guaranteed defence contracts, or the beneficiary of a monopoly or cartel, such as the postwar Bell Labs.

If the prevalence of this Hayekian thinking about technological innovation really does mean that we’re less able now to introduce major, world-changing innovations than we were 50 years ago, this would matter a great deal. One way of thinking about this is in evolutionary terms – if technological innovation is only able to proceed incrementally, we’re less able to adapt to sudden shocks, less able to anticipate the future, and at risk of being locked into technological trajectories that we can’t alter later in response to unexpected changes in our environment or unanticipated consequences. I’ve written earlier about the suggestion that, far from seeing universal accelerating change, we’re currently seeing innovation stagnation. The risk is that we’re seeing less in the way of really radical innovation now, at a time when pressing issues like climate change, peak cheap oil and demographic transitions make innovation more necessary than ever. We are seeing a great deal of very rapid innovation in the world of information, but this rapid pace of change in one particular realm has obscured much slower progress in the material and biological realms. It’s in these realms that long development timescales and the large scale of the effort needed mean that the market seems unable to deliver the innovation we need.

It’s not going to be possible, nor would it be desirable, for us to return to the political economies of the mid-twentieth century warfare states that delivered the new technologies that underlie our current economies. Whatever other benefits the turn to free markets may have delivered, it seems to have been less effective at providing radical innovation, and with the need for those radical innovations becoming more urgent, some rethinking is now urgently required.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1-2 billion being discussed if they decide to take the company public in the next 18 months.
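It is worth pausing on what a 15-minute human genome implies about throughput. Here is a back-of-the-envelope sketch; the per-pore read speed is a purely illustrative assumption, not a figure from the company, and a real assembly would need several-fold coverage on top of this single-pass estimate.

```python
genome_bases = 3.2e9         # approximate length of a human genome, in bases
target_seconds = 15 * 60     # the claimed 15-minute run

# Aggregate read speed needed for a single pass of the genome.
required_bases_per_second = genome_bases / target_seconds
print(f"Aggregate read speed needed: ~{required_bases_per_second:,.0f} bases/s")

# If each pore reads a few hundred bases per second (an assumption for
# illustration only), how many pores must run in parallel?
assumed_pore_speed = 400.0   # bases/s per pore (illustrative)
pores_needed = required_bases_per_second / assumed_pore_speed
print(f"Pores needed at {assumed_pore_speed:.0f} bases/s each: ~{pores_needed:,.0f}")
```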

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumables cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of their program to develop a whole family of different pores able to discriminate between different types of molecules.

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology.) Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.