Feynman, Drexler, and the National Nanotechnology Initiative

It’s fifty years since Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom”, and the anniversary has prompted a number of articles reflecting on its significance. The lecture has achieved mythic importance in discussions of nanotechnology; to many, it is nothing less than the foundation of the field. This myth has been critically examined by Chris Toumey (see this earlier post), who finds that the significance of the lecture was attached retrospectively, rather than being apparent as serious efforts in nanotechnology got underway.

There’s another narrative, though, that is popular with followers of Eric Drexler. According to this story, Feynman laid out in his lecture a coherent vision of a radical new technology; Drexler popularised this vision and gave it the name “nanotechnology”. Then, inspired by Drexler’s vision, the US government launched the National Nanotechnology Initiative. This initiative was then hijacked by chemists and materials scientists, whose work had nothing to do with the radical vision. In this way, funding obtained on the basis of the expansive promises of “molecular manufacturing” – the Feynman vision as popularised by Drexler – has been used to research useful but essentially mundane products like stain-resistant trousers and germicidal washing machines. To add insult to injury, the materials scientists who had so successfully hijacked the funds then went on to belittle and ridicule Drexler and his theories. A recent article in the Wall Street Journal – “Feynman and the Futurists”, by Adam Keiper – is written from this standpoint, and Drexler himself has expressed satisfaction with the piece on his own blog. I think this account is misleading at almost every point; the reality is both more complex and more interesting.

To begin with, Feynman’s lecture didn’t present a coherent vision at all; instead it was an imaginative but disparate set of ideas linked only by the idea of control on a small scale. I discussed this in my article in the December issue of Nature Nanotechnology – Feynman’s unfinished business (subscription required), and for more details see this series of earlier posts on Soft Machines (Re-reading Feynman Part 1, Part 2, Part 3).

Of the ideas dealt with in “Plenty of Room”, some have already come to pass and have indeed proved economically and societally transformative. These include the idea of writing on very small scales, which underlies modern IT, and the idea of making layered materials with precisely controlled layer thicknesses on the atomic scale, which was realised in techniques like molecular beam epitaxy and CVD, whose results you see every time you use a white light-emitting diode or a solid-state laser of the kind your DVD player contains. I think there were two ideas in the lecture that did contribute to the vision popularised by Drexler – the idea of “a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on”, and, linked to this, the idea of doing chemical synthesis by physical processes. The latter idea has been realised at the proof-of-principle level by carrying out chemical reactions with a scanning tunnelling microscope; there’s been a lot of work in this direction since Don Eigler’s demonstration of STM control of single atoms, no doubt some of it funded by the much-maligned NNI, but I think it’s fair to say that this approach has so far turned out to be more technically difficult and less useful (on foreseeable timescales) than people anticipated.

Strangely, I think the second part of the fable – Drexler as mere populariser of the Feynman vision – actually underestimates the originality of Drexler’s own contribution. The arguments that Drexler made in support of his radical vision of nanotechnology drew extensively on biology, an area that Feynman had touched on only superficially. What’s striking if one re-reads Drexler’s original PNAS article, and indeed Engines of Creation, is how biologically inspired the vision is – the models he looks to are the protein- and nucleic acid-based machines of cell biology, like the ribosome. In Drexler’s writing now (see, for example, this recent entry on his blog), this biological inspiration is very much to the fore; he’s looking to the DNA-based nanotechnology of Ned Seeman, Paul Rothemund and others as the exemplar of the way forward to fully functional, atomic-scale machines and devices. This work builds on the self-assembly paradigm that has been such a big part of academic nanotechnology around the world.

There’s an important missing link between the biological inspiration of ribosomes and molecular motors and the vision of “tiny factories” – the scaled-down mechanical engineering familiar from the simulations of atom-based cogs and gears produced by Drexler and his followers. What wasn’t fully recognised until after Drexler’s original work was that the fundamental operating principles of biological machines are quite different from the rules that govern macroscopic machines, simply because the way physics works in water at the nanoscale is quite different from the way it works in our familiar macroworld. I’ve argued at length on this blog, in my book “Soft Machines”, and elsewhere (see, for example, “Right and Wrong Lessons from Biology”) that this means the lessons one should draw from biological machines are rather different from the ones Drexler originally drew.

There is one final point worth making. From the perspective of Washington-based writers like Keiper, one can understand the focus on the interactions between academic scientists and business people in the USA, Drexler and his followers, and the machinations of the US Congress. But, from the point of view of the wider world, this is a rather parochial perspective. I’d estimate that somewhere between a quarter and a third of the world’s nanotechnology research is being done in the USA. Perhaps for the first time in recent years, a major new technology is largely being developed outside the USA – in Europe to some extent, but with an unprecedented leading role being taken in places like China, Korea and Japan. In these places the “nanotech schism” that seems so important in the USA simply isn’t relevant; people are just pressing on to where the technology leads them.

Why and how should governments fund basic research?

Yesterday I took part in a Policy Lab at the Royal Society, on the theme The public nature of science – Why and how should governments fund basic research? I responded to a presentation by Professor Helga Nowotny, the Vice-President of the European Research Council, saying something like the following:

My apologies to Helga, but my comments are going to be rather UK-centric, though I hope they illustrate some of the wider points she’s made.

This is a febrile time in British science policy.

We have an obsession amongst both the research councils and the HE funding bodies with the idea of impact – how can we define and measure the impact that research has on wider society? While these bodies are at pains to define impact widely, involving better policy outcomes, improvements in quality of life and broader culture, there is much suspicion that all that really counts is economic impact.

We have had a number of years in which the case that science produces direct and measurable effects on economic growth and jobs has been made very strongly, and has been rewarded by sustained increases in public science spending. There is a sense that these arguments are no longer as convincing as they were a few years ago, at least for the people in Treasury who are going to be making the crucial spending decisions at a time of fiscal stringency. As Helga argues, the relationship between economic growth in the short term, at a country level, and spending on scientific R&D is shaky, at best.

And in response to these developments, we have a deep unhappiness amongst the scientific community at what’s perceived as a shift from pure, curiosity-driven, blue-skies research into research and development.

What should our response to this be?

One response is to up the pressure on scientists to deliver economic benefits. This, to some extent, is what’s happening in the UK. One problem with this approach is that it probably overstates the importance of basic science in the innovation system. Scientists aren’t the only innovators – innovation takes place in industry and in the public sector, and it can involve customers and users too. Maybe our innovation system does need fixing, but it’s not obvious that what needs most attention is what scientists do. But certainly, we should look at ways to open up the laboratory, as Helga puts it, and at the broader institutional and educational preconditions that allow science-based innovation to flourish.

Another response is to argue that the products of free scientific inquiry have intrinsic societal worth, and should be supported “as an ornament to civilisation”. Science is like the opera: something we support because we are civilised. One trouble with this argument is that it involves a certain degree of personal taste – I dislike opera greatly, and who’s to say that others won’t feel the same about astronomy? A more serious problem is that we don’t actually support the arts that much, in financial terms, in comparison to the science budget. On this argument we’d be employing a lot fewer scientists than we are now (and probably paying them less).

A third response is to emphasise science’s role in solving the problems of society, while stressing the long-term nature of this project. The idea is to direct science towards broad societal goals. Of course, as soon as one has said this one has to ask “whose goals?” – that’s why public engagement, and indeed politics in the most general sense, becomes important. In Helga’s words, we need to “recontextualise” science for current times. It’s important to stress that, in this kind of “Grand Challenge” driven science, one should specify a problem – not a solution. It is important, as well, to think clearly about different timescales, to put in place possibilities for the long term as well as responding to the short-term imperative.

For example, the problem of moving to low-carbon energy sources is top of everyone’s list of grand challenges. We’re seeing some consensus (albeit not a very enthusiastic one) around the immediate need to build new nuclear power stations, to implement carbon capture and storage and to expand wind power, and research is certainly needed to support this, for example to reduce the high cost and energy overheads of carbon capture and storage. But it’s important to recognise that many of these will be, at best, stop-gap, interim solutions, and to make sure we’re putting the research in place to enable solutions that will be sustainable for the long term. We don’t know, at the moment, what those solutions will be. Perhaps fusion will finally deliver, maybe a new generation of cellulosic biofuels will have a role, perhaps (as my personal view favours) large-scale, cheap photovoltaics will be the answer. It’s important to keep the possibilities open.

So, this kind of societally directed, “Grand Challenge”-inspired research isn’t necessarily short-term, applied research, and although the practicalities of production and scale-up need to be integrated at an early stage, it’s not necessarily driven by industry. It needs to preserve a diversity of approaches, to be robust in the face of our inevitable uncertainty.

One of Helga’s contributions to the understanding of modern techno-science has been the idea of “Mode 2 knowledge production”, which she defined in an influential book with Michael Gibbons and others. In this new kind of science, problems are defined from the outset in the context of potential application; they are solved by bringing together transient, transdisciplinary networks; and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability.

This idea has been controversial. I think many people accept that it represents the direction of travel of recent science; what’s at issue is whether it is a good thing. Helga and her colleagues have been at pains to stress that their work is purely descriptive, and implies no judgement of the desirability of these changes, but many of my colleagues in academic science think they are very definitely undesirable (see my earlier post Mode 2 and its discontents). One interesting point, though, is that in arguing against more directed ways of managing science, many people point to the many very valuable discoveries that have been made serendipitously in the course of undirected, investigator-driven research. Examples are manifold, from lasers to giant magnetoresistance, to restrict the examples to physics. It’s worth noting, though, that while this is often made as an argument against so-called “instrumental” science, it actually appeals to instrumental values. If you make this argument, you are already conceding that the purpose of science is to yield progress towards economic or political goals; you are simply arguing about the best way to organise science to achieve those goals.

Not that we should think this new. In the manifestos for modern science, written by Francis Bacon, that were so important in defining the mission of this society at its foundation three hundred and fifty years ago, the goal of science is defined as “an improvement in man’s estate and an enlargement of his power over nature”. This was a very clear contextualisation of science for the seventeenth century; perhaps our recontextualisation of science for the 21st century won’t prove so very different.

Easing the transition to a new solar economy

In the run-up to the Copenhagen conference, a UK broadcaster has been soliciting opinions from scientists in response to the question “Which idea, policy or technology do you think holds the greatest promise or could deliver the greatest benefit for addressing climate change?”. Here’s the answer given by myself and my colleague Tony Ryan.

We think the single most important idea about climate change is the optimistic one, that, given global will and a lot of effort to develop the truly sustainable technologies we need, we could emerge from some difficult years to a much more positive future, in which a stable global population lives prosperously and sustainably, supported by the ample energy resources of the sun.

We know this is possible in principle, because the total energy arriving on the planet every day from the sun far exceeds any projection of what energy we might need, even if the earth’s whole population enjoys the standard of living that we in the developed world take for granted.
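To put a rough number on that claim, here is a back-of-envelope sketch in Python (my addition; the solar constant, Earth radius and albedo are standard textbook values, while the ~18 TW figure for global primary energy demand is a round number assumed purely for illustration):

```python
import math

# Back-of-envelope comparison of the solar power the Earth absorbs with
# total human energy demand. All numbers are rough, assumed values.
solar_constant = 1361.0      # W/m^2 arriving at the top of the atmosphere
earth_radius = 6.371e6       # m
albedo = 0.3                 # fraction of sunlight reflected straight back to space

# Sunlight is intercepted over the Earth's cross-sectional disc, pi*R^2.
intercepted = solar_constant * math.pi * earth_radius**2      # W
absorbed = intercepted * (1.0 - albedo)                       # W

human_demand = 18e12         # W; assumed round figure for global primary energy use

print(f"Solar power absorbed by the Earth: {absorbed / 1e15:.0f} PW")
print(f"Human primary energy demand:       {human_demand / 1e12:.0f} TW")
print(f"Ratio: roughly {absorbed / human_demand:,.0f} to 1")
```

Even allowing for the fact that only a small fraction of this input could realistically be captured, the headroom is enormous.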

Our problem is that, since the industrial revolution, we have become dependent on energy in a highly concentrated form, from burning fossil fuels. It’s this that has led not just to our prosperity in the developed world, but to our very ability to feed the world at its current population levels. Before the industrial revolution, the limits on population were set by the sun and by the productivity of the land; fossil fuels broke that connection (initially through mechanisation and distribution, which led to a small increase in population, but in the last century by allowing us to greatly increase agricultural yields using nitrogen fertilizers made by the highly energy-intensive Haber-Bosch process). Now we see that the last three hundred years have been a historical anomaly, powered by fossil fuels in a way that can’t continue. But we can’t go back to pre-industrial ways without mass starvation and a global disaster.

So the new technologies we need are those that will allow us to collect, concentrate, store and distribute energy derived from the sun with greater efficiency, and on a much bigger scale, than we can manage at the moment. These will include new types of solar cells that can be made over very much bigger areas – hectares and square kilometres, rather than the square metres we have now. We’ll need improvements in crops and agricultural technologies allowing us to grow more food, and perhaps to use alternative algal crops in marginal environments for sustainable biofuels, without the need to bring a great deal of extra land into cultivation. And we’ll need new ways of moving energy around and storing it. Working renewable energy technologies exist now; what’s important to understand is the problem of scale – they simply cannot be deployed on a big enough scale, in a short enough time, to fill our needs, and the needs of large, fast-developing countries like India and China, for plentiful energy in a concentrated form. That’s why new science and new technology are urgently needed.

This development will take time – with will and urgency, perhaps by 2030 we might see significant progress towards a world powered by renewable, sustainable energy. In the meantime, the climate crisis becomes urgent. That’s why we need interim technologies, which already exist in prototype, to allow us to cross the bridge to the new sunshine-powered world. These technologies need development if they aren’t themselves going to store up problems for the future – we need to make carbon capture and storage affordable, to implement a new generation of nuclear power plants that maximise reliability and minimise waste, and to learn how to use the energy we have more efficiently.

The situation we are in is urgent, but not hopeless; there is a positive goal worth striving for. But it will need more than modest lifestyle changes and policy shifts to get there; we need new science and new technology, developed not in the spirit of a naive attempt to implement a “technological fix”, but accompanied by a deep understanding of the world’s social and economic realities.

A crisis of trust?

One sometimes hears it said that there’s a “crisis of trust in science” in the UK, though this seems to be based on impressions rather than evidence. So it’s interesting to see the latest in an annual series of opinion polls comparing the degree of public trust in various professional groups. The polls, carried out by Ipsos MORI, are commissioned by the Royal College of Physicians, who naturally welcome the news that, yet again, doctors are the most trusted profession, with 92% of those polled saying they would trust doctors to tell the truth. But, for all the talk of a crisis of trust in science, scientists as a profession don’t do so badly either, with 70% of respondents trusting scientists to tell the truth. To put this in context, the professions at the bottom of the table, politicians and journalists, are trusted by only 13% and 22% respectively.

The figure below puts this information in some kind of historical context. Since this type of survey began, in 1983, there’s been a remarkable consistency – doctors are at the top of the trust league, journalists and politicians vie for the bottom place, and scientists emerge in the top half. But there does seem to be a small but systematic upward trend for the proportion trusting both doctors and scientists. A headline that would be entirely sustainable on these figures would be “Trust in scientists close to all time high”.

One wrinkle it would be interesting to see explored further is that there are some overlapping categories here. Professors score higher than scientists for trust, despite the fact that many scientists are themselves professors (me included). Presumably this reflects the fact that people lump scientists who work directly for government or for industry in with academic scientists; it’s a reasonable guess that the degree to which the public trusts scientists varies according to whom they work for. One feature of this set of figures that does interest me is the relatively high degree of trust attached to civil servants, in comparison to the very low levels of trust in politicians. It seems slightly paradoxical that people trust the people who operate the machinery of government more than they trust those entrusted to oversee it on behalf of the people, but this does emphasise that there is by no means a generalised crisis of trust in our institutions; instead we see a rather specific failure of trust in politics and journalism, and to a slightly lesser extent business.

Trust in professions in the UK, as revealed by the annual Ipsos MORI survey carried out for the Royal College of Physicians.

Moral hazard and geo-engineering

Over the last year of financial instability, we’ve heard a lot about moral hazard. The term originally arose in the insurance industry, where it refers to the suggestion that if people are insured against some negative outcome, they may be more liable to behave in ways that increase the risk of that outcome arising. So, if your car is insured against all kinds of accident damage, you might be tempted to drive that bit more recklessly, knowing that you won’t have to pay for all the consequences of an accident. In the last year, it’s been all too apparent that the banking system has seen more than its fair share of recklessness, and here the role of moral hazard seems pretty clear – why worry about the possibility of a lucrative bet going sour when you believe the taxpayer will bail out your bank if it’s in danger of going under? The importance of the concept of moral hazard in financial matters is obvious, but it may also be useful when we’re thinking about technological choices.

This issue is raised rather clearly in a report released last week by the UK’s national science academy, the Royal Society – Geoengineering the climate: science, governance and uncertainty. It is an excellent report, but judging by the way it’s been covered in the news, it’s in danger of pleasing no-one. Those environmentalists who regard any discussion of geo-engineering as anathema will be dismayed that the idea is gaining any traction at all (and this point of view is not at all out of the mainstream, as this commentary from the science editor of the Financial Times shows). Techno-optimists, on the other hand, will be impatient with the serious reservations that the report has about the prospect of geo-engineering. The strongest endorsement of geo-engineering that the report makes is that we should think of it as a plan B, an insurance policy in case serious reductions in CO2 emissions don’t prove possible. But, if investigating geo-engineering is an insurance policy, the report asks, won’t it subject us to precisely the problem of moral hazard?

Unquestionably, people unwilling to confront the need for the world to make serious reductions to CO2 emissions will take comfort in the idea that geo-engineering might offer another way of mitigating dangerous climate change; in this sense the parallel with moral hazard in insurance and banking is exact. There are parallels in the potential catastrophic consequences of this moral hazard, as well. It’s likely that the largest costs won’t fall on the people who benefit most from the behaviour that’s encouraged by the belief that geo-engineering will be able to save them from the worst consequences of their actions. And in the event of the insurance policy being needed, it may not be able to pay out – the geo-engineering methods available may not end up being sufficient to avert disaster (and, indeed, through unanticipated consequences may make matters worse). On the other hand, the report wonders whether seeing geo-engineering being taken seriously might have the opposite effect – convincing some people that if such drastic measures are being contemplated, then urgent action to reduce emissions really is needed. I can’t say I’m hugely convinced by this last argument.

Food nanotechnology – their Lordships deliberate

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the branches of Parliament, the UK legislature, with powers to revise and scrutinise legislation and, through its select committees, to hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently, as part of a slightly ramshackle programme of constitutional reform, the influence of the hereditaries has been much reduced, with the majority of the chamber now made up of members appointed for life by the government, drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from a democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on its website; this includes submissions from NGOs, industry organisations, scientific organisations and individual scientists. There’s a lot of material there, but taken together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.

Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed both at buyers and manufacturers. The attractions seem to be obvious – clean, emission free transport, seemingly resolving effortlessly the conflict between people’s desire for personal mobility and our need to move to a lower carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike, of the UK’s Royal Society of Chemistry, poses the problem in numbers.

The first question we have to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the much more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this a transmission loss getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied from different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations the situation would be substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction of emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
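To make the arithmetic explicit, here is a minimal sketch of the efficiency chain (my addition). The engine and power-station figures are those quoted above; the grid and battery efficiencies are illustrative values I’ve assumed, chosen to land near the ~31% figure Pike quotes:

```python
# Rough comparison of fuel-to-wheel efficiencies.
# Engine and power-station figures follow those quoted in the text;
# grid and battery figures are assumed, illustrative values.
petrol_engine = 0.32        # average overall efficiency of petrol engines
diesel_engine = 0.45        # average overall efficiency of diesel engines

power_station = 0.40        # "a bit more than 40%", rounded here
grid_transmission = 0.93    # assumed ~7% loss from power station to plug
battery_cycle = 0.85        # assumed charge/discharge (round-trip) efficiency

electric_chain = power_station * grid_transmission * battery_cycle

print(f"Petrol:   {petrol_engine:.1%}")
print(f"Diesel:   {diesel_engine:.1%}")
print(f"Electric: {electric_chain:.1%} (generation x grid x battery)")
```

The point of laying it out this way is that any real advantage for the electric chain has to come from cleaner generation, not from efficiency alone.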

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D into renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but that it is necessary to begin the process of changing that. In the meantime, one should be pursuing low-carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract, subscription required for full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley, at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one by the changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole directly, he uses an enzyme to chop single bases off the end of the DNA; as each base passes through the pore, the characteristic change in current is sensitive enough to identify its chemical identity. The main achievement reported in this paper is in engineering the pore – it is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
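As a toy illustration of the read-out principle – and nothing more – here is a sketch in which each cleaved base produces a characteristic, noisy current blockade and is identified by the nearest reference level. The current values and noise level are invented for illustration; they are not the figures measured in the paper.

```python
import random

# Toy illustration of base identification by current blockade.
# The current levels and noise are invented for illustration only;
# they are not the values measured in the experiments described above.
TYPICAL_BLOCKADE = {"A": 40.0, "C": 44.0, "G": 48.0, "T": 52.0}   # pA, hypothetical
NOISE = 1.0                                                        # pA, hypothetical

def measure(base):
    """Simulate a noisy current reading as a cleaved base sits in the pore."""
    return random.gauss(TYPICAL_BLOCKADE[base], NOISE)

def call_base(current):
    """Assign whichever base has the closest typical blockade level."""
    return min(TYPICAL_BLOCKADE, key=lambda b: abs(TYPICAL_BLOCKADE[b] - current))

sequence = "GATTACA"
readings = [measure(b) for b in sequence]
print("true:  ", sequence)
print("called:", "".join(call_base(r) for r in readings))
```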

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – itself an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract, subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company – Complete Genomics – plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

The Economy of Promises

This essay was first published in Nature Nanotechnology 3 p65 (2008), doi:10.1038/nnano.2008.14.

Can nanotechnology cure cancer by 2015? That’s the impression that many people will have taken from the USA’s National Cancer Institute’s Cancer Nanotechnology Plan [1], which begins with the ringing statement “to help meet the Challenge Goal of eliminating suffering and death from cancer by 2015, the National Cancer Institute (NCI) is engaged in a concerted effort to harness the power of nanotechnology to radically change the way we diagnose, treat, and prevent cancer.” No-one doubts that nanotechnology potentially has a great deal to contribute to the struggle against cancer; new sensors promise earlier diagnosis, and new drug delivery systems for chemotherapy offer useful increases in survival rates. But this is a long way from eliminating suffering and death within 7 years. Now, a close textual analysis of the NCI’s document shows that actually there’s no explicit claim that nanotechnology will cure cancer by 2015; the talk is of “challenge goals” and “lowering barriers”. But is it wise to make it so easy to draw this conclusion from a careless reading?

It’s hardly a new insight to observe that the development of nanotechnology has been accompanied by exaggeration and oversold promises (there is, indeed, a comprehensive book documenting this aspect of the subject’s history – Nanohype, by David Berube [2]). It’s tempting for scientists to plead their innocence and try to maintain some distance from this. After all, the origin of the science fiction visions of nanobots and universal assemblers is in fringe movements such as the transhumanists and singularitarians, rather than mainstream nanoscience. And the hucksterism that has gone with some aspects of the business of nanotechnology seems to many scientists a long way from academia. But are scientists completely blameless in the development of an “economy of promises” surrounding nanotechnology?

Of course, the way most people hear about new scientific developments is through the mass media rather than the scientific literature. The process by which a result from an academic nano-laboratory is turned into an item in the mainstream media naturally emphasises the dramatic and newsworthy potential impacts of the research; the road from an academic paper to a press release from a university press office is characterised by a systematic stripping away of cautious language, and a transformation of vague possible future impacts into near-certain outcomes. The key word here is “could” – how often do we read, in the press release accompanying a solid but not revolutionary paper in Nature or Physical Review Letters, that the research “could” lead to revolutionary and radical developments in technology or medicine?

Practical journalism can’t deal with the constant hedging that comes so naturally to scientists, we’re told, so many scientists acquiesce in this process. The chosen “expert” commentators on these stories are often not those with the deepest technical knowledge of the issues, but those who combine communication skills with a willingness to press an agenda of superlative technology outcomes.

An odd and unexpected feature of the way the nanotechnology debate has unfolded is that the concern to anticipate societal impacts and consider ethical dimensions of nanotechnology has itself contributed to the climate of heightened expectations. As the philosopher Alfred Nordmann notes in his paper If and then: a critique of speculative nanoethics (PDF) [3], speculations on the ethical and societal implications of the more extreme extrapolations of nanotechnology serve implicitly to give credibility to such visions. If a particular outcome of technology is conceivable and cannot be demonstrated to be contrary to the laws of nature, then we are told it is irresponsible not to consider its possible impacts on society. In this way questions of plausibility or practicality are put aside. In the case of nanotechnology, we have organisations like the Foresight Nanotech Institute and the Centre for Responsible Nanotechnology, whose ostensible purpose is to consider the societal implications of advanced nanotechnology, but which in reality are advocacy organisations for the particular visions of radical nanotechnology originally associated with Eric Drexler. As the field of “nanoethics” grows, and brings in philosophers and social scientists, it’s inevitable that there will be a tendency to give these views more credibility than academic nanoscientists would like.

Scientists, then, can feel a certain powerlessness about the way the more radical visions of nanotechnology have taken root in the public sphere and retain their vigour. It may seem that there’s not a lot scientists can do about the media treats science stories; certainly no-one made much of a media career by underplaying the potential significance of scientific developments. This isn’t to say that within the constraints of the requirements of the media, scientists shouldn’t exercise responsibility and integrity. But perhaps the “economy of promises” is embedded more deeply in the scientific enterprise than this.

One class of document that is absolutely predicated on promises is the research proposal. As we see more and more pressure from funding agencies to do research with a potential economic impact, it’s inevitable that scientists will get into the habit of stating more firmly what might be quite tenuous claims that their research will lead to spectacular outcomes. It’s perhaps also understandable that the conflict between this and more traditional academic values might lead to a certain cynicism; scientists have their own ways of justifying their work to themselves, which might mitigate any guilt they feel about making inflated or unfeasible claims for the ultimate applications of their work. One way of justifying what might seem somewhat reckless claims is the observation that science and technology have indeed produced huge impacts on society and the economy, even if those impacts were unforeseen at the time of the original research. Thus one might argue to oneself that even though the claims made by researchers individually might be implausible, collectively one might have a great deal more confidence that the research enterprise as a whole will deliver important results.

Thus scientists may not be at all confident that their own work will have a big impact, but are confident that science in general will deliver big benefits. On the other hand, the public have long memories for promises that science and technology have made but failed to deliver (the idea that nuclear power would produce electricity “too cheap to meter” being one of the most notorious). This, if nothing else, suggests that the nanoscience community would do well to be responsible in what they promise.

1. http://nano.cancer.gov/about_alliance/cancer_nanotechnology_plan.asp
2. Berube, D. Nanohype, (Prometheus Books, Amherst NY, 2006)
3. Nordmann, A. NanoEthics 1, 31-46 (2007).

Brownian motion and how to run a lottery (or a bank)

This entry isn’t really about nanotechnology at all; instead it’s a ramble around some mathematics that I find interesting, that suddenly seems to have become all too relevant in the financial crisis we find ourselves in. I don’t claim great expertise in finance, so my apologies in advance for any inaccuracies.

Brownian motion – the continuous random jiggling of nanoscale objects and structures that’s a manifestation of the random nature of heat energy – is a central feature of the nanoscale world, and much of my writing about nanotechnology revolves around how we should do nanoscale engineering in a way that exploits Brownian motion, in the way biology does. In this weekend’s magazine reading, I was struck to see some of the familiar concepts from the mathematics of Brownian motion showing up, not in Nature, but in an article in The Economist’s special section on the future of finance – In Plato’s Cave – which explains how much of the financial mess we find ourselves in derives from the misapplication of these ideas. Here’s my attempt to explain, as simply as possible, the connection.

The motion of a particle undergoing Brownian motion can be described as a random walk, with a succession of steps in random directions. For every step taken in one direction, there’s an equal probability that the particle will go the same distance in the opposite direction, yet on average a particle doing a random walk does make some progress – the average distance gone grows as the square root of the number of steps. To see this for a simple situation, imagine that the particle is moving on a line, in one dimension, and either takes a step of one unit to the right (+1) or one unit to the left (-1), so we can track its progress just by writing down all the steps and adding them up, like this, for example: (+1 -1 +1 …. -1) . After N steps, on average the displacement (i.e. the distance gone, including a sign to indicate the direction) will be zero, but the average magnitude of the distance isn’t zero. To see this, we just look at the square root of the average value of the square of the displacement (since squaring the displacement takes away any negative signs). So we need to expand a product that looks something like (+1 -1 +1 …. -1) x (+1 -1 +1 …. -1). The first term of the first bracket times the first term of the second bracket is always +1 (since we either have +1 x +1 or -1 x -1), and the same is true for all the products of terms in the same position in both brackets. There are N of these, so this part of the product adds up to N. All the other terms in the expansion are one of (+1 x +1), (+1 x -1), (-1 x +1), (-1 x -1), and if the successive steps in the walk really are uncorrelated with each other these occur with equal probability so that on average adding all these up gives us zero. So we find that the mean squared distance gone in N steps is N. Taking the square root of this to get a measure of the average distance gone in N steps, we find this (root mean squared) distance is the square root of N.
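For readers who prefer to see this numerically rather than algebraically, here is a short simulation sketch (my addition, in Python) that estimates the root-mean-square displacement of a ±1 random walk and compares it with √N:

```python
import random

# Numerical check that the root-mean-square displacement of a +1/-1 random
# walk grows as the square root of the number of steps N.
def rms_displacement(n_steps, n_walks=5000):
    total_sq = 0
    for _ in range(n_walks):
        final = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total_sq += final * final
    return (total_sq / n_walks) ** 0.5

for n in (100, 400, 1600):
    print(f"N = {n:4d}:  rms distance = {rms_displacement(n):6.1f}   sqrt(N) = {n**0.5:.1f}")
```

Quadrupling the number of steps only doubles the typical distance gone, which is exactly the square-root scaling derived above.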

The connection of these arguments to financial markets is simple. According to the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk. So, if you need to calculate a fair value for, say, an option to buy this share in a year’s time, you can do this equipped with statistical arguments about the likely movement of a random walk, of the kind I’ve just outlined. It is a smartened-up version of the theory of random walks that I’ve just explained that forms the basis of the Black-Scholes model for pricing options, which is what made the huge expansion in trading of complex financial derivatives possible – as the Economist article puts it, “The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses… The new model showed how to work out an option price from the known price-behaviour of a share and a bond. … . Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk.”
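To see how the random-walk picture feeds into option pricing, here is a minimal Monte Carlo sketch of my own (with made-up inputs, and a simulation rather than the closed-form Black-Scholes formula): if the logarithm of the share price follows a random walk with drift, a fair price for a European call option is the discounted average of its payoff over many simulated price paths.

```python
import math
import random

# Minimal Monte Carlo pricing of a European call option, assuming the log of
# the share price follows a random walk with drift (geometric Brownian motion).
# All parameter values below are illustrative.
def call_price_mc(spot, strike, rate, volatility, maturity, n_paths=200_000):
    total_payoff = 0.0
    for _ in range(n_paths):
        # One draw of the terminal share price under the assumed random walk.
        z = random.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * volatility**2) * maturity
                                   + volatility * math.sqrt(maturity) * z)
        total_payoff += max(terminal - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * total_payoff / n_paths

# Illustrative inputs: share at 100, strike 100, 5% interest, 20% volatility, 1 year.
print(f"Estimated call price: {call_price_mc(100, 100, 0.05, 0.2, 1.0):.2f}")
```

With these inputs the estimate comes out close to the Black-Scholes value of about 10.45; the trouble described below arises when the assumed distribution of price movements is the wrong one.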

Surely such a simple model can’t apply to a real market? Of course, we can develop more complex models that lift many of the approximations in the simplest theory, but it turns out that some of the key results remain. The most important is the basic √N scaling of the expected movement. For example, my simple derivation assumed all steps are the same size – we know that on some days prices rise or fall a lot, and on others not so much. So what happens if we have a random walk with step sizes that are themselves random? It’s easy to convince oneself that the derivation stays the same, but instead of adding up N occurrences of (-1 x -1) or (+1 x +1) we have N occurrences of (a x a), where the probability that the step size has value a is given by p(a). So we end up with the simple modification that the mean squared distance gone is N times the mean of the square of the step size – a change which, crucially, doesn’t affect the √N scaling.

But, and this is the big but, there’s a potentially troublesome hidden assumption here, which is that the distribution of step sizes actually has a well defined, well behaved mean squared value. We’d probably guess that the distribution of step sizes looks like a bell-shaped curve, centred on zero and getting smaller the further one gets from the origin. The familiar Gaussian curve fits the bill, and indeed such a curve is characterised by a well defined mean squared value which measures the width of the curve (mathematically, a Gaussian is described by a distribution of step sizes a given by p(a) proportional to exp(-a^2/2s^2), which gives a root mean squared step size of s). Gaussian curves are very common, for reasons described later, so this all looks very straightforward. But one should be aware that not all bell-shaped curves behave so well. Consider a distribution of step sizes a given by p(a) proportional to 1/(a^2+s^2). This curve (which is known in the trade as a Lorentzian) looks bell-shaped and is characterised by a width s. But when we try to find the average value of the square of the step size, we get an answer that diverges – it’s effectively infinite. The problem is that although the probability of taking a very large step goes to zero as the step size gets larger, it doesn’t go to zero very fast. Rather than the chance of a very large jump becoming exponentially small, as happens for a Gaussian, the chance goes to zero only as the inverse square of the step size. This apparently minor difference is enough to completely change the character of the random walk. One needs entirely new mathematics to describe this sort of random walk (which is known as a Lévy flight) – and in particular one ends up with a different scaling of the distance gone with the number of steps.
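The difference is easy to see in a simulation (again, a sketch I’ve added, with an arbitrary width parameter): draw step sizes from a Gaussian and from a Lorentzian of the same width s, and compare how far the two walks typically get.

```python
import math
import random

# Compare the spread of random walks whose step sizes are drawn from a
# Gaussian and from a Lorentzian (Cauchy) distribution of the same width s.
S = 1.0

def gaussian_step():
    return random.gauss(0.0, S)

def lorentzian_step():
    # Standard way to draw a sample from a Lorentzian (Cauchy) distribution.
    return S * math.tan(math.pi * (random.random() - 0.5))

def median_abs_displacement(step, n_steps, n_walks=2000):
    finals = sorted(abs(sum(step() for _ in range(n_steps))) for _ in range(n_walks))
    return finals[n_walks // 2]   # median is robust even when the variance diverges

for n in (100, 400, 1600):
    g = median_abs_displacement(gaussian_step, n)
    l = median_abs_displacement(lorentzian_step, n)
    print(f"N = {n:4d}:  Gaussian walk ~ {g:7.1f}   Lorentzian walk ~ {l:9.1f}")
```

The Gaussian walk spreads as √N, while the typical displacement of the Lorentzian walk grows in proportion to N itself, because the occasional enormous step dominates everything else.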

In the jargon, this kind of distribution is known as having a “fat tail”, and it was not factoring in the difference between a fat tailed distribution and a Gaussian or normal distribution that led the banks to so miscalculate their “value at risk”. In the words of the Economist article, the mistake the banks made “was to turn a blind eye to what is known as “tail risk”. Think of the banks’ range of possible daily losses and gains as a distribution. Most of the time you gain a little or lose a little. Occasionally you gain or lose a lot. Very rarely you win or lose a fortune. If you plot these daily movements on a graph, you get the familiar bell-shaped curve of a normal distribution (see chart 4). Typically, a VAR calculation cuts the line at, say, 98% or 99%, and takes that as its measure of extreme losses. However, although the normal distribution closely matches the real world in the middle of the curve, where most of the gains or losses lie, it does not work well at the extreme edges, or “tails”. In markets extreme events are surprisingly common—their tails are “fat”. Benoît Mandelbrot, the mathematician who invented fractal theory, calculated that if the Dow Jones Industrial Average followed a normal distribution, it should have moved by more than 3.4% on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved by more than 4.5% on six days; it did so on 366. It should have moved by more than 7% only once in every 300,000 years; in the 20th century it did so 48 times.”

But why should the experts in the banks have made what seems such an obvious mistake? One possibility goes back to the very reason why the Gaussian, or normal, distribution is so important and seems so ubiquitous. This comes from a wonderful piece of mathematics called the central limit theorem. This says that if some random variable is made up from the combination of many independent variables, then even if those variables aren’t themselves taken from a Gaussian distribution, their sum will tend to a Gaussian in the limit of many variables (provided the individual variables have a finite variance). So, given that market movements are the sum of the effects of lots of different events, the central limit theorem would tell us to expect the size of the total market movement to be distributed according to a Gaussian, even if the individual events were described by a quite different distribution. The central limit theorem has a few escape clauses, though, and perhaps the most important one arises from the way one approaches the limit of large numbers. Roughly speaking, the distribution converges to a Gaussian in the middle first. So it’s very common to find empirical distributions that look Gaussian enough in the middle, but still have fat tails, and this is exactly the point Mandelbrot is quoted as making about the Dow Jones.
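Here is a small numerical illustration of that “converges in the middle first” point (my addition, using an assumed toy distribution rather than market data): each variable is usually drawn from a narrow Gaussian but occasionally from a much wider one, so it has a finite variance but fat tails; summing ten of them gives a distribution whose bulk looks normal while its extreme moves are far more frequent than a Gaussian of the same standard deviation would predict.

```python
import math
import random

# Illustration of the central limit theorem converging "in the middle first".
# Each variable is a mixture: usually a narrow Gaussian, occasionally a much
# wider one. This is an assumed, heavy-tailed but finite-variance distribution.
def heavy_tailed():
    return random.gauss(0.0, 10.0) if random.random() < 0.01 else random.gauss(0.0, 1.0)

N_TERMS = 10          # number of variables summed
N_SAMPLES = 200_000
THRESHOLD = 4.0       # how many standard deviations counts as "extreme"

sums = [sum(heavy_tailed() for _ in range(N_TERMS)) for _ in range(N_SAMPLES)]
mean = sum(sums) / N_SAMPLES
std = math.sqrt(sum((x - mean) ** 2 for x in sums) / N_SAMPLES)

observed = sum(abs(x - mean) > THRESHOLD * std for x in sums)
expected = N_SAMPLES * math.erfc(THRESHOLD / math.sqrt(2))   # Gaussian prediction

print(f"Moves beyond {THRESHOLD} standard deviations out of {N_SAMPLES} samples:")
print(f"  observed:            {observed}")
print(f"  Gaussian prediction: {expected:.1f}")
```

The mismatch between the observed and predicted counts of extreme moves is the same pattern Mandelbrot found in the Dow Jones data quoted above.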

The Economist article still leaves me puzzled, though, as everything I’ve been describing has been well known for many years. But maybe well known isn’t the same as widely understood. Just like a lottery, the banks were trading the certainty of many regular small payments against a small probability of making a big payout. But, unlike the lottery, they didn’t get the price right, because they underestimated the probability of making a big loss. And now their loss becomes the loss of the world’s taxpayers.