Can carbon capture and storage work?

Across the world, governments are placing high hopes on carbon capture and storage as the technology that will allow us to go on meeting a large proportion of the world’s growing energy needs from high carbon fossil fuels like coal. The basic technology is straightforward enough; in one variant one burns the coal as normal, and then takes the flue gases through a process to separate the carbon dioxide, which one then pipes off and shuts away in a geological reservoir, for example down an exhausted natural gas field. There are two alternatives to this simplest scheme; one can separate the oxygen from the nitrogen in the air and then burn the fuel in pure oxygen, producing nearly pure carbon dioxide for immediate disposal. Or in a process reminiscent of that used a century ago to make town gas, one can gasify coal to produce a mixture of carbon dioxide and hydrogen, remove the carbon dioxide from the mixture and burn the hydrogen. Although the technology for this all sounds straightforward enough, a rather sceptical article in last week’s Economist, Trouble in Store, points out some difficulties. The embarrassing fact is that, for all the enthusiasm from politicians, no energy utility in the world has yet built a large power plant using carbon capture and storage. The problem is purely one of cost. The extra capital cost of the plant is high, and significant amounts of energy need to be diverted to do the necessary separation processes. This puts a high (and uncertain) price on each tonne of carbon not emitted.

Can technology bring this cost down? This question was considered in a talk last week by Professor Mercedes Maroto-Valer from the University of Nottingham’s Centre for Innovation in Carbon Capture and Storage. The occasion for the talk was a meeting held last Friday to discuss environmentally beneficial applications of nanotechnology; this formed part of the consultation process for the third nanotechnology Grand Challenge to be funded by the EPSRC, the UK’s Engineering and Physical Sciences Research Council. A good primer on the basics of the process can be found in the IPCC special report on carbon capture. At the heart of any carbon capture method is always a gas separation process. This might be helped by better nanotechnology-enabled membranes, or by nanoporous materials (like molecular sieve materials) that can selectively adsorb and release carbon dioxide. These would need to be cheap and capable of sustaining many regeneration cycles.

This kind of technology might help by bringing the cost of carbon capture and storage down from its current rather frightening levels. I can’t help feeling, though, that carbon capture and storage will always remain a rather unsatisfactory technology for as long as its costs remain a pure overhead – thus finding something useful to do with the carbon dioxide is a hugely important step. This is another reason why I think the “methanol economy” deserves serious attention. The idea here is to use methanol as an energy carrier, for example as a transport fuel which is compatible with existing fuel distribution infrastructures and the huge installed base of internal combustion engines. A long-term goal would be to remove carbon dioxide from the atmosphere and use solar energy to convert it into methanol for use as a completely carbon-neutral transport fuel and as a feedstock for the petrochemical industry. The major research challenge here is to develop scalable systems for the photocatalytic reduction of carbon dioxide, or alternatively to do this in a biologically based system. Intermediate steps to a methanol economy might use renewably generated electricity to provide the energy for the creation of methanol from water and carbon dioxide from coal-fired power stations, extracting “one more pass” of energy from the carbon before it is released into the atmosphere. Alternatively process heat from a new generation nuclear power station could be used to generate hydrogen for the synthesis of methanol from carbon dioxide captured from a neighboring fossil fuel plant.
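For concreteness, the overall stoichiometry involved is sketched below (approximate textbook enthalpies, not figures specific to any of the schemes mentioned above): hydrogenating captured carbon dioxide is mildly exothermic once the hydrogen is in hand, while the overall solar-driven route has to supply essentially the whole heat of combustion of the methanol produced.

    $$\mathrm{CO_2 + 3\,H_2 \rightarrow CH_3OH + H_2O} \qquad \Delta H^{\circ} \approx -49\ \mathrm{kJ\,mol^{-1}}$$
    $$\mathrm{CO_2 + 2\,H_2O \rightarrow CH_3OH + \tfrac{3}{2}\,O_2} \qquad \Delta H^{\circ} \approx +726\ \mathrm{kJ\,mol^{-1}}\ \text{(energy supplied by sunlight, renewable electricity or process heat)}$$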

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics; a modern microprocessor will contain several hundred million transistors, every one of which needs to be manufactured to very high tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that make up most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases are folded in a unique three dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in an article in Science magazine in 2003, called Molecular Prodigality (PDF version from Bray’s own website). Protein sequences can be chopped and changed, after the DNA code has been read, by processes of RNA editing and splicing and other types of post-translational modification, and these can lead to distinct changes in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing in up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities means that there are 4.5 × 10^15 subtly different possible types of potassium channels. This isn’t an isolated example; Bray estimates that up to a half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
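As a rough check on where a number of this size could come from, here is a back-of-envelope reconstruction of my own (assuming each of the 13 editing sites is independently either edited or not, and that a channel is assembled from four independently chosen subunits – Bray’s own calculation may differ in detail):

    # Back-of-envelope estimate of potassium channel diversity.
    # Assumptions (mine, for illustration): 13 independent RNA editing sites
    # per subunit, each either edited or not, and a channel built from
    # 4 independently chosen subunits.
    subunit_variants = 2 ** 13                 # 8192 possible subunit sequences
    channel_variants = subunit_variants ** 4   # ordered combinations of 4 subunits
    print(f"subunit variants: {subunit_variants}")
    print(f"channel variants: {channel_variants:.2e}")   # about 4.5e15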

It isn’t at all clear what all this variation is for, if anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true for the case of the adaptive immune system. A human has the ability to make 10^12 different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens that we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.

Nanobots, nanomedicine, Kurzweil, Freitas and Merkle

As Tim Harper observes, with the continuing publicity surrounding Ray Kurzweil, it seems to be nanobot week. In one further contribution to the genre, I’d like to address some technical points made by Rob Freitas and Ralph Merkle in response to my article from last year, Rupturing the Nanotech Rapture, in which I was critical of their vision of nanobots (my thanks to Rob Freitas for bringing their piece to my attention in a comment on my earlier entry). Before jumping straight into the technical issues, it’s worth trying to make one point clear. While I think the vision of nanobots that underlies Kurzweil’s extravagant hopes is flawed, the enterprise of nanomedicine itself has huge promise. So what’s the difference?

We can all agree on why nanotechnology is potentially important for medicine. The fundamental operations of cell biology all take place on the nanoscale, so if we wish to intervene in those operations, there is a logic to carrying out these interventions at the right scale, the nanoscale. But the physical environment of the warm, wet nano-world is a very unfamiliar one, dominated by violent Brownian motion, the viscosity dominated regime of low Reynolds number fluid dynamics, and strong surface forces. This means that the operating principles of cell biology rely on phenomena that are completely unfamiliar in the macroscale world – phenomena like self-assembly, molecular recognition, molecular shape change, diffusive transport and molecule-based information processing. It seems to me that the most effective interventions will use the same “soft nanotechnology” paradigm, rather than being based on a mechanical paradigm that underlies the Freitas/Merkle vision of nanobots, which is inappropriate for the warm wet nanoscale world that our biology works in. We can expect to see increasingly sophisticated drug delivery devices, targeted to the cellular sites of disease, able to respond to their environment, and even able to perform simple molecule-based logical operations to decide appropriate responses to their situation. This isn’t to say that nanomedicine of any kind is going to be easy. We’re still some way away from being able to completely disentangle the sheer complexity of the cell biology that underlies diseases such as cancer or rheumatoid arthritis, while for other hugely important conditions like Alzheimer’s there isn’t even consensus on the ultimate cause of the disease. It’s certainly reasonable to expect improved treatments and better prospects for sufferers of serious diseases, including age-related ones, in twenty years or so, but this is a long way from the prospects of seamless nanobot-mediated neuron-computer interfaces and indefinite life-extension that Kurzweil hopes for.
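To get a feel for how foreign this regime is, here is a rough order-of-magnitude estimate (the speed and size are illustrative numbers of my own choosing, not taken from any particular device) of the Reynolds number for a micron-sized object moving through water, together with the thermal energy scale it is constantly buffeted by:

    # Illustrative parameters (my own choices, order of magnitude only)
    rho = 1000.0    # density of water, kg/m^3
    mu = 1.0e-3     # viscosity of water, Pa s
    size = 1.0e-6   # a ~1 micrometre object, m
    speed = 10e-6   # a generous 10 micrometres per second, m/s

    reynolds = rho * speed * size / mu
    print(f"Reynolds number ~ {reynolds:.0e}")   # ~1e-5: viscosity utterly dominates inertia

    kT = 1.381e-23 * 310    # thermal energy at body temperature, joules
    print(f"kT at 310 K ~ {kT:.2e} J, i.e. about {kT / 1.602e-19 * 1000:.0f} meV")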

I now move on to the specific issues raised in the response from Freitas and Merkle.

Several items that Richard Jones mentions are well-known research challenges, not showstoppers.

Until the show has actually started, this of course is a matter of opinion!

All have been previously identified as such along with many other technical challenges not mentioned by Jones that we’ve been aware of for years.

Indeed, and I’m grateful that the cited page acknowledges my earlier post Six Challenges for Molecular Nanotechnology. However, being aware of these and other challenges doesn’t make them go away.

Unfortunately, the article also evidences numerous confusions: (1) The adhesivity of proteins to nanoparticle surfaces can (and has) been engineered;

Indeed, polyethylene oxide/glycol end-grafted polymers (brushes) are commonly used to suppress protein adsorption at liquid/solid interfaces (and less commonly, brushes of other water soluble polymers, as in the link, can be used). While these methods work pretty well in vitro, they don’t work very well in vivo, as evidenced by the relatively short clearing times of “stealth” liposomes, which use a PEG layer to avoid detection by the body. The reasons for this still aren’t clear, as the fundamental mechanisms by which brushes suppress protein adsorption aren’t yet fully understood.

(2) nanorobot gears will reside within sealed housings, safe from exposure to potentially jamming environmental bioparticles;

This assumes that “feed-throughs” permitting traffic in and out of the controlled environment while perfectly excluding contaminants are available (see point 5 of my earlier post Six Challenges for Molecular Nanotechnology). To date I don’t see a convincing design for these.

(3) microscale diamond particles are well-documented as biocompatible and chemically inert;

They’re certainly chemically inert, but the use of “biocompatible” here betrays a misunderstanding; the fact that proteins adsorb to diamond surfaces is experimentally verified and to be expected. Diamond-like carbon is used as a coating in surgical implants and stents and is biocompatible in the sense that it doesn’t cause cytotoxicity or inflammatory reactions. Its biocompatibility with blood is also good, in the sense that it doesn’t lead to thrombus formation. But this isn’t because proteins don’t adsorb to the surface; it is because there’s a preferential adsorption of albumin rather than fibrinogen, which is correlated with a lower tendency of platelets to attach to the surface (see e.g. R. Hauert, Diamond and Related Materials 12 (2003) 583). For direct experimental measurements of protein adsorption to an amorphous diamond-like film see, for example, here. Almost all of this work has been done not on single crystal diamond but on polycrystalline or amorphous diamond-like films; there’s no reason, though, to suppose the situation will be any different for single crystals, which are simply hydrophobic surfaces of the kind that proteins all too readily adsorb to.

(4) unlike biological molecular motors, thermal noise is not essential to the operation of diamondoid molecular motors;

Indeed, in contrast to the operation of biological motors, which depend on thermal noise, noise is likely to be highly detrimental to the operation of diamondoid motors. Which, to state the obvious, is a difficulty in the environment of the body where such thermal noise is inescapable.

(5) most nanodiamond crystals don’t graphitize if properly passivated;

Depends what you mean by most, I suppose. Raty et al. (Phys. Rev. Lett. 90, 037401, 2003) did quantum simulation calculations showing that 1.2 nm and 1.4 nm ideally terminated diamond particles would undergo spontaneous surface reconstruction at low temperature. The equilibrium surface structure will depend on shape and size, of course, but you won’t know until you do the calculations or have some experiments.

(6) theory has long supported the idea that contacting incommensurate surfaces should easily slide and superlubricity has been demonstrated experimentally, potentially allowing dramatic reductions in friction inside properly designed rigid nanomachinery;

Superlubricity is an interesting phenomenon in which friction falls to very low (though probably non-zero) values when rigid surfaces are put together out of crystalline register and slide past one another. The key phrase above is “properly designed rigid nanomachinery”. Diamond has very low friction macroscopically because it is very stiff, but nanomachines aren’t going to be built out of semi-infinite blocks of the stuff. Measured by, for example, the average relative thermal displacements observed at 300 K, diamondoid nanomachines are going to be rather floppy. It remains to be seen how important this is going to be in permitting leakage of energy out of the driving modes of the machine into thermal energy, and we need to see some simulations of dynamic friction in “properly designed rigid nanomachinery”.
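A crude equipartition estimate illustrates what “rather floppy” means here; the effective stiffnesses below are illustrative guesses of my own, not values taken from any published nanomachine design:

    import math

    kB, T = 1.381e-23, 300.0    # Boltzmann constant (J/K) and temperature (K)

    # For a harmonic degree of freedom of effective stiffness k (N/m),
    # equipartition gives a mean squared thermal displacement <x^2> = kB*T/k.
    for k in (1.0, 10.0, 100.0):    # illustrative stiffnesses, N/m
        x_rms = math.sqrt(kB * T / k)
        print(f"k = {k:6.1f} N/m -> rms thermal displacement ~ {x_rms * 1e9:.3f} nm")

    # For comparison, the carbon-carbon bond length in diamond is about 0.15 nm,
    # so these displacements are not negligible on the atomic scale.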

(7) it is hardly surprising that nanorobots, like most manufactured objects, must be fabricated in a controlled environment that differs from the application environment;

This is a fair point as far as it goes. But consider why it is that an integrated circuit, made in a controlled ultra-clean environment, works when it is brought out into the scruffiness of my office. It’s because it can be completely sealed off, with traffic in and out of the IC carried out entirely by electrical signals. Our nanobot, on the other hand, will need to communicate with its environment by the actual traffic of molecules, hence the difficulty of the feed-through problem referred to above.

(8) there are no obvious physical similarities between a microscale nanorobot navigating inside a human body (a viscous environment where adhesive forces control) and a macroscale rubber clock bouncing inside a clothes dryer (a ballistic environment where inertia and gravitational forces control);

The somewhat strained nature of this simile illustrates the difficulty of conceiving the very foreign and counter-intuitive nature of the warm, wet, nanoscale world. This is exactly why the mechanical engineering intuitions that underlie the diamondoid nanobot vision are so misleading.

and (9) there have been zero years, not 15 years, of “intense research” on diamondoid nanomachinery (as opposed to “nanotechnology”). Such intense research, while clearly valuable, awaits adequate funding

I have two replies to this. Firstly, even accepting the very narrow restriction to diamondoid nanomachinery, I don’t see how the claim of “zero years” squares with what Freitas and Merkle have been doing themselves, as I know that both were employed as research scientists at Zyvex, and subsequently at the Institute of Molecular Manufacturing. Secondly, there has been a huge amount of work in nanomedicine and nanoscience directly related to these issues. For example, the field of manipulation and reaction of individual atoms on surfaces, which directly underlies the visions of mechanosynthesis that are so important to the Freitas/Merkle route to nanotechnology, dates back to Don Eigler’s famous 1990 Nature paper; this paper has since been cited by more than 1300 other papers, which gives an indication of how much work there’s been in this area worldwide.

— as is now just beginning.

And I’m delighted by Philip Moriarty’s fellowship too!

I’ve responded to these points at length, since we frequently read complaints from proponents of MNT that no-one is prepared to debate the issues at a technical level. But I do this with some misgivings. It’s very difficult to prove a negative, and none of my objections amounts to a proof of physical impossibility. But what is not forbidden by the laws of physics is not necessarily likely, let alone inevitable. When one is talking about such powerful human drives as the desire not to die, and the urge to reanimate deceased loved ones, it’s difficult to avoid the conclusion that rational scepticism may be displaced by deeper, older human drives.

Brain interfacing with Kurzweil

The ongoing discussion of Ray Kurzweil’s much publicized plans for a Singularity University prompted me to take another look at his book “The Singularity is Near”. It also prompted me to look up the full context of the somewhat derogatory quote from Douglas Hofstadter that the Guardian used and I reproduced in my earlier post. This can be found in this interview: “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.” Looking again at the book, it’s clear this is right on the mark. One difficulty is that Kurzweil makes many references to current developments in science and technology, and most readers are going to take it on trust that Kurzweil’s account of these developments is accurate. All too often, though, what one finds is that there’s a huge gulf between the conclusions Kurzweil draws from these papers and what they actually say – it’s the process I described in my article The Economy of Promises taken to extremes – “a transformation of vague possible future impacts into near-certain outcomes”. Here’s a fairly randomly chosen, but important, example.

In this prediction, we’re in the year 2030 (p313 in my edition). “Nanobot technology will provide fully immersive, totally convincing virtual reality”. What is the basis for this prediction? “We already have the technology for electronic devices to communicate with neurons in both directions, yet requiring no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively can cause a nearby neuron to fire or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. As mentioned above, quantum dots have also shown the ability to provide non-invasive communication between neurons and electronics.” The statements are supported by footnotes, with impressive looking references to the scientific literature. The only problem is, that if one goes to the trouble of looking up the references, one finds that they don’t say what he says they do.

The reference to “scientists at the MPI” refers to Peter Fromherz, who has been extremely active in developing ways of interfacing nerve cells with electronic devices – field effect transistors to be precise. I discussed this research in an earlier post – Brain chips – the paper cited by Kurzweil is Weis and Fromherz, Phys. Rev. E 55, 877 (1997) (abstract). Fromherz’s work does indeed demonstrate two-way communication between neurons and transistors. However, it emphatically does not do this in a way that needs no physical contact with neurons – the neurons need to be in direct contact with the gate of the FET, and this is achieved by culturing neurons in-situ. This restricts the method to specially grown, 2-dimensional arrays of neurons, not real brains. The method hasn’t been demonstrated to work in-vivo, and it’s actually rather difficult to see how this could be done. As Fromherz himself says, “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.”

What of the quantum dots, that “have also shown the ability to provide non-invasive communication between neurons and electronics”? The paper referred to here is Winter et al, Recognition Molecule Directed Interfacing Between Semiconductor Quantum Dots and Nerve Cells, Advanced Materials 13 1673 (2001).

The Economy of Promises

This essay was first published in Nature Nanotechnology 3 p65 (2008), doi:10.1038/nnano.2008.14.

Can nanotechnology cure cancer by 2015? That’s the impression that many people will have taken from the USA’s National Cancer Institute’s Cancer Nanotechnology Plan [1], which begins with the ringing statement “to help meet the Challenge Goal of eliminating suffering and death from cancer by 2015, the National Cancer Institute (NCI) is engaged in a concerted effort to harness the power of nanotechnology to radically change the way we diagnose, treat, and prevent cancer.” No-one doubts that nanotechnology potentially has a great deal to contribute to the struggle against cancer; new sensors promise earlier diagnosis, and new drug delivery systems for chemotherapy offer useful increases in survival rates. But this is a long way from eliminating suffering and death within 7 years. Now, a close textual analysis of the NCI’s document shows that actually there’s no explicit claim that nanotechnology will cure cancer by 2015; the talk is of “challenge goals” and “lowering barriers”. But is it wise to make it so easy to draw this conclusion from a careless reading?

It’s hardly a new insight to observe that the development of nanotechnology has been accompanied by exaggeration and oversold promises (there is, indeed, a comprehensive book documenting this aspect of the subject’s history – Nanohype, by David Berube [2]). It’s tempting for scientists to plead their innocence and try to maintain some distance from this. After all, the origin of the science fiction visions of nanobots and universal assemblers is in fringe movements such as the transhumanists and singularitarians, rather than mainstream nanoscience. And the hucksterism that has gone with some aspects of the business of nanotechnology seems to many scientists a long way from academia. But are scientists completely blameless in the development of an “economy of promises” surrounding nanotechnology?

Of course, the way most people hear about new scientific developments is through the mass media rather than through the scientific literature. The process by which a result from an academic nano-laboratory is turned into an item in the mainstream media naturally emphasises dramatic and newsworthy potential impacts of the research; the road from an academic paper to a press release from a university press office is characterised by a systematic stripping away of the cautious language, and a transformation of vague possible future impacts into near-certain outcomes. The key word here is “could” – how often do we read in the press release accompanying a solid, but not revolutionary, paper in Nature or Physical Review Letters that the research “could” lead to revolutionary and radical developments in technology or medicine?

Practical journalism can’t deal with the constant hedging that comes so naturally to scientists, we’re told, so many scientists acquiesce in this process. The chosen “expert” commentators on these stories are often not those with the deepest technical knowledge of issues, but those who combine communication skills with a willingness to press an agenda of superlative technology outcomes.

An odd and unexpected feature of the way the nanotechnology debate has unfolded is that the concern to anticipate societal impacts and consider ethical dimensions of nanotechnology has itself contributed to the climate of heightened expectations. As the philosopher Alfred Nordmann notes in his paper If and then: a critique of speculative nanoethics (PDF) [3], speculations on the ethical and societal implications of the more extreme extrapolations of nanotechnology serve implicitly to give credibility to such visions. If a particular outcome of technology is conceivable and cannot be demonstrated to be contrary to the laws of nature, then we are told it is irresponsible not to consider its possible impacts on society. In this way questions of plausibility or practicality are put aside. In the case of nanotechnology, we have organisations like the Foresight Nanotech Institute and the Centre for Responsible Nanotechnology, whose ostensible purpose is to consider the societal implications of advanced nanotechnology, but which in reality are advocacy organisations for the particular visions of radical nanotechnology originally associated with Eric Drexler. As the field of “nanoethics” grows, and brings in philosophers and social scientists, it’s inevitable that there will be a tendency to give these views more credibility than academic nanoscientists would like.

Scientists, then, can feel a certain powerlessness about the way the more radical visions of nanotechnology have taken root in the public sphere and retain their vigour. It may seem that there’s not a lot scientists can do about the way the media treats science stories; certainly no-one has made much of a media career by underplaying the potential significance of scientific developments. This isn’t to say that within the constraints of the requirements of the media, scientists shouldn’t exercise responsibility and integrity. But perhaps the “economy of promises” is embedded more deeply in the scientific enterprise than this.

One class of document that is absolutely predicated on promises is the research proposal. As we see more and more pressure from funding agencies to do research with a potential economic impact, it’s inevitable that scientists will get into the habit of asserting ever more firmly what might be quite tenuous claims that their research will lead to spectacular outcomes. It’s perhaps also understandable that the conflict between this and more traditional academic values might lead to a certain cynicism; scientists have their own ways of justifying their work to themselves, which might mitigate any guilt they might feel about making inflated or unfeasible claims about the ultimate applications of their work. One way of justifying what might seem somewhat reckless claims is the observation that science and technology have indeed produced huge impacts on society and the economy, even if these impacts were unforeseen at the time of the original research work. Thus one might argue to oneself that even though the claims made by researchers individually might be implausible, collectively one might have a great deal more confidence that the research enterprise as a whole will deliver important results.

Thus scientists may not be at all confident that their own work will have a big impact, but are confident that science in general will deliver big benefits. On the other hand, the public have long memories for promises that science and technology have made but failed to deliver (the idea that nuclear power would produce electricity “too cheap to meter” being one of the most notorious). This, if nothing else, suggests that the nanoscience community would do well to be responsible in what they promise.

1. http://nano.cancer.gov/about_alliance/cancer_nanotechnology_plan.asp
2. Berube, D. Nanohype, (Prometheus Books, Amherst NY, 2006)
3. Nordmann, A. If and then: a critique of speculative nanoethics. NanoEthics 1, 31-46 (2007).

The Singularity gets a University

There’s been a huge amount of worldwide press coverage of the news that Ray Kurzweil has launched a “Singularity University”, to promote his vision (not to mention his books and forthcoming film) of an exponential growth in technology leading to computers more intelligent than humans and an end to aging and death. The coverage is largely uncritical – even the normally sober Financial Times says only that some critics think that the Singularity may be dangerous. To the majority of critics, though, the idea isn’t so much dangerous as completely misguided.

The Guardian, at least, quotes the iconic cognitive science and computer researcher Douglas Hofstadter as saying that Kurzweil’s ideas included “the craziest sort of dog excrement”, which is graphic, if not entirely illuminating. For a number of more substantial critiques, take a look at the special singularity issue of the magazine IEEE Spectrum, published last summer. Unsurprisingly, the IEEE blog takes a dim view.

Many of the press reports refer to the role of nanotechnology in Kurzweil’s vision of the singularity – according to the Guardian, for example, “Kurzweil predicts the creation of “nanobots” that will patrol our bloodstreams, repairing wear and tear as they go, and keeping our bodies perpetually young.” It was this vision that I criticised in my own contribution to the IEEE Singularity special, Rupturing the Nanotech Rapture; I notice that the main promoters of these ideas, Robert Freitas and Ralph Merkle, are among the founding advisors. At the time, I found it interesting that, in the responses to my article, a number of self-identified transhumanists and singularitarians attempted to distance themselves from Kurzweil’s views, characterising them as atypical of their movement. It will be interesting to see how strenuously they now attempt to counter what seems to be a PR coup by Kurzweil.

It’s worth stressing that what’s been established isn’t really a university; it’s not going to do research and it won’t give degrees. Instead, it will offer 3-day, 10-day and 9-week courses, where, to quote from the website, “one could imagine, for example, that issues such as global poverty, hunger, climate crisis could be studied from an interdisciplinary standpoint where the power of artificial intelligence, nanotechnology, genomics, etc are brought to bare in a cooperate fashion to seek solutions” (sic). Singularitarianism is an ideology, and this is a vehicle to promote it.

Among the partners in the venture, Google has succeeded in getting a huge amount of publicity for its $250,000 contribution, though whether it’s a wise cause for it to be associated with remains to be seen. As for the role of NASA and space entrepreneur Peter Diamandis, I leave the last word to that ever-reliable source of technology news, The Register: “There will be the traditional strong friendship between IT/net/AI enthusiasm and space-o-philia. In keeping with the NASA setting, SU will have strong involvement from the International Space University. ISU, founded in 1987 by Diamandis and others, is seen as having been key to the vast strides humanity has made in space technology and exploration in the last two decades”

Brownian motion and how to run a lottery (or a bank)

This entry isn’t really about nanotechnology at all; instead it’s a ramble around some mathematics that I find interesting, that suddenly seems to have become all too relevant in the financial crisis we find ourselves in. I don’t claim great expertise in finance, so my apologies in advance for any inaccuracies.

Brownian motion – the continuous random jiggling of nanoscale objects and structures that’s a manifestation of the random nature of heat energy – is a central feature of the nanoscale world, and much of my writing about nanotechnology revolves around how we should do nanoscale engineering in a way that exploits Brownian motion, in the way biology does. In this weekend’s magazine reading, I was struck to see some of the familiar concepts from the mathematics of Brownian motion showing up, not in Nature, but in an article in The Economist’s special section on the future of finance – In Plato’s Cave – which explains how much of the financial mess we find ourselves in derives from the misapplication of these ideas. Here’s my attempt to explain, as simply as possible, the connection.

The motion of a particle undergoing Brownian motion can be described as a random walk, with a succession of steps in random directions. For every step taken in one direction, there’s an equal probability that the particle will go the same distance in the opposite direction, yet on average a particle doing a random walk does make some progress – the average distance gone grows as the square root of the number of steps. To see this for a simple situation, imagine that the particle is moving on a line, in one dimension, and either takes a step of one unit to the right (+1) or one unit to the left (-1), so we can track its progress just by writing down all the steps and adding them up, like this, for example: (+1 -1 +1 …. -1) . After N steps, on average the displacement (i.e. the distance gone, including a sign to indicate the direction) will be zero, but the average magnitude of the distance isn’t zero. To see this, we just look at the square root of the average value of the square of the displacement (since squaring the displacement takes away any negative signs). So we need to expand a product that looks something like (+1 -1 +1 …. -1) x (+1 -1 +1 …. -1). The first term of the first bracket times the first term of the second bracket is always +1 (since we either have +1 x +1 or -1 x -1), and the same is true for all the products of terms in the same position in both brackets. There are N of these, so this part of the product adds up to N. All the other terms in the expansion are one of (+1 x +1), (+1 x -1), (-1 x +1), (-1 x -1), and if the successive steps in the walk really are uncorrelated with each other these occur with equal probability so that on average adding all these up gives us zero. So we find that the mean squared distance gone in N steps is N. Taking the square root of this to get a measure of the average distance gone in N steps, we find this (root mean squared) distance is the square root of N.
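A quick numerical check of this square root of N scaling (a minimal simulation, nothing more):

    import math
    import random

    def rms_distance(n_steps, n_walkers=5000):
        """Root mean squared displacement of a one-dimensional +/-1 random walk."""
        total_sq = 0
        for _ in range(n_walkers):
            x = sum(random.choice((-1, 1)) for _ in range(n_steps))
            total_sq += x * x
        return math.sqrt(total_sq / n_walkers)

    for n in (100, 400, 1600):
        print(f"N = {n:5d}: rms distance = {rms_distance(n):7.1f}, sqrt(N) = {math.sqrt(n):7.1f}")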

The connection of these arguments to financial markets is simple. According to the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk. So, if you need to calculate what a fair value is for, say, an option to buy this share in a year’s time, you can do this equipped with statistical arguments about the likely movement of a random walk, of the kind I’ve just outlined. It is a smartened-up version of the theory of random walks that I’ve just explained that is the basis of the Black-Scholes model for pricing options, which is what made the huge expansion of trading of complex financial derivatives possible – as the Economist article puts it “The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses… The new model showed how to work out an option price from the known price-behaviour of a share and a bond. … . Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk.”
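To illustrate how the random-walk picture turns into an option price, here is a minimal Monte Carlo sketch under the textbook assumptions (a lognormal random walk with constant volatility and interest rate; the parameter values are my own, purely illustrative), compared with the closed-form Black-Scholes result:

    import math
    import random

    # Illustrative parameters (my own choices)
    S0, K = 100.0, 105.0            # today's share price and the option's strike price
    r, sigma, T = 0.03, 0.2, 1.0    # interest rate, volatility, time to expiry (years)

    def call_price_monte_carlo(n_paths=200_000):
        """European call price, assuming the share price follows a geometric random walk."""
        payoff_sum = 0.0
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
            payoff_sum += max(ST - K, 0.0)
        return math.exp(-r * T) * payoff_sum / n_paths

    def call_price_black_scholes():
        """Closed-form Black-Scholes price for the same option."""
        N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
        d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

    print(f"Monte Carlo estimate:  {call_price_monte_carlo():.2f}")
    print(f"Black-Scholes formula: {call_price_black_scholes():.2f}")

The two numbers agree closely, but only because both rest on the same assumption about the statistics of the price movements – which is exactly where the trouble described below comes in.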

Surely such a simple model can’t apply to a real market? Of course, we can develop more complex models that lift many of the approximations in the simplest theory, but it turns out that some of the key results of the theory remain. The most important result is the basic √N scaling of the expected movement. For example, my simple derivation assumed all steps are the same size – we know that some days, prices rise or fall a lot, sometimes not so much. So what happens if we have a random walk with step sizes that are themselves random? It’s easy to convince oneself that the derivation stays the same, but instead of adding up N occurrences of (-1 x -1) or (+1 x +1) we have N occurrences of (a x a), where the probability that the step size has value a is given by p(a). So we end up with the simple modification that the mean squared distance gone is N times the mean of the square of the step size. Crucially, this doesn’t affect the √N scaling.

But, and this is the big but, there’s a potentially troublesome hidden assumption here, which is that the distribution of step sizes actually has a well defined, well behaved mean squared value. We’d probably guess that the distribution of step sizes looks like a bell shaped curve, centred on zero and getting smaller the further away one gets from the origin. The familiar Gaussian curve fits the bill, and indeed such a curve is characterised by a well defined mean squared value which measures the width of the curve (mathematically, a Gaussian is described by a distribution of step sizes a given by p(a) proportional to exp(-a^2/2s^2), which gives a root mean squared step size of s). Gaussian curves are very common, for reasons described later, so this all looks very straightforward. But one should be aware that not all bell-shaped curves behave so well. Consider a distribution of step sizes a given by p(a) proportional to 1/(a^2+s^2). This curve (which is known in the trade as a Lorentzian), looks bell shaped and is characterised by a width s. But, when we try to find the average value of the square of the step size, we get an answer that diverges – it’s effectively infinite. The problem is that although the probability of getting a very large step goes to zero as the step size gets larger, it doesn’t go to zero very fast. Rather than the chance of a very large jump becoming exponentially small, as happens for a Gaussian, the chance goes to zero as the inverse square of the step size. This apparently minor difference is enough to completely change the character of the random walk. One needs entirely new mathematics to describe this sort of random walk (which is known as a Lévy flight) – and in particular one ends up with a different scaling of the distance gone with the number of steps.
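The difference shows up immediately in a simulation: with Gaussian steps the root mean squared distance settles down to the √N behaviour, while with Lorentzian (Cauchy) steps the answer is dominated by a handful of enormous jumps and never settles down at all (a minimal sketch; the width s is arbitrary):

    import math
    import random

    def rms_distance(step_sampler, n_steps, n_walkers=5000):
        """Root mean squared displacement over many walks with the given step distribution."""
        total_sq = 0.0
        for _ in range(n_walkers):
            x = sum(step_sampler() for _ in range(n_steps))
            total_sq += x * x
        return math.sqrt(total_sq / n_walkers)

    s = 1.0   # width parameter for both distributions (arbitrary)
    gaussian_step = lambda: random.gauss(0.0, s)
    # A Lorentzian (Cauchy) step can be generated from a uniform random number:
    lorentzian_step = lambda: s * math.tan(math.pi * (random.random() - 0.5))

    for n in (100, 400, 1600):
        print(f"N = {n:5d}: Gaussian rms ~ {rms_distance(gaussian_step, n):8.1f}"
              f"   Lorentzian rms ~ {rms_distance(lorentzian_step, n):14.1f} (wildly unstable)")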

In the jargon, this kind of distribution is known as having a “fat tail”, and it was not factoring in the difference between a fat tailed distribution and a Gaussian or normal distribution that led the banks to so miscalculate their “value at risk”. In the words of the Economist article, the mistake the banks made “was to turn a blind eye to what is known as “tail risk”. Think of the banks’ range of possible daily losses and gains as a distribution. Most of the time you gain a little or lose a little. Occasionally you gain or lose a lot. Very rarely you win or lose a fortune. If you plot these daily movements on a graph, you get the familiar bell-shaped curve of a normal distribution (see chart 4). Typically, a VAR calculation cuts the line at, say, 98% or 99%, and takes that as its measure of extreme losses. However, although the normal distribution closely matches the real world in the middle of the curve, where most of the gains or losses lie, it does not work well at the extreme edges, or “tails”. In markets extreme events are surprisingly common—their tails are “fat”. Benoît Mandelbrot, the mathematician who invented fractal theory, calculated that if the Dow Jones Industrial Average followed a normal distribution, it should have moved by more than 3.4% on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved by more than 4.5% on six days; it did so on 366. It should have moved by more than 7% only once in every 300,000 years; in the 20th century it did so 48 times.”

But why should the experts in the banks have made what seems such an obvious mistake? One possibility goes back to the very reason why the Gaussian, or normal, distribution is so important and seems so ubiquitous. This comes from a wonderful piece of mathematics called the central limit theorem. This says that if some random variable is made up from the combination of many independent variables, even if those variables aren’t themselves taken from a Gaussian distribution, their sum will be Gaussian in the limit of many variables. So, given that market movements are the sum of the effects of lots of different events, the central limit theorem would tell us to expect the size of the total market movement to be distributed according to a Gaussian, even if the individual events were described by a quite different distribution. The central limit theorem has a few escape clauses, though (one is that the component variables must have a finite variance in the first place, which a Lorentzian does not), and perhaps the most important one arises from the way one approaches the limit of large numbers. Roughly speaking, the distribution converges to a Gaussian in the middle first. So it’s very common to find empirical distributions that look Gaussian enough in the middle, but still have fat tails, and this is exactly the point Mandelbrot is quoted as making about the Dow Jones.
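A simple numerical experiment shows this centre-first convergence; here the individual variables are drawn from an exponential distribution (an arbitrary non-Gaussian choice of mine, with finite variance), and the probability of a three sigma event in the sum is compared with the Gaussian prediction:

    import math
    import random

    def tail_fraction(n_terms, threshold_sigmas=3.0, n_samples=100_000):
        """Fraction of sums of n_terms exponential(1) variables, centred and rescaled
        to unit variance, that exceed threshold_sigmas."""
        count = 0
        for _ in range(n_samples):
            total = sum(random.expovariate(1.0) for _ in range(n_terms))
            z = (total - n_terms) / math.sqrt(n_terms)   # exponential(1): mean 1, variance 1
            if z > threshold_sigmas:
                count += 1
        return count / n_samples

    gaussian_tail = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # P(Z > 3) for a true Gaussian
    print(f"Gaussian prediction for a 3-sigma event: {gaussian_tail:.2e}")
    for n in (4, 16, 64):
        print(f"sum of {n:3d} exponentials: observed 3-sigma tail = {tail_fraction(n):.2e}")

Even when the sum of many terms looks convincingly bell-shaped in the middle, the observed frequency of extreme events stays well above the Gaussian prediction until the number of terms is very large indeed.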

The Economist article still leaves me puzzled, though, as everything I’ve been describing has been well known for many years. But maybe well known isn’t the same as widely understood. Just like a lottery, the banks were trading the certainty of many regular small payments against a small probability of making a big payout. But, unlike the lottery, they didn’t get the price right, because they underestimated the probability of making a big loss. And now, their loss becomes the loss of the world’s taxpayers.

Public Engagement and Nanotechnology – the UK experience

What do the public think about nanotechnology? This is a question that has worried scientists and policy makers ever since the subject came to prominence. In the UK, as in other countries, we’ve seen a number of attempts to engage with the public around the subject. This article, written for an edited book about public engagement with science more generally in the UK, attempts to summarise the UK’s experience in this area.

From public understanding to public engagement

Nanotechnology emerged as a focus of public interest and concern in the UK in 2003, prompted, not least, by a high profile intervention on the subject from the Prince of Wales. This was an interesting time in the development of thinking about public engagement with science. A consensus about the philosophy underlying the public understanding of science movement, dating back to the Bodmer report (PDF) in 1985, had begun to unravel. This was prompted, on the one hand, by a sustained and influential critique of some of the assumptions underlying PUS from social scientists, particularly from the Lancaster school associated with Brian Wynne. On the other hand, the acrimony surrounding the public debates about agricultural biotechnology and the government’s handling of the bovine spongiform encephalopathy outbreak led many to diagnose a crisis of trust between the public and the world of science and technology.

In response to these difficulties, a rather different view of the way scientists and the public should interact gained currency. According to the critique of Wynne and colleagues, the idea of “Public Understanding of Science” was founded on a “deficit model”, which assumed that the key problem in the relationship between the public and science was an ignorance on the part of the public both of the basic scientific facts and of the fundamental process of science, and if these deficits in knowledge were corrected the deficit in trust would disappear. To Wynne, this was both patronizing, in that it disregarded the many forms of expertise possessed by non-scientists, and highly misleading, in that it neglected the possibility that public concerns about new technologies might revolve around perceptions of the weaknesses of the human institutions that proposed to implement them, and not on technical matters at all.

The proposed remedy for the failings of the deficit model was to move away from an emphasis on promoting the public understanding of science to a more reflexive approach to engaging with the public, with an effort to achieve a real dialogue between the public and the scientific community. Coupled with this was a sense that the place to begin this dialogue was upstream in the innovation process, while there was still scope to steer its direction in ways which had broad public support. These ideas were succinctly summarised in a widely-read pamphlet from the think-tank Demos, “See-through science – why public engagement needs to move upstream”.

Enter nanotechnology

In response to the growing media profile of nanotechnology, in 2003 the government commissioned the Royal Society and the Royal Academy of Engineering to carry out a wide-ranging study on nanotechnology and the health and safety, environmental, ethical and social issues that might stem from it. The working group included, in addition to distinguished scientists, a philosopher, a social scientist and a representative of an environmental NGO. The process of producing the report itself involved public engagement, with two in-depth workshops exploring the potential hopes and concerns that members of the public might have about nanotechnology.

The report – “Nanoscience and nanotechnologies: opportunities and uncertainties” – was published in 2004, and amongst its recommendations was a whole-hearted endorsement of the upstream public engagement approach: “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.”

Following this recommendation, a number of public engagement activities around nanotechnology have taken place in the UK. Two notable examples were Nanojury UK, a citizens’ jury which took place in Halifax in the summer of 2005, and Nanodialogues, a more substantial project which linked four separate engagement exercises carried out in 2006 and 2007.

Nanojury UK was sponsored jointly by the Cambridge University Nanoscience Centre and Greenpeace UK, with the Guardian as a media partner, and Newcastle University’s Policy, Ethics and Life Sciences Research Centre running the sessions. It was carried out in Halifax over eight evening sessions, with six witnesses drawn from academic science, industry and campaigning groups, considering a wide variety of potential applications of nanotechnology. Nanodialogues took a more focused approach; each of its four exercises, which were described as “experiments”, considered a single aspect or application area of nanotechnology. These included a very concrete example of a proposed use for nanotechnology – a scheme to use nanoparticles to remediate polluted groundwater – and the application of nanoscience in the context of a large corporation.

The Nanotechnology Engagement Group provided a wider forum to consider the lessons to be learnt from these and other public engagement exercises both in the UK and abroad; this reported in the summer of 2007 (the report is available here). This revealed a rather consistent message from public engagement. Broadly speaking, there was considerable excitement from the public about possible beneficial outcomes from nanotechnology, particularly in areas such as renewable energy and medicine. The more general value of such technologies in promoting jobs and economic growth was also recognised.

There were concerns, too. The questions that have been raised about potential safety and toxicity issues associated with some nanoparticles caused disquiet, and there were more general anxieties (probably not wholly specific to nanotechnology) about who controls and regulates new technology.

Reviewing a number of public engagement activities related to nanotechnology also highlighted some practical and conceptual difficulties. There was sometimes a lack of clarity about the purpose and role of public engagement; this leaves space for the cynical view that such exercises are intended, not to have a real influence on genuinely open decisions, but simply to add a gloss of legitimacy to decisions that have already been made. Related to this is the fact that bodies that might benefit from public engagement may lack the institutional capacity and structures to make use of it.

There are some more practical problems associated with the very idea of moving engagement “upstream” – the further the science is away from potential applications, the more difficult it can be to communicate what can be complex issues, whose impact and implications may be subject to considerable disagreement amongst experts.

Connecting public engagement to policy

The big question to be asked about any public engagement exercise is “what difference has it made” – has there been any impact on policy? For this to take place there needs to be a careful choice of subject for the public engagement, as well as commitment and capacity on the part of the sponsoring body or agency to use the results in a constructive way. A recent example from the Engineering and Physical Sciences Research Council offers an illuminating case study. Here, a public dialogue on the potential applications of nanotechnology to medicine and healthcare was explicitly coupled to a decision about where to target a research funding initiative, providing valuable insights that had a significant impact on the decision.

The background to this is the development of a new approach to science funding at EPSRC. This is to fund “Grand Challenge” projects, which are large scale, goal-oriented interdisciplinary activities in areas of societal need. As part of the “Nanoscience – engineering through to application” cross-council priority area, it was decided to launch a Grand Challenge in the area of applications of nanotechnology to healthcare and medicine. This is a potentially very wide area, so it was felt necessary to narrow the scope of the programme somewhat. The definition of the scope was carried out with the advice of a “Strategic Advisory Team” – an advisory committee with about a dozen experts on nanotechnology, drawn from academia and industry, and including international representation. Inputs to the decision were sought through a wider consultation with academics and potential research “users”, defined here as clinicians and representatives of the pharmaceutical and healthcare industries. This consultation included a “Town Meeting” open to the research and user communities.

This represents a fairly standard approach to soliciting expert opinion for a decision about science funding priorities. In the light of the experience of public engagement in the context of nanotechnology, it would be a natural question to ask whether one should seek public views as well. EPSRC’s Societal Issues Panel – a committee providing high-level advice on the societal and ethical context for the research EPSRC supports – enthusiastically endorsed the proposal that a public engagement exercise on nanotechnology for medicine and healthcare should be commissioned as an explicit part of the consultation leading up to the decision on the scope of the Grand Challenge.

A public dialogue on nanotechnology for healthcare was accordingly carried out during the Spring of 2008 by BMRB, led by Darren Bhattachary. This took the form of a pair of reconvened workshops in each of four locations – London, Sheffield, Glasgow and Swansea. Each workshop involved 22 lay participants, with care taken to ensure a demographic balance. The workshops were informed by written materials, approved by an expert Steering Committee; there was expert participation in each workshop from both scientists and social scientists. Personnel from the Research Council also attended; this was felt by many participants to be very valuable as a signal of the seriousness with which the organisation took the exercise.

The dialogues produced a number of rich insights that proved very useful in defining the scope of the final call (its report can be found here). In general, there was very strong support for medicine and healthcare as a priority area for the application of nanotechnology, and explicit rejection of an unduly precautionary approach. On the other hand, there were concerns about who benefits from the expenditure of public funds on science, and about issues of risk and the governance of technology. One overarching theme that emerged was a strong preference for new technologies that were felt to empower people to take control of their own health and lives.

One advantage of connecting a public dialogue with a concrete issue of funding priorities is that some very specific potential applications of nanotechnology could be discussed. As a result of the consultation with academics, clinicians and industry representatives, six topics had been identified for consideration. In each case, people at the workshops could identify both positive and negative aspects, but overall some clear preferences emerged. The use of nanotechnology to permit the early diagnosis of disease received strong support, as it was felt that this would provide information that would enable people to make changes to the way they live. The promise of nanotechnology to help treat serious diseases with fewer side effects by more effective targeting of drugs was also received with enthusiasm. On the other hand, the idea of devices that combine the ability to diagnose a condition with the means to treat it, via releasing therapeutic agents, caused some disquiet as being potentially disempowering. Other potential applications of nanotechnology were less highly prioritised: its use to control pathogens, for example through nanostructured surfaces with intrinsic anti-microbial or anti-viral properties; nanostructured materials to help facilitate regenerative medicine; and the use of nanotechnology to help develop new drugs.

It was always anticipated that the results of this public dialogue would be used in two ways. Their most obvious role was as an input to the final decision on the scope of the Grand Challenge call, together with the outcomes of the consultations with the expert communities. It was the nanotechnology Strategic Advisory Team that made the final recommendation about the call’s scope, and in the event their recommendation was that the call should be in the two areas most favoured in the public dialogue – nanotechnology for early diagnosis and nanotechnology for drug delivery. In addition to this immediate impact, there is an expectation that the projects that are funded through the Grand Challenge should be carried out in a way that reflects these findings.

Public engagement in an evolving science policy landscape

The current interest in public engagement takes place at a time when the science policy landscape is undergoing larger changes, both in the UK and elsewhere in the world. We are seeing considerable pressure from governments for publicly funded science to deliver clearer economic and societal benefits. There is a growing emphasis on goal-oriented, intrinsically interdisciplinary science, with an agenda set by a societal and economic context rather than by an academic discipline – “mode II knowledge production”, in the phrase of Gibbons and his co-workers in their book The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. The “linear model” of innovation – in which pure academic science, unconstrained by any issues of societal or economic context, is held to lead inexorably through applied science and technological development to new products and services and thus increased prosperity – is widely recognised to be simplistic at best, neglecting the many feedbacks and hybridisations at every stage of this process.

These newer conceptions of “technoscience” or “mode II science” lead to problems of their own. If the agenda of science is to be set by the demands of societal needs, it is important to ask who defines those needs. While it is easy to identify the location of expertise for narrowly constrained areas of science defined by well-established disciplinary boundaries, it is much harder to see who has the expertise to define what is technically possible in strongly multidisciplinary projects. And as the societal and economic context of research becomes more important in making decisions about science priorities, one might also ask who will subject the implicit social theories of scientists to critical scrutiny. These are all issues which public engagement could be valuable in resolving.

The enthusiasm for involving the public more closely in decisions about science policy may not be universally shared, however. In some parts of the academic community, it may be perceived as an assault on academic autonomy. Indeed, in the current climate, with demands for science to have greater and more immediate economic impact, an insistence on more public involvement might be taken as part of a two-pronged assault on pure science values. There are some who regard public engagement as incompatible with the principles of representative democracy – in this view the Science Minister is responsible for the science budget and answers to Parliament, not to a small group of people in a citizens’ jury. Representatives of the traditional media might not always be sympathetic, either, as they may see their own role as being the gatekeepers between the experts and the public. It is also clear that public engagement, done properly, is expensive and time-consuming.

Many of the scientists who have been involved with public engagement, however, have reported that the experience is very positive. In addition to being reminded of the generally high standing of scientists and the scientific enterprise in our society, they are prompted to re-examine unspoken assumptions and clarify their aims and objectives. There are strong arguments that public deliberation and interaction can lead to more robust science policy, particularly in areas that are intrinsically interdisciplinary and explicitly coupled to meeting societal goals. What will be interesting to consider as more experience is gained is whether embedding public engagement more closely in the scientific process actually helps to produce better science.

Happy New Year

A kind friend, who reads a lot more science fiction than I do, gave me a copy of Charles Stross’s novel Accelerando for Christmas, on the grounds that after all my pondering on the Singularity last year I ought to be up to speed with what he considers the definitive fictional treatment. I’ve nearly finished it, and I must say I especially enjoyed the role of the uploaded lobsters. But it did make me wonder what Stross’s own views about the singularity are these days. The answer is on his blog, in this entry from last summer: That old-time new-time religion. I’m glad to see that his views on nanotechnology are informed by such a reliable source.

A belated Happy New Year to my readers.

Will nanotechnology lead to a truly synthetic biology?

This piece was written in response to an invitation from the management consultants McKinsey to contribute to a forthcoming publication discussing the potential impacts of biotechnology in the coming century. This is the unedited version, which is quite a lot longer than the version that will be published.

The discovery of an alien form of life would be the discovery of the century, with profound scientific and philosophical implications. Within the next fifty years, there’s a serious chance that we’ll make this discovery, not by finding life on a distant planet or indeed by such aliens visiting us on earth, but by creating this new form of life ourselves. This will be the logical conclusion of using the developing tools of nanotechnology to create a “bottom-up” version of synthetic biology, which instead of rearranging and redesigning the existing components of “normal” biology, as currently popular visions of synthetic biology propose, uses the inspiration of biology to synthesise entirely novel systems.

Life on earth is characterised by a stupendous variety of external forms and ways of life. To us, it’s the differences between mammals like us and insects, trees and fungi that seem most obvious, while there’s a vast variety of other unfamiliar and invisible organisms that are outside our everyday experience. Yet, underneath all this variety there’s a common set of components that underlies all biology. There’s a common genetic code, based on the molecule DNA, and in the nanoscale machinery that underlies the operation of life, based on proteins, there are remarkable continuities between organisms that on the surface seem utterly different. That all life is based on the same type of molecular biology – with information stored in DNA, transcribed through RNA to be materialised in the form of machines and enzymes made out of proteins – reflects the fact that all the life we know about has evolved from a common ancestor. Alien life is a staple of science fiction, of course, and people have speculated for many years that if life evolved elsewhere it might well be based on an entirely different set of basic components. Do developments of nanotechnology and synthetic biology mean that we can go beyond speculation to experiment?
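
To make the idea of a shared molecular code a little more concrete, here is a toy sketch in Python, entirely my own illustration rather than anything from a real bioinformatics tool: it transcribes an invented DNA fragment into RNA and translates it into an amino-acid sequence using a handful of entries from the standard genetic code – the same code used, as far as we know, by every organism on earth.

# Toy illustration of the universal genetic code shared by all known life.
# The codon table is a small subset of the standard code; the DNA fragment
# is invented for the example and has no biological significance.

CODON_TABLE = {
    "AUG": "M",   # methionine (start)
    "GCU": "A",   # alanine
    "UUC": "F",   # phenylalanine
    "GGA": "G",   # glycine
    "UAA": None,  # stop codon
}

def transcribe(dna):
    """Transcribe the coding strand of DNA into messenger RNA (T -> U)."""
    return dna.replace("T", "U")

def translate(rna):
    """Translate RNA into a protein sequence, reading three bases at a time."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid is None:   # stop codon: release the finished chain
            break
        protein.append(amino_acid)
    return "".join(protein)

dna = "ATGGCTTTCGGATAA"          # invented coding sequence
rna = transcribe(dna)            # "AUGGCUUUCGGAUAA"
print(translate(rna))            # prints "MAFG"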

Certainly, the emerging discipline of synthetic biology is currently attracting excitement and foreboding in equal measure. It’s important to realise, though, that the most extensively promoted visions of synthetic biology don’t propose making entirely new kinds of life. Rather than aiming for a wholly synthetic, alien life-form, the proposal is to radically re-engineer existing life forms. In one vision, the idea is to identify independent parts or modules in living systems that could be reassembled to make radically modified organisms that deliver some desired outcome, for example synthesising a particularly complicated molecule. In one important example of this approach, researchers at Lawrence Berkeley National Laboratory developed a strain of E. coli that synthesises a precursor to artemisinin, a potent (and expensive) anti-malarial drug. In a sense, this field is a reaction to the discovery that genetic modification of organisms is more difficult than previously thought; rather than being able to get what one wants from an organism by altering a single gene, one often needs to re-engineer entire regulatory and signalling pathways. In these complex processes, protein molecules – enzymes – essentially function as molecular switches, which respond to the presence of other molecules by initiating further chemical changes. It’s become commonplace to make analogies between these complex chemical networks and electronic circuits, and in this analogy this kind of synthetic biology can be thought of as the wholesale rewiring of the (biochemical) circuits which control the operation of an organism. The well-publicised proposals of Craig Venter are even more radical – his team’s project is to create a single-celled organism that has been slimmed down to have only the minimal functions consistent with life, and then to replace its genetic material with a new, entirely artificial genome created in the lab from synthetic DNA. The analogy used here is that one is “rebooting” the cell with a new “operating system”. Dramatic as this proposal sounds, though, the artificial life-form that would be created would still be based on the same biochemical components as natural life. It might be synthetic life, but it’s not alien.
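
The circuit analogy can be made concrete with a deliberately crude sketch – my own toy example, not a model of any real pathway, and certainly not of the Berkeley artemisinin work. Each regulatory protein is treated as a simple on/off switch, and the state of the little network is updated step by step, exactly as one would step a digital circuit.

# Crude 'biochemical circuit' sketch: two mutually repressing genes (a toggle
# switch) plus an external inducer that inactivates repressor A. Each protein
# is treated as simply 'present' (True) or 'absent' (False).

def step(state, inducer_present):
    """One synchronous update of the toy regulatory network."""
    a, b = state["A"], state["B"]
    return {
        # A is expressed unless repressed by B or knocked out by the inducer
        "A": (not b) and (not inducer_present),
        # B is expressed unless repressed by A
        "B": not a,
    }

state = {"A": True, "B": False}    # start with A on, B off
for t in range(4):
    inducer = (t >= 2)             # add the inducer halfway through
    state = step(state, inducer)
    print(t, state)                # the switch flips from A-on to B-on

Running it shows the network holding its initial state until the inducer arrives, then settling into the opposite state – the kind of switch-like, rewireable behaviour the circuit analogy is meant to capture.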

So what would it take to make a synthetic life-form that was truly alien? It seems difficult to argue that this wouldn’t be possible in principle – as we learn more about the details of the way cell biology works, we can see that it is intricate and marvellous, but in no sense miraculous; it’s based on machinery that operates on principles consistent with the way we know physical laws operate on the nano-scale. These principles, it should be said, are very different to the ones that underlie the sorts of engineering we are used to on the macro-scale; nanotechnologists have a huge amount to learn from biology. But we are already seeing very crude examples of synthetic nanostructures and devices that use some of the design principles of biology – designed molecules that self-assemble to make molecular bags resembling cell membranes; pores that open and close to let molecules in and out of these enclosures; molecules that recognise other molecules and respond by changes in shape. It’s quite conceivable that these components could be improved and integrated into systems. One could imagine a proto-cell, with pores controlling the traffic of molecules in and out of it, containing a network of molecules and machines that together add up to a metabolism, taking in energy and chemicals from the environment and using them to make the components needed for the system to maintain itself, grow and perhaps reproduce.
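
As a way of picturing, in the most schematic terms, what “a network of molecules adding up to a metabolism” might mean, here is a toy simulation – entirely my own invention, with made-up rate constants and no claim to chemical realism. Nutrient enters a proto-cell through its pores, is converted into building blocks, and the building blocks are incorporated into new structure until the cell is large enough to divide.

# Schematic proto-cell 'metabolism': three coupled rates integrated with a
# simple Euler scheme. All numbers are arbitrary, chosen only to make the
# behaviour visible; nothing here corresponds to real chemistry.

dt = 0.01            # time step
uptake_rate = 0.5    # nutrient import through membrane pores
convert_rate = 0.4   # nutrient -> building blocks
build_rate = 0.3     # building blocks -> cell structure (growth)

nutrient, blocks, structure = 0.0, 0.0, 1.0
divisions = 0

for _ in range(20000):
    uptake = uptake_rate * structure       # more membrane means more pores
    conversion = convert_rate * nutrient   # internal 'enzymes' at work
    growth = build_rate * blocks           # structure built from blocks

    nutrient += (uptake - conversion) * dt
    blocks += (conversion - growth) * dt
    structure += growth * dt

    if structure >= 2.0:                   # crude stand-in for division
        structure /= 2.0
        nutrient /= 2.0
        blocks /= 2.0
        divisions += 1

print("divisions:", divisions)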

Would such a proto-cell truly constitute an artificial alien life-form? The answer, of course, depends on how we define life. But experimental progress in this direction will itself help answer this thorny question, or at least allow us to pose it more precisely. The fundamental problem we have when trying to talk about the properties of life in general is that we only know about a single example. Only when we have some examples of alien life will it be possible to talk about the general laws, not of biology, but of all possible biologies. The quest to make artificial alien life will also teach us much about the origins of our own kind of life. Experimental research into the origins of life attempts to rerun the emergence of life as it happened in the early history of the earth, and is in effect an attempt to create artificial alien life from those molecules that can plausibly be argued to have been present on the early earth. Using nanotechnology to make a functioning proto-cell should be an easier task than this, as we don’t have to restrict ourselves to the kinds of materials that were naturally occurring on the early earth.

Creating artificial alien life would be a breathtaking piece of science, but it’s natural to ask whether it would have any practical use. The selling point of the currently most popular visions of synthetic biology is that they will permit us to do difficult chemical transformations in much more effective ways – making hydrogen from sunlight and water, for example, or making complex molecules for pharmaceutical uses. Conventional life, including the modifications proposed by synthetic biology, operates only in a restricted range of environments, so it’s possible to imagine that one could make a type of alien life that operated in quite different environments – at high temperatures, or in liquid metals, for example – opening up entirely different types of chemistry. These utilitarian considerations, though, pale in comparison to what would be implied more broadly if we made a technology that had a life of its own.