Exploiting evolution for nanotechnology

In my August Physics World article, The future of nanotechnology, I argued that fears of the loss of control of self-replicating nanobots – resulting in a plague of grey goo – were unrealistic, because it was unlikely that we would be able to “out-engineer evolution”. It provoked this interesting response from a reader, reproduced here with his permission:

Dr. Jones,
I am a graduate student at MIT writing an article about the work of Angela Belcher, a professor here who is coaxing viruses to assemble transistors. I read your article in Physics World, and thought the way you stated the issue as a question of whether we can “out-engineer evolution” clarified current debates about the dangers of nanotechnology. In fact, the article I am writing frames the debate in your terms.

I was wondering whether Belcher’s work might change the debate somewhat. She actually combines evolution and engineering. She directs the evolution of peptides, starting with a peptide library, until she obtains peptides that cling to semiconductor materials or gold. Then she genetically engineers the viruses to express these peptides so that, when exposed to semiconductor precursors, they coat themselves with semiconductor material, forming a single crystal around a long, cylindrical capsid. She also has peptides expressed at the ends that attach to gold electrodes. The combination of the semiconducting wire and electrodes forms a transistor.

Now her viruses are clearly not dangerous. They require a host to replicate, and they can’t replicate once they’ve been exposed to the semiconducting materials or electrodes. They cannot lead to “gray goo.”

Does her method, however, suggest the possibility that we can produce things we could never engineer? Might this lead to molecular machines that could actually compete in the environment?

Any help you could provide in my thinking through this will be appreciated.

Thank you,

Kevin Bullis

Here’s my reply:
Dear Kevin,
You raise an interesting point. I’m familiar with Angela Belcher’s work, which is extremely elegant and important. I touch a little bit on this approach, in which evolution is used in a synthetic setting as a design tool, in my book “Soft Machines”. At the molecular level the use of some kind of evolutionary approach, whether executed at a physical level, as in Belcher’s work, or in computer simulation, seems to me to be unavoidable if we’re going to be able to exploit phenomena like self-assembly to the full.

But I still don’t think it fundamentally changes the terms of the debate. I think there are two separate issues:

1. Is cell biology close to optimally engineered for the environment of the (warm, wet) nanoworld?

2. How can we best use design principles learnt from biology to make useful synthetic nanostructures and devices?

In this context, evolution is an immensely powerful design method, and it’s in keeping with the second point that we need to learn to use it. But even though using it might help us approach biological levels of optimality, one can still argue that it won’t help us surpass them.

Another important point revolves around the question of what is being optimised – in Darwinian terms, what constitutes “fitness”. In our own nano-engineering, we have the ability to specify this ourselves. In Belcher’s work, for example, the “fittest” species might be the one that binds most strongly to a particular semiconductor surface. That is a very different measure of fitness from the ability to compete with bacteria in the environment, and what is optimal for our own engineering purposes is unlikely to be optimal for the task of competing in the environment.

Best wishes,

To which Kevin responded:

It does seem likely that engineering fitness would not lead to environmental fitness. Belcher’s viruses, for example, would seem to have a hard time in the real world, especially once coated in a semiconductor crystal. What if, however, someone made environmental fitness a goal? This does not seem unimaginable. Here at MIT engineers have designed sensors for the military that provide real-time data about the environment. Perhaps someday the military will want devices that can survive and multiply. (The military is always good for a scare. Where would science fiction be without thoughtless generals?)

This leads to the question of whether cells have an optimal design, one that can’t be beat. It may be that such military sensors will not be able to compete. Belcher’s early work had to do with abalone, which evolved a way to transform chalk into a protective lining of nacre. Its access to chalk made an adaptation possible that, presumably, gave it a competitive advantage. Might exposure to novel environments give organisms new tools for competing? I think now also of invasive species overwhelming existing ones. These examples, I realize, do not approach gray goo. As far as I know we’ve nothing to fear from abalone. Might they suggest, however, that novel cellular mechanisms or materials could be more efficient?


To which I replied:
It’s an important step forward to say that this isn’t going to happen by accident, but as you say, this does leave the possibility of someone doing it on purpose (careless generals, mad scientists…). I don’t think one can rule this out, but I think our experience says that for every environment we’ve found on earth (from what we think of as benign, e.g. temperate climates on the earth’s surface, to ones that we think of as very hostile, e.g. hot springs and undersea volcanic vents) there’s some organism that seems very well suited for it (and which doesn’t work so well elsewhere). Does this mean that such lifeforms are always absolutely optimal? A difficult question. But moving back towards practicality, we are so far from understanding how life works at the mechanistic level that would be needed to build a substitute from scratch that this is a remote question. It’s certainly much less frightening than the very real possibility of danger from modifying existing life-forms, for example by increasing the virulence of pathogens.

Best wishes,
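As a footnote to this exchange: the logic of a directed-evolution experiment like Belcher’s – and the point about who gets to choose the fitness function – can be caricatured in a few lines of code. The sketch below is purely illustrative, not a model of any real experiment: the “binding score” is an invented stand-in for a measured binding affinity (rewarding two arbitrarily chosen residues), and the population sizes and mutation rate are made up.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def binding_score(peptide):
    # Invented stand-in for an experimentally measured binding affinity:
    # we arbitrarily reward histidine (H) and tryptophan (W) residues.
    # This is a caricature, not real surface chemistry.
    return sum(2 if aa == "H" else 1 if aa == "W" else 0 for aa in peptide)

def mutate(peptide, rate=0.1):
    # Randomly substitute residues, mimicking mutagenesis between rounds.
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in peptide)

def directed_evolution(pop_size=50, length=12, rounds=20):
    # Start from a random "peptide library".
    population = ["".join(random.choices(AMINO_ACIDS, k=length))
                  for _ in range(pop_size)]
    for _ in range(rounds):
        # Selection: keep the best-binding tenth of the library...
        survivors = sorted(population, key=binding_score,
                           reverse=True)[:pop_size // 10]
        # ...then amplify with mutation, loosely analogous to a round
        # of phage display.
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return max(population, key=binding_score)

random.seed(0)
best = directed_evolution()
print(best, binding_score(best))
```

The point the toy makes is the one in the letters above: the loop optimises exactly what `binding_score` measures, and nothing else. A sequence that scores highly here has been selected for sticking to a surface, not for surviving, replicating or competing with anything – those would require a quite different, and much harder to specify, fitness function.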

2005 – the year of nanotechnology (yet again)

When I was a small boy I could tell when Christmas was imminent; sometime around mid November the annuals published by my favourite comics appeared in the newsagents. There then followed six weeks of agonised waiting until the Beano annual appeared under the Christmas tree. Things are different now. My favourite comic characters now seem to have become leading politicians. I don’t have to wait until Christmas anymore, because I can just buy the annual myself, but sadly the annual I seem to be buying isn’t from the Beano but from the Economist.

The World in 2005 is written with the Economist’s usual mix of self-confidence and breezy optimism (I thought this prediction – “the Middle East will end the year looking either much better or far worse” – was an absolute classic of the genre). Nanotechnology gets a little box, predicting that 2005 will be the first year in which corporations outspend governments on nanotechnology, and that this will be the year in which we will see the arrival of many more nanotechnology-enabled products. The usual suspects are paraded – nano-strengthened tennis rackets, stain-resistant fabrics and self-cleaning window glass. Perhaps more interestingly, the article points to NEC’s announcement of a fuel-cell powered notebook PC, using carbon nanotubes in the electrodes. Other reports, however, suggest that this technology won’t be commercialised until 2007. Nonetheless, this does support the idea that energy technologies will be an important and potentially transformative application of near to medium-term nanotechnology.

Even if Drexler is wrong, nanotechnology will have far-reaching impacts

What are the possible impacts of nanotechnology? The answer you get depends on which of nanotechnology’s warring camps you ask. On the negative side, the supporters of Drexler paint a chilling picture of economies dislocated, overwhelming military hegemony for the technology’s developers, and at worst global catastrophe. The nanotechnology mainstream in science and business doesn’t accept that Drexler’s vision is feasible; given this, there’s a tendency in these circles to downplay the seriousness of nanotechnology’s potential negative consequences. In this quite widespread view, there may be some worries about the toxicity of nanoparticles to be investigated, but by and large we can expect business as usual. I think both views are wrong.

As I’ve made clear in many places, I doubt that Drexler’s vision of nanotechnology will come to pass. But when we come to discuss the impacts that nanotechnology might have, this matters less than one might think. I disagree with the analysis of Drexlerian groups like the Centre for Responsible Nanotechnology on many economic grounds as well as scientific ones, but there are a surprising number of places where I think that what they predict as impacts of Drexlerian nanotechnology will happen anyway. In fact, quite a few of these impacts are underway right now.

The debate about the social consequences of nanotechnology is becoming polarised in exactly the same way as the technical debate. This is unhealthy and unnecessary; many of the impacts of technology are independent of the precise form that the technology takes. If computing power, in 30 years, is much cheaper and much more ubiquitous than it is now, then the social consequences that follow from that don’t depend on whether those computers are powered by molecular electronics, quantum computing or Drexler’s rod logic.

Nanobusiness and nanoscientists need to raise themselves above their next grant proposal and funding round and start to think through the ways in which nanotechnology will be changing the world on a 20-30 year timescale. Prediction is very difficult, especially about the future (to quote Niels Bohr). But we do need to be thinking about bigger issues than how to regulate the disposal of nanotube-enabled tennis rackets, important though it is to get those things right. The development of ubiquitous and ambient computing, the blurring of the line between human and machine; these are big issues that do deserve attention. And on the positive side, it’s going to be increasingly difficult to sell the huge outlays of taxpayers’ money by referring to the benefits of better cosmetics, important markets though those are. It’s not as though humanity isn’t facing some big challenges, and nanotechnology, if directed appropriately, could make some big positive impacts. Moving to a sustainable energy economy is one of our biggest challenges, and this is an area in which Richard Smalley has been rightly emphasising the transformational contributions incremental and evolutionary nanotechnologies can make.

Meanwhile, followers of Drexler are in danger of finding themselves in denial about the potential impact of ordinary, evolutionary nanotechnology, because of their devotion to their brand of nanotechnology’s one true path. As they continue to insist that the development of true nanotechnology is being thwarted for short-sighted political reasons, they may overlook the far-reaching changes that evolutionary nanotechnology will bring. It would be ironic if, in thirty years, the Drexlerites find themselves still waiting for a revolution that’s already happened.

Small talk about nanotechnology at the Royal Institution

A debate about nanotechnology last Monday at the Royal Institution was run in association with a project called Small Talk, which is planning to run dialogue events about nanotechnology across the UK. This project is a collaboration between the leading organisations in science communication in the UK: the Royal Institution, the British Association, a group of science centres and the Cheltenham Festival of Science. For a science communication organisation they are being a bit reticent, in that they haven’t yet got a website up, but I guess we can take this as evidence that nanotechnology has come to the top of the agenda of the science communication professionals.

To be honest, I thought that last Monday’s event actually highlighted some of the problems that this enterprise faces. There are a number of different levels at which one can talk about nanotechnology. You can have a straight discussion about what the technology actually is and what it is likely to become in the near future. At this level, there’s going to be some work to do explaining the basic science, as well as some mentions of the traditional exhibits of contemporary nano-business: tennis rackets, sun-cream, stain resistant trousers etc. You can discuss the debate about what the future holds for the technology, and what the prospects are for the Drexlerian visions. And you can also discuss how one ought to run debates about science and technology and what the right relationship between the public and scientists should be. It’s easy to end up trying to talk about all three, and the result of this is confusion and an unfocused discussion.

While I do applaud James Wilsdon’s notion of an upstream debate, in which people get to discuss technology before it actually arrives, it does take for granted that there are some common assumptions about what the technology actually is. We don’t yet have that common ground when we talk about nanotechnology.

Molecular devices and machines

This month’s edition of Physics World (this is the monthly magazine of the UK’s Institute of Physics) has an interesting article on molecular devices and machines. The article (unfortunately only available in full to subscribers) is by Vincenzo Balzani and coworkers from the University of Bologna, and is a nice overview of the supramolecular chemistry approach to making nanomachines at a single molecule level. This is the group that published, in collaboration with Fraser Stoddart from UCLA, a report earlier this year about a molecular elevator.

One minor but notable point about this article (and this is apparent from the teaser that non-subscribers can read from the link above): it has the warmest remarks about Drexler that I have seen in an article directed at scientists for many years.

What is this thing called nanotechnology? Part 3. Three phases of nanotechnology

Here I continue my attempt to define what is meant by the term nanotechnology. In Part 1 I tried to define the relevant length-scale, the nanoscale, and in Part 2 I made the distinction between nanoscience and nanotechnology. This leaves us with a definition of nanotechnology that includes any branch of technology that results from our ability to control and manipulate matter on the nanoscale.

This is impossibly broad, and a lot of trouble continues to be caused by people confusing the many very different technologies that are grouped together in this word nanotechnology. I’ve found it useful to break the definition up in the following way (of course the boundaries between the categories are porous and arbitrary):

  • Incremental nanotechnology involves improving the properties of materials by controlling their nanoscale structure. Plastics, for example, can be reinforced by nanoscale clay particles, making them stronger, stiffer and more chemically resistant. Cosmetics can be formulated in which the oil phase is much more finely dispersed, improving the feel of the product on the skin. Textiles can be coated with nanoscale layers to alter their wetting properties, making them stain-resistant. This kind of nanotechnology is essentially a continuation of existing trends in disciplines like materials science, colloid science and powder technology. Most commercially available products that are said to be based on nanotechnology fall into this category. The science underlying them is sound and the products are often big improvements on what has gone before. However, they do not really represent a decisive break from existing products, many of which already involve nanotechnology as defined this way, even if they aren’t marketed as owing anything to nanotechnology.
  • Evolutionary nanotechnology involves scaling existing technologies down in size to the nanoscale. Here we generally move beyond simple materials that have been redesigned at the nanoscale to functional devices. Such devices could, for example, sense the environment, process information, or convert energy from one form to another. They include nanoscale sensors, which exploit the huge surface area of nanostructured materials like carbon nanotubes to detect environmental contaminants or biochemicals. Other products of evolutionary nanotechnology are semiconductor nanostructures such as quantum dots and quantum wells, which can be used to build better solid-state lasers. Another, less well known but potentially important area is in the development of nano-structures that can wrap up molecules and release them under some stimulus; the most obvious use for these is in drug delivery.
  • Radical nanotechnology involves sophisticated nanoscale machines, operating with nanoscale precision. K. Eric Drexler pointed out in Engines of Creation that we have an existence proof for such a technology in cell biology, which gives us many remarkable examples of such nanoscale machines. Drexler sketched out, in Nanosystems, one particular route to achieve a radical nanotechnology, which involved a mechanical engineering paradigm executed largely in diamond-like carbon. This is often referred to as molecular nanotechnology or MNT. It’s important to realise that MNT isn’t the only conceivable radical nanotechnology. Bionanotechnology refers to an approach in which biological nanomachines are reassembled in artificial contexts, while one can imagine various biomimetic approaches to radical nanotechnology in which design principles from biology are executed in synthetic materials. This sort of approach is the subject of my book Soft Machines.
My modest achievement

At the Royal Institution debate on nanotechnology this evening (about which I’ll write more later) one comment by James Wilsdon, of the think-tank Demos, stuck in my mind:

“Richard Jones’s work, in Soft Machines, has complexified and problematised the debate about radical nanotechnology.”

He assured me afterwards that this was a good thing.

Soft Machines published in the USA

Soft Machines: nanotechnology and life, by Richard A.L. Jones, is at last published in the USA on October 31 by Oxford University Press. The book is aimed at the general reader, and it explains why things behave differently at the nanoscale to the way they behave at familiar human scales. The book argues that the design principles used by cell biology – the best example we have of a sophisticated working nanotechnology – are particularly well suited to the unfamiliar way physics works at the nanoscale, and that we should try to use the same principles in nanotechnology. Topics discussed include self-assembly in biological and non-biological systems, natural and synthetic molecular motors, molecular electronics and chemical computing.

Cover of Soft Machines

Planet Earth calling Houston

Rice University’s new initiative aimed at generating a positive public dialogue about nanotechnology seems to have run into a little difficulty before it’s even got going. I don’t normally have a great deal of sympathy for the ETC group, but their action in withdrawing from this organisation is entirely reasonable and understandable. The fact that funding for the organisation comes largely from industry sends a very negative message about how impartial it is likely to be, but the problems run deeper. The agenda for the council seems to be dominated by questions of nanoparticle toxicity and regulation. It is not just Drexlerites and anti-globalisation activists who think that the potential implications of nanotechnology, both positive and negative, run a lot deeper than this one immediate, short-term issue.

As for the rest of us outside the USA, we can only look on at the “International” in the International Council on Nanotechnology with the same wry smile that the “World” in the baseball World Series provokes. Realists appreciate that the FDA has an influence well beyond the shores of the USA, where its formal writ runs. But could we not have some recognition that there are other sovereign domains outside the USA whose regulatory authorities also might have something to say about nanotechnology? And that this kind of venture might have something to learn from initiatives in other countries, like the UK’s Royal Society study, which somehow managed to avoid the sort of inept mishandling that has already led to such unnecessary polarisation?

Nanotechnology at the Royal Institution

For anyone in London and at a loose end next Monday, 1 November, there’s an event on at the Royal Institution from 7 pm to 8.30 pm: Nanotechnology: can something so tiny promise something so big?. It’s a debate about nanotechnology and its potential, chaired by the science writer Philip Ball, and featuring myself, Ray Oliver, an industrial nanotechnologist and one of the authors of the recent Royal Society report, and James Wilsdon, from the thinktank Demos, whose recent pamphlet about ways to engage the public about new technologies such as nanotechnology, See-through Science, I wrote about below. It should be an interesting evening.