Exploiting evolution for nanotechnology

In my August Physics World article, The future of nanotechnology, I argued that fears of the loss of control of self-replicating nanobots – resulting in a plague of grey goo – were unrealistic, because it was unlikely that we would be able to “out-engineer evolution”. It provoked the following interesting response from a reader, reproduced here with his permission:

Dr. Jones,
I am a graduate student at MIT writing an article about the work of Angela Belcher, a professor here who is coaxing viruses to assemble transistors. I read your article in Physics World, and thought the way you stated the issue as a question of whether we can “out-engineer evolution” clarified current debates about the dangers of nanotechnology. In fact, the article I am writing frames the debate in your terms.

I was wondering whether Belcher’s work might change the debate somewhat. She actually combines evolution and engineering. She directs the evolution of peptides, starting with a peptide library, until she obtains peptides that cling to semiconductor materials or gold. Then she genetically engineers the viruses to express these peptides so that, when exposed to semiconductor precursors, they coat themselves with semiconductor material, forming a single crystal around a long, cylindrical capsid. She also has peptides expressed at the ends that attach to gold electrodes. The combination of the semiconducting wire and electrodes forms a transistor.

Now her viruses are clearly not dangerous. They require a host to replicate, and they can’t replicate once they’ve been exposed to the semiconducting materials or electrodes. They cannot lead to “gray goo.”

Does her method, however, suggest the possibility that we can produce things we could never engineer? Might this lead to molecular machines that could actually compete in the environment?

Any help you could provide in my thinking through this will be appreciated.

Thank you,

Kevin Bullis

Here’s my reply:
Dear Kevin,
You raise an interesting point. I’m familiar with Angela Belcher’s work, which is extremely elegant and important. I touch a little bit on this approach, in which evolution is used in a synthetic setting as a design tool, in my book “Soft Machines”. At the molecular level the use of some kind of evolutionary approach, whether executed at a physical level, as in Belcher’s work, or in computer simulation, seems to me to be unavoidable if we’re going to be able to exploit phenomena like self-assembly to the full.

But I still don’t think it fundamentally changes the terms of the debate. I think there are two separate issues:

1. Is cell biology close to optimally engineered for the environment of the (warm, wet) nanoworld?

2. How can we best use design principles learnt from biology to make useful synthetic nanostructures and devices?

In this context, evolution is an immensely powerful design method, and it’s in keeping with the second point that we need to learn to use it. But even though using it might help us approach biological levels of optimality, one can still argue that it won’t help us surpass them.

Another important point revolves around the question of what is being optimised, or in Darwinian terms, what constitutes “fitness”. In our own nano-engineering, we have the ability to specify this ourselves. In Belcher’s work, for example, the “fittest” species might be the one that binds most strongly to a particular semiconductor surface. This is quite a different measure of fitness from the ability to compete with bacteria in the environment, and what is optimal for our own engineering purposes is unlikely to be optimal for the task of competing in the environment.

Best wishes,

To which Kevin responded:

It does seem likely that engineering fitness would not lead to environmental fitness. Belcher’s viruses, for example, would seem to have a hard time in the real world, especially once coated in a semiconductor crystal. What if, however, someone made environmental fitness a goal? This does not seem unimaginable. Here at MIT engineers have designed sensors for the military that provide real-time data about the environment. Perhaps someday the military will want devices that can survive and multiply. (The military is always good for a scare. Where would science fiction be without thoughtless generals?)

This leads to the question of whether cells have an optimal design, one that can’t be beat. It may be that such military sensors will not be able to compete. Belcher’s early work had to do with abalone, which evolved a way to transform chalk into a protective lining of nacre. Its access to chalk made an adaptation possible that, presumably, gave it a competitive advantage. Might exposure to novel environments give organisms new tools for competing? I think now also of invasive species overwhelming existing ones. These examples, I realize, do not approach gray goo. As far as I know we’ve nothing to fear from abalone. Might they suggest, however, that novel cellular mechanisms or materials could be more efficient?


To which I replied:
It’s an important step forward to say that this isn’t going to happen by accident, but as you say, this does leave the possibility of someone doing it on purpose (careless generals, mad scientists…). I don’t think one can rule this out, but I think our experience says that for every environment we’ve found on earth (from those we think of as benign, e.g. temperate climates on the earth’s surface, to those we think of as very hostile, e.g. hot springs and undersea volcanic vents) there’s some organism that seems very well suited to it (and which doesn’t work so well elsewhere). Does this mean that such lifeforms are always absolutely optimal? A difficult question. But moving back towards practicality: we are so far from the mechanistic understanding of how life works that would be needed to build a substitute from scratch that this remains a remote question. It’s certainly much less frightening than the very real possibility of danger from modifying existing life-forms, for example by increasing the virulence of pathogens.

Best wishes,

Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials, drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause their structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in the shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low-frequency collective vibrations are lowered further in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
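The size of this entropic driving force can be estimated on the back of an envelope. In the classical (high-temperature) limit, the vibrational entropy of a harmonic mode of frequency ω goes as k_B ln(k_B T/ħω), so a mode that softens from ω_before to ω_after contributes ΔS = k_B ln(ω_before/ω_after), and the free-energy gain is –TΔS summed over modes. The sketch below uses purely illustrative numbers (ten collective modes each softening by 10%), not values from Smith’s simulations:

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0           # room temperature, K

def classical_mode_entropy_change(omega_before, omega_after):
    """Entropy change (in units of k_B) for one classical harmonic mode
    whose frequency shifts from omega_before to omega_after:
    Delta S / k_B = ln(omega_before / omega_after)."""
    return math.log(omega_before / omega_after)

# Illustrative only: ten low-frequency collective modes,
# each softening by 10% when the ligand binds.
modes_before = [1.0] * 10                    # arbitrary frequency units
modes_after = [0.9 * w for w in modes_before]

delta_S_over_kB = sum(
    classical_mode_entropy_change(b, a)
    for b, a in zip(modes_before, modes_after)
)

# Free-energy contribution -T * Delta S, in eV (negative favours binding)
delta_G = -T * K_B_EV * delta_S_over_kB
print(f"Delta S = {delta_S_over_kB:.3f} k_B")
print(f"-T Delta S = {delta_G * 1000:.1f} meV")
```

Even this crude estimate gives a free-energy contribution of a few tens of meV, of the order of k_B T, which is why softening of collective modes can matter for binding.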

A highly simplified theoretical model of allosteric binding solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought about how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you cool a protein far enough there’s a temperature – a glass transition temperature – below which these low-frequency vibrations are frozen out. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from rather a difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines, such as the light-driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important emerging conclusion from all this is this: it’s not that protein-based machines work despite their floppiness and their constant random flexing and vibrations; they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

If biology is so smart, how come it never invented the mobile phone/iPod/Ford Fiesta?

Chris Phoenix, over on the CRN blog, in reply to a comment of mine, asked an interesting question to which I replied at such length that I feel moved to recycle my answer here. His question was: given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesized with wet chemistry, why is it that life never discovered graphite? From this he questioned the degree to which biology can be claimed to have found optimum, or near-optimum, solutions to the problems of engineering at the nanoscale. I answered his question (or at least commented on it) in three parts.

Firstly, I don’t think that biology has solved all the problems it faces optimally – it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one looks, the more optimal the solutions are. This is obvious when one thinks about it; the problems of making nanoscale machines were the first problems biology had to solve, it has had the longest to solve them, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like the eye) biology has to co-opt solutions that were evolved to solve some other problem. I would argue that many of the local maxima that evolution gets trapped in are actually near-optimal solutions of nanotechnology problems that have had to be sub-optimally adapted for larger-scale operation. As single-molecule biophysics progresses and reveals just how efficient many biological nanomachines are, this view, I think, becomes more compelling.

Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or in human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of different properties – strength, stiffness, fracture toughness – as well as external factors such as difficulty of processing and cost (either in money for humans or in energy for biology), and achieve the best compromise set of properties for fitness for purpose. So the question to ask is: in what circumstances would high strength be so valuable for an organism, particularly a nanoscale organism, that all other factors would be overruled? I can’t actually think of many, as organisms, particularly small ones, generally need toughness, resilience and self-healing properties rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

Finally, don’t forget that strength isn’t really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask, not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison, the linear polymers that biology uses when it needs high strength are actually much more forgiving, if you can work out how to get them aligned – it’s much easier to make a long polymer with no defects than it is to make a two- or three-dimensional structure with a similar degree of perfection.
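This point about defects goes back to Griffith’s classic analysis of brittle fracture: the failure stress depends not just on the bond strength but on the size of the largest crack-like flaw, falling as the square root of the flaw size. A sketch of the scaling, using order-of-magnitude illustrative numbers rather than measured values for any real material:

```python
import math

def griffith_strength(E, gamma_s, a):
    """Griffith fracture stress (plane stress) for a brittle solid
    containing a crack-like defect of half-length a:
    sigma_f = sqrt(2 * E * gamma_s / (pi * a))."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

# Illustrative numbers: a stiff covalent solid.
E = 1.0e12       # Young's modulus, Pa
gamma_s = 1.0    # surface energy, J/m^2

# Strength collapses as the largest defect grows from 1 nm to 10 um.
for a in [1e-9, 1e-7, 1e-5]:
    sigma = griffith_strength(E, gamma_s, a)
    print(f"defect half-length {a:.0e} m -> strength {sigma / 1e9:.2f} GPa")
```

A hundredfold increase in defect size costs a factor of ten in strength, which is why making a strong material is really a problem of defect control, not of chemistry alone.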

The Lion lies down with the Lamb

The recent report from P. Guo at Purdue that RNA can be used as the building block for nanostructures (original article in Nano Letters, subscription probably required; news report) has generated rare unanimity between the Drexlerian and the nanobusiness wings of the nanotechnology movement. Remarkably, the achievement has united the Tom and Jerry of the nanotechnology blog world, TNTlog and the Center for Responsible Nanotechnology. Is this excitement warranted?

Very much so, in my view. The advantage of DNA as a nanotechnological building block (as demonstrated in Seeman’s work) is that the self-assembly process between base pairs that creates the duplex, double-helix structure is very straightforward to understand and model. This means that the design process, in which one deduces what sequence of bases is required to produce a given 3d structure, is highly tractable. Proteins exhibit a much richer range of useful three-dimensional structures, but their rational design involves a solution of the protein folding problem, which remains elusive. RNA offers a middle way; RNA self-assembly is still governed by straightforward base-pairing interactions involving four bases in two complementary pairs. But RNA, unlike DNA, can fold to form loops and hairpins, giving a much richer range of possible 3d structures. Thus, using RNA, we could get the best of both worlds – the richness of the potential self-assembled structures of proteins, with the computational tractability of DNA.
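A toy illustration of why nucleic-acid design is so tractable: the strand that will bind a given target follows directly from the Watson–Crick pairing rules (A–U and G–C in RNA), read in the antiparallel direction. This is of course only the most elementary design step, not Guo’s or Seeman’s actual methods:

```python
# Watson-Crick pairing rules for RNA: A pairs with U, G pairs with C.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement_strand(seq):
    """Return the strand that pairs with `seq` in antiparallel
    orientation (the reverse complement) - the basic design move
    in nucleic-acid nanotechnology."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(seq))

print(complement_strand("GGAUCCA"))  # prints UGGAUCC
```

Because the pairing rules are this simple, deducing which sequences will stick to which – the heart of the design problem – reduces to string manipulation; for proteins, the equivalent step requires solving the folding problem.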

We should remember that neither DNA nanotechnology nor RNA nanotechnology is likely to yield mass-market products any time soon – nucleic acids are delicate molecules that remain enormously expensive. But these lines of research are just the sort of avenue that publicly funded nanoscience should be supporting – visionary stuff that can excite both the Drexlerian radicals and the pragmatic nano-businessmen.