Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration are actually vital to the way these machines operate. Some fascinating evidence for this view was presented in a seminar I attended yesterday, given by Jeremy Smith of the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in the shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that with the ligand bound, the low frequency collective vibrations are lowered further in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
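To get a feel for the numbers, here is a minimal sketch of that entropy argument, using the standard formula for the entropy of a single quantum harmonic mode. The figures – a collective mode of around 1 THz, softened by 20% on binding – are purely illustrative values I’ve chosen for the example, not numbers from Smith’s simulations.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
NA = 6.02214076e23       # 1 / mol

def mode_entropy(omega, T=300.0):
    """Entropy of one quantum harmonic mode of angular frequency
    omega (rad/s) at temperature T, in units of kB."""
    x = HBAR * omega / (KB * T)
    return x / np.expm1(x) - np.log1p(-np.exp(-x))

# Illustrative numbers only: a low frequency collective mode at ~1 THz,
# softened by 20% when the ligand binds.
omega_free = 2 * np.pi * 1.0e12
omega_bound = 0.8 * omega_free

dS = mode_entropy(omega_bound) - mode_entropy(omega_free)   # units of kB
dG = -300.0 * KB * dS * NA / 4184.0                         # kcal/mol

print(f"entropy gain per mode: {dS:.2f} kB")                 # ~0.22 kB
print(f"free energy contribution -T dS: {dG:.2f} kcal/mol")  # ~ -0.13
```

A tenth of a kcal/mol per mode looks tiny, but summed over a protein’s many low frequency collective modes it can make a respectable contribution to the binding free energy.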

A highly simplified theoretical model of allosteric binding, solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought about how one might design synthetic systems using the same principles.
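In schematic form – my own notation, and a drastic simplification of the published model – the idea can be carried by a single classical harmonic mode shared between two binding sites:

```latex
% Free energy of one classical harmonic mode of frequency \omega:
F(\omega) = -k_B T \ln Z = k_B T \ln\!\left(\frac{\hbar\omega}{k_B T}\right)

% If both binding sites couple to the same mode, with frequencies
% \omega_{00}, \omega_{10}, \omega_{01}, \omega_{11} for the empty, singly
% and doubly occupied states, the allosteric coupling free energy is
\Delta\Delta G = F(\omega_{11}) + F(\omega_{00}) - F(\omega_{10}) - F(\omega_{01})
               = k_B T \ln\!\left(\frac{\omega_{11}\,\omega_{00}}{\omega_{10}\,\omega_{01}}\right)
```

If binding at each site shifts the frequency of the shared mode, this coupling is non-zero even though the two sites never touch – cooperativity carried entirely by thermal vibrations.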

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you cool a protein far enough you reach a temperature – a glass transition temperature – at which these low frequency vibrations are frozen out; this coincides with the temperature at which the protein stops functioning. More direct evidence comes from a rather difficult and expensive technique called quasi-elastic neutron scattering, which can probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described showed directly just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital to the operation of other machines, such as the light-driven proton pump bacteriorhodopsin and one of the important signalling proteins of the Ras GTPase family.

The important conclusion emerging from all this is that protein-based machines don’t work despite their floppiness and their constant random flexing and vibration – they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

If biology is so smart, how come it never invented the mobile phone/iPod/Ford Fiesta?

Chris Phoenix, over on the CRN blog, in reply to a comment of mine, asked an interesting question to which I replied at such length that I feel moved to recycle my answer here. His question was: given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesized by wet chemistry, why is it that life never discovered graphite? From this he questioned the degree to which biology can be claimed to have found optimum, or near optimum, solutions to the problems of engineering at the nanoscale. I answered his question (or at least commented on it) in three parts.

Firstly, I don’t think that biology has solved all the problems it faces optimally – it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one looks, the more optimal the solutions are. This is obvious when one thinks about it: the problems of making nanoscale machines were the first problems biology had to solve, it has had the longest to solve them, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like the eye), biology has to coopt solutions that evolved to solve some other problem. I would argue that many of the local maxima that evolution gets trapped in are actually near optimal solutions to nanotechnology problems that have had to be sub-optimally adapted for larger scale operation. As single molecule biophysics progresses and reveals just how efficient many biological nanomachines are, this view, I think, becomes more compelling.

Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or in human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of properties – strength, stiffness, fracture toughness – as well as external factors such as difficulty of processing and cost (in money for humans, in energy for biology), and achieve the compromise set of properties that gives the best fitness for purpose. So the question to ask is: in what circumstances would high strength be so valuable to an organism, particularly a nanoscale organism, that all other factors would be overruled? I can’t actually think of many; organisms, particularly small ones, generally need toughness, resilience and self-healing rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

Finally, don’t forget that strength isn’t really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison, the linear polymers that biology uses when it needs high strength are much more forgiving, if you can work out how to get them aligned – it’s much easier to make a long polymer with no defects than it is to make a two or three dimensional structure with a similar degree of perfection.
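To see how punishing the defect problem is, here’s a back-of-envelope sketch using the classic Griffith criterion for brittle fracture. The materials numbers are rough, handbook-style values I’ve picked for illustration, not precise figures for any real graphite sample.

```python
import numpy as np

E = 1.0e12      # in-plane Young's modulus of graphite, ~1 TPa
GAMMA = 0.2     # surface energy, J/m^2 (rough figure)

def griffith_strength(a):
    """Griffith fracture strength (Pa) for a crack of half-length a (m):
    sigma_f = sqrt(2 * E * gamma / (pi * a))."""
    return np.sqrt(2.0 * E * GAMMA / (np.pi * a))

ideal = E / 10.0   # rule-of-thumb ideal (defect-free) strength, ~E/10
for a in (1e-9, 1e-8, 1e-7, 1e-6):
    sigma = griffith_strength(a)
    print(f"flaw {a*1e9:7.1f} nm -> strength {sigma/1e9:6.1f} GPa "
          f"({sigma/ideal:.1%} of ideal)")
```

A single micron-sized flaw knocks the strength down to well under one percent of the ideal value: bond chemistry sets the stiffness, but this kind of arithmetic sets the strength.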

The Lion lies down with the Lamb

The recent report from P. Guo at Purdue that RNA can be used as the building block for nanostructures (original article in Nano Letters, subscription probably required; news report) has generated rare unanimity between the Drexlerian and nanobusiness wings of the nanotechnology movement. Remarkably, the achievement has united the Tom and Jerry of the nanotechnology blog world, TNTlog and the Center for Responsible Nanotechnology. Is this excitement warranted?

Very much so, in my view. The advantage of DNA as a nanotechnological building block (as demonstrated in Seeman’s work) is that the self-assembly process between base pairs that creates the duplex, double-helix structure is very straightforward to understand and model. This means that the design process, in which one deduces what sequence of bases is required to produce a given 3d structure, is highly tractable. Proteins exhibit a much richer range of useful three dimensional structures, but their rational design involves solving the protein folding problem, which remains elusive. RNA offers a middle way; RNA self-assembly is still governed by straightforward base pairing interactions involving four bases in two complementary pairs, but RNA, unlike DNA, can fold to form loops and hairpins, giving a much richer range of possible 3d structures. Thus, using RNA, we could get the best of both worlds – the richness of the potential self-assembled structures of proteins, with the computational tractability of DNA.
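The tractability point is easy to illustrate: the entire “design rule” for nucleic acid self-assembly is a four-entry complementarity table. The sketch below is a toy of my own devising – it has nothing to do with Guo’s actual construct – showing the rule and a minimal RNA hairpin built from it.

```python
# Watson-Crick complements: the whole design rule for nucleic acid
# self-assembly fits in a four-entry table, which is why deducing which
# strand binds which is computationally easy.
DNA_PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(seq, pairs=RNA_PAIRS):
    """Reverse complement: the strand that binds `seq` antiparallel."""
    return "".join(pairs[base] for base in reversed(seq))

print(complement("GGCUAACG"))   # -> CGUUAGCC

# A hairpin forms when one part of a strand complements another part of
# the same strand: here a six-base stem closed by a four-base UUUU loop.
stem = "GGCUAA"
hairpin = stem + "UUUU" + complement(stem)
print(hairpin)                  # -> GGCUAAUUUUUUAGCC
```

Doing the same design exercise for a protein would mean predicting how a sequence of twenty different amino acids folds in three dimensions – exactly the folding problem that remains unsolved.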

We should remember that neither DNA nanotechnology nor RNA nanotechnology is likely to yield mass-market products any time soon – nucleic acids are delicate molecules that remain enormously expensive. But these lines of research are just the sort of avenue that publicly funded nanoscience should be supporting – visionary stuff that can excite both the Drexlerian radicals and the pragmatic nano-businessmen.