Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in shape of the protein. This shape change, which biologists call allostery, underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low-frequency collective vibrations are lowered further in frequency – the molecule effectively becomes softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
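To see why softer means more entropy, consider the classical entropy of a single harmonic mode, S = k_B(1 + ln(k_B T/ħω)): lowering the frequency ω raises S, so each softened mode contributes a term of −TΔS favouring binding. A minimal sketch (the frequencies here are illustrative round numbers, not values from Smith's simulations):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
T = 300.0               # roughly physiological temperature, K

def mode_entropy(omega):
    """Classical entropy of one harmonic mode of angular frequency omega at T."""
    return K_B * (1.0 + math.log(K_B * T / (HBAR * omega)))

# Hypothetical low-frequency collective mode: 1 THz in the free protein,
# softened to 0.8 THz once the ligand is bound.
omega_free = 2 * math.pi * 1.0e12
omega_bound = 2 * math.pi * 0.8e12

delta_S = mode_entropy(omega_bound) - mode_entropy(omega_free)
# Gain per mode is k_B * ln(omega_free/omega_bound) = k_B * ln(1.25),
# about 0.22 k_B: small per mode, but a protein has many such modes.
print(delta_S / K_B)
```

Note that the entropy change depends only on the ratio of frequencies, so even a modest fractional softening across many collective modes adds up to a significant driving force.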

A highly simplified theoretical model of allosteric binding solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein it may offer food for thought for how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you lower the temperature of a protein far enough there’s a temperature – a glass transition temperature – at which these low frequency vibrations stop working. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from rather a difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines such as the light driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important conclusion emerging from all this is that protein-based machines don’t work despite their floppiness and their constant random flexing and vibrations – they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

8 thoughts on “Feel the vibrations”

  1. This is a very interesting point about biological mechanisms. I have a number of books by David Goodsell which discuss cell biology from a rather mechanistic perspective, which touch on this issue. His wording is that proteins “breathe”, they flex and open and close little holes and cracks and such. As you point out, this mechanism is crucial for enzymatic action and other kinds of protein interactions.

    Another point made by Goodsell is that the cell is somewhat limited in its toolkit by the fact that proteins can’t be made much bigger than a few hundred units. Beyond that the chance of an error is too great. So many cellular machines are built out of protein complexes. A curious point is that cells often use complexes which are made out of only one or a few proteins which self-assemble in a repetitive way to produce a larger machine. I think his explanation was that by depending on self-assembly of smaller units, any proteins with errors would not fit and would eventually be dislodged to be replaced by a correct protein. In this way the cell is able to tolerate the rather high error rate of protein synthesis and still build complex mechanisms.

    This ties back to your earlier point about cellular biology evolving to be nearly optimal for molecular processing tasks. If the cell is limited in this way, then large, precise structures not composed of repetitive units would be another space of possible designs which are unavailable to it, making optimality harder and less likely.

  2. Hal, your comments on faulty proteins being squeezed out of self-assembled arrays sound exactly like what Neil Gershenfeld wrote about error detection needing to be inherent in the assembly equipment. http://www.edge.org/3rd_culture/gershenfeld03/gershenfeld_index.html

    I just wrote a science essay for our newsletter on this topic, concluding that although complex Avogadro-scale systems probably can’t be built with today’s engineering, simple Avogadro-scale systems can. I’d appreciate feedback from both Hal and Richard. If you leave messages on that site, I’m more likely to see them.

  3. I suspect, though I’m not certain, that the upper limit on protein size comes from the requirement to be foldable. Foldability is a very special property that is possessed only by a very small fraction of all possible protein sequences, and my suspicion is that that fraction gets much smaller as lengths get bigger. Actually there are some very big proteins in nature – much bigger than a few hundred units – but they either fold into repeated, almost independent subdomains (like the muscle protein titin) or don’t fold at all (like the gluten proteins familiar to breadmakers). I guess this supports your point that large structures in biology tend not to be precise. (Let me say in passing that I really like David Goodsell’s work).

    That article by Neil Gershenfeld is excellent, Chris. It’s an interesting thing about biology that not only is the way it is programmed hierarchical and emergent, but so is the way that errors are checked and corrected. This is a very timely topic at the moment, as it becomes clearer that many important diseases, especially of old age, are essentially the result of failures to correct errors in protein folding (e.g. Alzheimer’s, CJD, Parkinsons, Type II diabetes).

  4. It’s scarcely surprising that machines which evolved out of floppy parts require that their parts be floppy to work. The question is, does this really imply that machines with rigid parts can’t work? After all, airplanes were inspired by birds, but don’t fly by flapping their wings.

  5. I don’t think it implies that machines with rigid parts can’t work. But it’s another difference between biology and nanotechnology. Often people argue that biology is a “proof of concept” for nanotech, that the existence of ribosomes is evidence that Drexlerian assemblers will work. Yet considerations like those described here by Richard further illustrate the tremendous difference in design philosophy and methodology between biological mechanisms and those envisioned for molecular manufacturing. Now, I don’t necessarily agree with Richard’s conclusion that the principles of biological machines must be learnt by nanotechnologists and used as guidelines; but I also don’t agree with those who would downplay these differences.

    In my opinion biology tells us very little about whether artificial nanotech systems such as those which Drexler proposes will work, either positively or negatively.

  6. The existence of ribosomes is evidence that at least some forms of mechanosynthesis work. This argument is probably more relevant for the engineered-protein flavor of molecular manufacturing (which, if you read closely, is what Drexler was mostly thinking about when he wrote Engines, and why he was ever worried about gray goo).

    Now that the discussion has shifted to diamondoid (starting with Nanosystems), and pick-and-place of silicon atoms has been demonstrated, and Freitas and Merkle have found a carbon dimer deposition reaction, and Smalley has demonstrated that he doesn’t know basic facts about enzymes that have been in the literature for two decades, I don’t think there’s much need for the ribosome argument anymore.

    It’s true, biology doesn’t tell us much about whether diamondoid nanosystems (including mechanosynthetic systems) will work, or what performance they will have. As far as I know, Richard is the only person who has a thoughtful skeptical argument. He and I disagree on whether it will be feasible to engineer diamondoid surfaces that exhibit superlubricity. I might caricature his position as “If it doesn’t use Brownian motion it probably won’t work well.”

    If superlubricity (which has been demonstrated between graphite sheets) can be used in nanosystems, it seems clear that diamondoid motors running in vacuum can have power densities orders of magnitude higher than biology, with not much less efficiency. Also, deterministic digital logic looks a lot easier in diamondoid than in wet goopy stuff. I won’t claim that digital logic is efficient compared to analog, but it’s many orders of magnitude smaller and more efficient than today’s digital circuits, and a lot easier to engineer than analog.


  7. Just re-read the article. I don’t pretend to fully understand entropy, but isn’t the increase in entropy from making the ligand “softer” compensated by the decrease in (total system) entropy caused by the ligand and pocket being linked (fewer degrees of freedom)?

    As to Richard’s final point: “…[proteins] work because of [vibrations]. This is a lesson that designers of artificial nanomachines will need to learn.” Builders in steel do not need to know how rubber works.

    Let’s hash this out once and for all.
    1) Is there anything that macro-scale machines can do, and proteins can do, but stiff nanoscale systems (with maybe a few hinges) cannot do?
    2) What protein-specific functions do you think we will want but be unable to achieve in nanomachines, and why? (Note that “Uses Brownian motion, because Brownian motion is useful” is merely circular.)

    BTW, I’m still waiting for comments on my “simple redundancy” article (linked from note 2 above). I claim that you don’t need error correction at the very lowest levels (molecular assembly) even though biology clearly uses it; instead, I explain why simple engineering will be enough to deal with errors in systems with molar numbers of parts.


  8. Chris, it’s the protein molecule that’s softened on binding, not the ligand. Yes, you lose the translational and rotational degrees of freedom of the ligand when it binds, leading to a loss of entropy of 3k per molecule. But for the protein, with N atoms (where N may be many hundred), there are 3N degrees of freedom to play with, which in a classical linear system (which, of course, the protein isn’t) would give an entropy of 3Nk. So the unfreezing of only a few protein modes can compensate for the loss of translational entropy of the ligand.
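    The bookkeeping above can be checked on the back of an envelope. Each classical mode softened from ω to fω gains entropy k ln(1/f), so n such modes offset the ligand’s assumed 3k loss once n ln(1/f) ≥ 3. A sketch with illustrative softening factors (not measured values):

    ```python
    import math

    def modes_needed(softening_factor, entropy_loss_in_k=3.0):
        """Smallest number of modes, each softened in frequency by the given
        factor, whose entropy gain k*ln(1/factor) per mode offsets an
        entropy loss of entropy_loss_in_k units of k."""
        gain_per_mode = math.log(1.0 / softening_factor)
        return math.ceil(entropy_loss_in_k / gain_per_mode)

    print(modes_needed(0.5))  # modes halved in frequency: 5 suffice
    print(modes_needed(0.9))  # a mild 10% softening needs ~29 modes
    ```

    Either way the required number of modes is small compared with the 3N available to a protein of several hundred atoms, which is the point of the argument.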

    I don’t think it’s realistic to expect to hash anything out once and for all! We will only find out what is possible when someone achieves it or fails to achieve it in a serious experimental effort. The sort of thing I’m very exercised about at the moment is how to achieve direct conversion of chemical energy to mechanical energy, say to power a self-motile 1 micron size vessel, how to selectively bind and release defined chemical species, how to selectively pump chemical species against chemical potential gradients. Protein based systems are of course very good at doing all these things, the challenge is to do them synthetically either in soft or hard systems. I can just about see how to go about doing them in synthetic soft systems.

    I will make some comments about redundancy and error correction when I get a moment.

Comments are closed.