David Forrest, who provided one of the pro-molecular nanotechnology voices at the Nottingham nanotechnology debate back in June, has posted some further reflections on the issues on his website. I’ll comment on these issues more soon.
Meanwhile, for those who weren’t able to get to the debate, I believe the film of the event is still being edited for web release; the text is being transcribed and will be published in the journal Nanotechnology Perceptions. There’ll be more information here when I get it.
9 thoughts on “Debating the nanotechnology debate”
No, I don’t think that the Drexler camp has adequately addressed the issue of Brownian motion. It is because of Brownian motion that I think a eutactic nanotechnology is not possible.
From my understanding, the mechanical nanotechnology systems that have been demonstrated in the lab work at cryogenic temperatures, to avoid the Brownian motion that they say is not a problem.
If they can synthesize and operate this stuff at room temperature, I will be a lot more impressed.
As an interested onlooker, I hope this spurs all sides on to actually making and testing prototypes, since that is the only way we will settle some of the arguments.
I started reading David Forrest’s website and I am not impressed by his counterpoints (to say the least). I hope you never feel alone when surrounded by a mob of angry Drexlerians. Don’t worry, the real scientists agree with you. 😉
I’ll just cite a couple of his counterpoints and lightly mock them.
My favorite is
“With existing design tools, it is harder to design machines with the irregular surface conformations and electrostatic potential distributions found in most biological protein molecules than to design machines based on the regular geometric features of diamondoid materials.”
This is so wrong. It is amazingly easy to cut and paste DNA. Consequently, because biological organisms do all of the information processing and synthesis of proteins for us, we can create any protein we want in the lab and, if we want, manufacture it in kilogram quantities or more. Think about that. You name the protein. Any protein. Any sequence. It can be made (modulo folding and function). And fast, cheap DNA synthesis will make the process even faster than current cutting/pasting techniques that use restriction enzymes. Incidentally, restriction enzymes (enzymes that cut DNA only at specific sequences) are yet another tool we get from biology for free.
Now, compare this to making a general organic molecule. I know nothing about diamondoid chemistry, but I do know a little about polymer chemistry. Imagine trying to polymerize a molecule containing a small number of different monomers. Sure, one monomer is easy (polyethylene, polyurethane, etc.), but what about two? Three? Four? (Remember, natural proteins use 20 monomers!) You need some fancy swapping of functional groups to get exactly one monomer unit to add while making sure more than one does not. And you need a way to switch back and forth between monomers, which might require cycles of rinsing, washing, and resolvating in new monomer reagents (not cheap for large-scale manufacturing!).
So, overall, protein construction and manufacture is much, much cheaper and easier than exotic organic chemistry synthesis.
In addition, because biology is at work, we get all the benefits of evolution to help us design the protein to work better. We can either force the organism to naturally evolve the protein to work better or we can use artificial ‘directed evolution’ (a molecular biology technique) to do a sort of optimization of the protein sequence towards a measurable, desired function. No such processes occur in mechanical systems.
Just one more Forrest comment: “As early as 1981, Drexler noted that biological structures can function as machine components, and proposed tailoring protein molecules to serve as more effective molecular machine components.”
I love this. Forrest says, “You’re wrong!” And then, right after, “But Drexler thought of this first anyway.” Heh, no offense, but the idea of engineering proteins to perform useful functions is a lot older than 1981.
I’ll end my rant with a final Forrest comment about what is needed to prevent Brownian motion: “the eutactic environment which is sealed from contaminants, and in which the trajectory of every atom is controlled.”
Unfortunately, no such environment exists at any finite temperature. If an atom’s degrees of freedom must be controlled, that includes its translational, rotational, and vibrational ones. How exactly one stops an atom from vibrating without lowering the temperature to near absolute zero is beyond me.
Keep up the good work, Richard. If progress is going to be made, it will be done by real scientists who not only hypothesize, but also experiment and objectively observe.
Correction: Atoms don’t vibrate. Molecules do. And my advisor teaches stat mech… Erp.
Kurt, Howard – I think the Brownian motion/thermal noise issue is an important one. Nanosystems doesn’t neglect it, but it tells you how to calculate it without really assessing its impact on a system of any complexity. The Merkle paper David references is a bit disingenuous on this. He quite correctly says that the RMS thermal displacement goes as √(kT/stiffness), and simply states that if you make the stiffness big enough the problem goes away. This brushes under the carpet the fact that the relevant stiffness isn’t a material parameter – it has a size dependence. In fact, this stiffness must scale as a modulus times a size. The situation is worse because the relevant parameter isn’t the absolute value of the RMS displacement; it must be the ratio of thermal displacement to overall size. You could call this the “thermal tolerance”. This then turns out to scale as (size)^(−3/2), which even with diamond is going to cause problems at the nanoscale.
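For concreteness, here’s a minimal numeric sketch of that scaling, under the stated assumption that stiffness ≈ (Young’s modulus) × (size), with E ≈ 1 TPa for diamond. Real geometries (cantilevers, bearings, gear teeth) introduce large prefactors, so treat the absolute numbers as illustrative only:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
E = 1.0e12          # Young's modulus of diamond, Pa (~1 TPa)

for L in (1e-9, 10e-9, 100e-9):       # component size, m
    k = E * L                          # stiffness ~ modulus * size, N/m
    x_rms = math.sqrt(k_B * T / k)     # RMS thermal displacement, m
    tol = x_rms / L                    # "thermal tolerance" (dimensionless)
    print(f"L = {L * 1e9:6.1f} nm  k = {k:9.1f} N/m  "
          f"x_rms = {x_rms * 1e12:6.3f} pm  x_rms/L = {tol:.1e}")
```

The ratio x_rms/L grows by a factor of 10^1.5 ≈ 32 for every tenfold reduction in size, which is exactly the (size)^(−3/2) scaling of the thermal tolerance.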
Howard, as you say thermal noise exists no matter how good the vacuum is – any system in thermal equilibrium by any mechanism must undergo this kind of Brownian motion (to be fair to David, he’s probably not familiar with the generalised usage of the term Brownian motion to include all kinds of thermal fluctuations, not just those caused by collisions with gas molecules).
Finally, thanks for your solidarity in the face of those angry mobs! In fairness, though, I should say that I got to meet pretty well all of the high profile Drexler supporters at the Foresight meeting (not to mention the man himself), and they were all very civil (well, maybe with one exception). The debate clearly hasn’t caused too many personal bad feelings; I had a brief but entirely cordial word with David Forrest, and shared a bottle of very fine old Australian shiraz with Josh Hall.
Two points. First, with regard to Howard Salis’ critique of David Forrest’s comments about diamondoid vs biological systems. Howard misreads what David is saying. David is not saying that it is easier to build diamondoid today. Of course Howard is right that we have many tools for custom-creating biological systems and essentially none for creating diamondoid. What David actually said is that it is easier to DESIGN diamondoid systems. Designing them means creating an abstract description of the desired system which should have the appropriate functionality. It is quite difficult to do this with biological molecules because of their softness, irregular shapes, and dependence on the cumulative effects of many weak forces to perform their functions. Hypothetical diamondoid devices are rigid and are based on strong covalent bonds. This makes them much easier to design on paper.
Second, with regard to Drexler’s supposed failure to recognize the importance of thermal motions. He does go into this in Nanosystems. For example, figure 8.2 shows ways to construct a reactive tip for mechanosynthesis so that it has a high degree of lateral stiffness and therefore immunity to misreactions due to thermal motions. The unsupported tip has a stiffness of 6 N/m while the other examples increase stiffness up to 65 N/m. Section 8.5.5a describes a specific example where a mechanosynthetic reaction might go wrong due to thermal motion. Thermal vibration could cause an active tip to react at the wrong point in a structure being constructed. Drexler argues that typical designs should be able to separate possible misreaction sites by at least one bond length, or about 0.1 nm. At room temperature this requires a stiffness of the reactive tip of 12 N/m in order to reduce the misreaction rate to below one in 10^15. Another strategy he proposes is simply approaching the reactive site at a position slightly offset from the desired point, in the direction opposite to the closest misreaction point. This can often reduce the necessary stiffness to 6 N/m. These stiffness values are well within a reasonable range for diamondoid support structures.
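A quick back-of-the-envelope check on those stiffness figures (my own sketch, not Drexler’s calculation): by equipartition, a harmonic tip with lateral stiffness k has an RMS thermal displacement of √(k_B·T/k). Drexler’s one-in-10^15 misreaction estimate involves the Boltzmann-weighted probability of a ~0.1 nm excursion, so these RMS values only gesture at the full analysis:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Tip lateral stiffness values quoted from Nanosystems, in N/m
for k in (6.0, 12.0, 65.0):
    # Equipartition for a harmonic mode: (1/2) k <x^2> = (1/2) k_B T
    x_rms = math.sqrt(k_B * T / k)
    print(f"k = {k:5.1f} N/m  ->  x_rms = {x_rms * 1e9:.4f} nm")
```

Even the softest tip (6 N/m) has an RMS excursion of roughly 0.026 nm, well below the 0.1 nm misreaction separation; the real question is how far out in the Gaussian tail a one-in-10^15 error budget pushes you.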
Hal, two very quick points about Drexler’s calculation in Nanosystems. Firstly, it’s obviously necessary, in order to be able to do atomically precise mechanosynthesis, to have positional accuracy to within one atom, but this is not sufficient to do all the much more complex things envisaged in MNT. Take, for example, mechanical computing. Recall that Babbage’s difference engine was close to being unbuildable in the 1830s due to the inability to do mechanical engineering to the required precision. But a relative precision of a few percent – corresponding to one atom in a hundred or so – is very poor by the standards even of instrument makers in the 19th century. I think it is still correct to say that the effects of thermal noise have not been considered in any system with any degree of complexity – for example a machine including interlocking parts, cogs and gears and the like.
The design issue is an interesting one. I think a correct statement of the problem is that we don’t know how to design at the nanoscale in either paradigm. The problem is that we think we know how to design in the “hard” paradigm simply because we assume that we can carry over ways of thinking from mechanical engineering. But I think this is an unverified assumption that is open to severe question. We assume that simple molecular modelling is sufficient to design chemically stable structures, but since we can’t do density functional theory in our heads this really is just an assumption. We assume that because C-C is the strongest bond we know about, we can assume diamondoid will behave rigidly on small scales. But, again, the thermal noise calculations show us this is not so – it’s as stiff as we can get, but is it stiff enough for a mechanical engineering mindset not to cause us trouble? It’s not obvious that it is. On the other hand, we know we don’t know how to design like biology does – and this was the point of my recent post about complexity. But there are a lot of smart people working on learning those rules.
I’ll be writing more about these issues when my next proposal deadline has passed!
To me, at least, the ability to design something rests firmly on the ability to construct the design and test it. Otherwise, how do you know it will work? How do you know your assumptions are valid and that your methods are sound?
We already know something about how proteins have evolved. We know how catalytic sites are arranged and how protein-protein interaction surfaces are conserved. But that information was gained through experimentation and we can now use it to design proteins.
About stiffness and diamondoid: Have you explored the fracture mechanics of these tips? How much force can you apply before one fractures? The material can be very hard but still contain a single tiny imperfection from which a crack can grow. Every material has these imperfections. Has this been tested using other materials?
Howard, now that Hal has explained David Forrest’s discussion to you, do you accept that your “light mocking” was misplaced? David was talking about the design issues with the protein approach. Richard seems to agree that designing with the hard approach could appear to be the more tractable problem, though he has doubts about the assumptions behind designing for the hard approach.
It is not claimed that Drexler thought up engineering proteins. It is claimed that he was an early proponent of a protein-centric approach to reaching the goal of molecular nanotechnology (since he did early pioneering thinking on molecular nanotechnology, it is just a fact that he can rightly claim a lot of firsts). For those who have spoken with him over the years, he has indicated that he felt this was the most likely successful path. My understanding of what he said is that the pathway would lead to the creation of an artificial ribosome and then on to the other mechanical systems.
It seems that instead of pausing and indicating that your earlier mocking was misplaced, you have chosen to ignore that you were mistaken and moved on to a new criticism. This gives the perception that you want to lob critiques one after another instead of having a calm discussion. You accuse Drexlerians of being an angry mob, and yet the tone you are presenting is more that of a mob mentality.
I am for technology advancement and using those capabilities to improve our civilization. If Richard Jones can bring benefits and advances from soft machines, then great. If Ralph Merkle and Robert Freitas or Zyvex can make their approaches work, then that is good too. Just as with computers: if Intel, AMD, Apple, IBM, HP, or Sun Microsystems can make better hardware, then great, whether it is with silicon or molecular electronics or optical processors; all I want is faster, more capable, and cheaper systems. Talking down, or trying to politically cut off funding for, other ideas and approaches seems counterproductive to the most efficient advancement of technology. The political cut-off of funding is more related to NNI politics and not to the healthy promotion of alternative points of view in science that Richard Jones is performing.