Is mechanosynthesis feasible? The debate continues.

In my post of December 16th, Is mechanosynthesis feasible? The debate moves up a gear, I published a letter from Philip Moriarty, a nanoscientist at Nottingham University, offering a detailed critique of a scheme, due to Robert Freitas, for achieving the first steps towards the mechanosynthesis of diamondoid nanostructures. The Center for Responsible Nanotechnology’s Chris Phoenix began a correspondence with Philip responding to the critique. Chris Phoenix and Philip Moriarty have given permission for the whole correspondence to be published here. It is released in a mostly unedited form; however, the originals contained some quotations from Dr K. Eric Drexler which he did not wish to be published, and these have therefore been removed.

The correspondence is long and detailed, amounting to 56 pages in total. It’s broken up into three PDF documents:
Part I
Part II
Part III.

I’m going to refrain from adding any comment of my own for the moment, so readers can form their own judgements, though I’ll probably make some observations on the correspondence in a few days’ time.

The correspondence between Philip Moriarty and Chris Phoenix, for the time being, ends here. However, Philip Moriarty has asked me to include this statement, which he has agreed with Robert Freitas:

“Freitas and Moriarty have recently agreed to continue discussions related to the fundamental science underlying mechanosynthesis and the experimental implementation of the process. These discussions will be carried out in a spirit of collaboration rather than as a debate and, therefore, will not be published on the web. In the event that this collaborative effort produces results that impact (either positively or negatively) on the future of mechanosynthesis, those results will be submitted for publication in a peer-reviewed journal.”

Competitive Consumption

Partisans of molecular nanotechnology keep coming back to the theme of the devastation that they say will be caused to the world’s economic systems when it becomes possible to manufacture anything at no cost. Surely, they say, when goods cost nothing to make, the money economy must wither away? I don’t accept the premise of this argument, but even if I did, I would still think it rested on a misunderstanding of how economics works. The laws of economics, inasmuch as anything in that discipline can be described as a law, are really observations about human nature, and as such are not likely to be overturned by a mere technological advance. The key fallacy in this way of thinking is very succinctly put in an excellent book I’ve just finished: A nation of rebels: why counterculture became consumer culture, by Joseph Heath and Andrew Potter.

This book is mainly an entertaining polemic against the counterculture and the anti-globalisation movement. What’s relevant to us here is its gleeful demolition of the idea of postscarcity economics, as proposed by Herbert Marcuse and Murray Bookchin. This is the idea that once machines were able to take care of all our material needs and wants, we would be able to form a society based not on the demands of economic production, but on fellowship and love. It’s very easy to see the connection between this and the arguments made by the proponents of molecular nanotechnology.

The key concept in understanding what’s wrong with these ideas is the notion of a “positional good”. Positional goods get their value from the fact that not everyone can have them; people pay lots of money for a rare sports car like an Aston Martin not simply because it is a nice piece of engineering, but explicitly because possession of one signals, in the view of the purchaser, something about their exalted status in society. The whole aim of much advertising and brand building is to increase the value of artefacts that often cost very little to make by associating them with status messages of this kind. Very few people are immune to this, unless they live in cabins in the wilderness; for the middle-class majorities of rich countries, the biggest expenditure is a house to live in, which by virtue of the importance of location and neighbourhood is an archetypal positional good.

When one realises how important positional goods are in market economies, the fallacy of the idea that molecular manufacturing would cause the end of the money economy becomes clear. In the words of Heath and Potter:

“What eventually led to the undoing of these views was the failure to appreciate the competitive nature of our consumption and the significance of positional goods. Houses in good neighborhoods, tasteful furniture, fast cars, stylish restaurants and cool clothes are all intrinsically scarce. We cannot manufacture more of them, because their value is based on the distinction they provide to consumers. The idea of overcoming scarcity through increased production is incoherent; in our society, scarcity is a social, not a material, phenomenon.”

Molecular nanotechnology, Drexler and Nanosystems – where I stand

For the convenience of new readers of Soft Machines, here’s a quick summary of my personal positions on the question of the feasibility of the variety of nanotechnology proposed by Dr K. Eric Drexler in his book Nanosystems. Many of the arguments are made in my book Soft Machines; I’ve discussed some of these issues in my blog in the last few months, and I’ll get round to going into more detail about some of the others in the New Year.

  • Will it be possible to make functional machines and devices that operate on the level of single molecules?
    Yes. As pointed out by Drexler in his 1986 book Engines of Creation, Nature, in cell biology, gives us many examples of sophisticated machines that operate on the nanoscale to synthesise new molecules with great precision, to process information and to convert energy. We know, therefore, that radical nanotechnology (using this term to distinguish these sorts of fully functional nanoscale devices and machines from the sorts of incremental nanotechnology involved in making nanostructured materials) is possible in principle; the question is how to do it in practice.
  • Do the proposals set out in Drexler’s book Nanosystems offer the only way to achieve such a radical nanotechnology?
    Obviously not, since cell biology constitutes one radical nanotechnology that is quite different in its design principles to the scaled-down mechanical engineering that underlies Drexler’s vision of “molecular nanotechnology”, or MNT. One can imagine an artificial nanotechnology that uses some of the same operating principles and design philosophy as cell biology, but executes them in synthetic materials (as discussed in Soft Machines). Undoubtedly other approaches to radical nanotechnology that have not yet been conceived could work too. In comparing different potential approaches, we need to assess both how easy in practice it is going to be to implement them, and what their ultimate capabilities are likely to be.
  • Does Nanosystems contain obvious errors that can quickly be shown to invalidate it?
    No. It’s a carefully written book that reflects well the state of science in the relevant fields at the time of writing. Drexler’s proposals for radical nanotechnology do not obviously break physical laws. There are difficulties, though, of two types. Firstly, in many cases Drexler used the best tools available at the time of writing and made plausible estimates in the face of considerable uncertainty. Since then, nanoscale science has advanced considerably, and in some places the picture needs to be revised. Secondly, many proposals in Nanosystems are not fully worked out, and many vital components and mechanisms remain at the level of “black boxes”.
  • How easy will it be to implement the vision of diamondoid-based nanotechnology outlined in Nanosystems?
    The Center for Responsible Nanotechnology writes “A fabricator within a decade is plausible – maybe even sooner”. I think this timeline would be highly implausible even if all the underlying science were under control, and all that remained was the development of the technology. But the necessary science is very far from being understood. Firstly, there are important uncertainties about the effect on the proposed mechanisms, based as they are on the scaling down of macroscopic mechanical engineering principles, of ubiquitous features of nanoscale physics such as strong surface forces and Brownian motion (the rough estimate after this list gives a feel for the numbers). These problems will be particularly serious for devices intended to work in ambient conditions, rather than at very low temperatures in ultra-high vacuum, and I believe they are seriously underestimated by proponents of MNT. Secondly, there is currently a huge gap in the implementation pathway. Even proponents of MNT disagree on the best way to reach their goal from our current level of technology. Drexler favours soft and biomimetic approaches (see both Nanosystems and his letter to Physics World responding to my article), though the means of moving from soft to hard systems remains unclear. Robert Freitas and Ralph Merkle favour a more direct route using diamondoid mechanosynthesis; see the ongoing discussion with Philip Moriarty here for the difficulties that this proposal may face. In conclusion, even if diamondoid-based nanotechnology does not break any physical laws in principle, I believe that in practice it will be very much more difficult to implement than its proponents think.
  • Will the advantages of the diamondoid-based nanotechnology outlined in Nanosystems be so great as to make it worth persisting to overcome these difficulties, whatever the cost?
    This depends on what you want to use the technology for. Much of the emphasis from proponents of MNT is on using the technology to manufacture artefacts. But arguably the impacts of nanotechnology will be much more important and far-reaching in areas like information processing, energy storage and transduction, and medicine, where the benefits of diamond as a structural material will be much less relevant. In these areas, evolutionary nanotechnology and other approaches to radical nanotechnology, like soft nanotechnology and bio-nanotechnology, may have a greater impact on a much shorter timescale.
  • If the diamondoid-based nanotechnology proposed in Nanosystems proves to be impossible or impractical to implement, does that mean that nanotechnology will have only marginal impacts on the economy and society?
    Not necessarily. See this post – Even if Drexler is wrong, nanotechnology will have far-reaching impacts – for a discussion.
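To give a feel for the scale of the surface-force problem flagged in the list above, here is a back-of-the-envelope comparison of van der Waals adhesion with thermal energy for nanoscale components, written as a short Python calculation. The input numbers – the Hamaker constant, the separation and the contact size – are generic order-of-magnitude assumptions of mine, not figures taken from Nanosystems.

    # Order-of-magnitude comparison of van der Waals adhesion with thermal
    # energy for nanoscale machine parts. All inputs are illustrative
    # assumptions, not values from Nanosystems.
    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # ambient temperature, K
    A = 1e-19               # typical Hamaker constant for solids, J (assumed)
    D = 0.3e-9              # closest-approach separation, m (assumed)
    face = 10e-9            # side of a square contact face, m (assumed)

    # vdW energy per unit area between two flat surfaces: A / (12*pi*D^2)
    w_per_area = A / (12 * math.pi * D**2)     # J/m^2
    W_adhesion = w_per_area * face**2          # J, for a 10 nm x 10 nm contact

    kT = k_B * T
    print(f"adhesion energy: {W_adhesion:.2e} J (~{W_adhesion / kT:.0f} kT)")
    # Adhesion comes out at hundreds of kT: nanoscale parts that touch tend
    # to stick, and thermal motion alone will not pull them apart.

On these admittedly crude numbers the adhesion energy is several hundred times kT, which is why sticking, rather than wear or fatigue, is the first problem a would-be nanoscale mechanical engineer has to worry about.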
    Is mechanosynthesis feasible? The debate moves up a gear.

    Followers of the Drexlerian flavour of radical nanotechnology often accuse nanoscientists of ignoring their approach for reasons of politics or prejudice, and take the lack of detailed critiques of books like Nanosystems as evidence that the whole Drexlerian program is feasible, and indeed imminent. Scientists, on the other hand, find the Drexlerian proposals too futuristic and too lacking in practical implementation details to be even worth criticising. The result is an ever-widening gulf between the increasingly bitter Drexlerites and a dismissive and contemptuous mainstream nanoscience community, which does neither side any good. So it’s a very positive development that Robert Freitas has presented a detailed scheme for achieving the first steps towards the mechanosynthesis of diamondoid nanostructures, and even more positive that Philip Moriarty has made a detailed critique of these proposals, based on his deep practical knowledge of scanning tunneling microscopy and surface growth processes.

    Philip’s critique is contained in an 8-page letter to The Center for Responsible Nanotechnology’s Chris Phoenix. The letter was prompted by an approach from Chris, asking Philip to expand on the criticisms of the Drexlerian vision that I reported him making at our joint appearance at the Institute of Contemporary Arts. Chris has, in turn, replied to the letter, and will be publishing the whole correspondence on the CRN web-site in due course.

    The letter covers a lot of ground; at its heart is an exploration of some fundamental problems with the Freitas scheme – just how will a diamondoid cluster grow, and is the assumption that the mechanosynthesis tool-tip will grow in the necessary pyramid shape at all realistic? The answer to this seems to be, in all probability, no.

    Just as important as this critique of one specific proposal are the general comments Philip makes about the importance of proof-of-principle experiments and of the theory-experiment feedback loop. This gets to the heart of the gulf between conventional nanoscience and the followers of Drexler. To the latter, theoretical demonstrations of feasibility in principle are primary, and considerations of how one is going to achieve the goals are secondary engineering issues that don’t need detailed consideration now. But to nanoscientists like Philip, the devil is in the details. It’s these details that determine whether a theoretically possible outcome will in practice be achieved in 10 years, in 50 years, or never. The Drexlerites tend to say “if x doesn’t work, then we’ll just try y”. But the more specific systems we try out and have to discard, the further away we get from the MNT dream of a system that can make any combination of atoms consistent with chemistry.

    Freitas and Merkle have taken a very positive step in addressing these issues of implementation and experimental detail. The fact that the proposals can be criticised is positive too; in science this type of criticism isn’t destructive. It’s at the heart of the process by which science moves forward.

    Update – 26th January. The whole correspondence between Moriarty and Phoenix, including the original letter, is now available for download here.

    Superconductivity in diamond

    A flurry of reports in the current edition of Physical Review Letters (Bustarret et al., Blase et al., Boeri et al., Roesch et al.) builds on the discovery earlier this year that boron-doped diamond is a superconductor. I’m surprised we haven’t heard more of this from MNT enthusiasts, who are usually fascinated by anything to do with diamond. The transition temperature, admittedly, is a very chilly 4 kelvin.

    Exploiting evolution for nanotechnology

    In my August Physics World article, The future of nanotechnology, I argued that fears of the loss of control of self-replicating nanobots – resulting in a plague of grey goo – were unrealistic, because it was unlikely that we would be able to “out-engineer evolution”. The article provoked this interesting response from a reader, reproduced here with his permission:

    Dr. Jones,
    I am a graduate student at MIT writing an article about the work of Angela Belcher, a professor here who is coaxing viruses to assemble transistors. I read your article in Physics World, and thought the way you stated the issue as a question of whether we can “out-engineer evolution” clarified current debates about the dangers of nanotechnology. In fact, the article I am writing frames the debate in your terms.

    I was wondering whether Belcher’s work might change the debate somewhat. She actually combines evolution and engineering. She directs the evolution of peptides, starting with a peptide library, until she obtains peptides that cling to semiconductor materials or gold. Then she genetically engineers the viruses to express these peptides so that, when exposed to semiconductor precursors, they coat themselves with semiconductor material, forming a single crystal around a long, cylindrical capsid. She also has peptides expressed at the ends that attach to gold electrodes. The combination of the semiconducting wire and electrodes forms a transistor.

    Now her viruses are clearly not dangerous. They require a host to replicate, and they can’t replicate once they’ve been exposed to the semiconducting materials or electrodes. They cannot lead to “gray goo.”

    Does her method, however, suggest the possibility that we can produce things we could never engineer? Might this lead to molecular machines that could actually compete in the environment?

    Any help you could provide in my thinking through this will be appreciated.

    Thank you,

    Kevin Bullis

    Here’s my reply:
    Dear Kevin,
    You raise an interesting point. I’m familiar with Angela Belcher’s work, which is extremely elegant and important. I touch a little bit on this approach, in which evolution is used in a synthetic setting as a design tool, in my book “Soft Machines”. At the molecular level the use of some kind of evolutionary approach, whether executed at a physical level, as in Belcher’s work, or in computer simulation, seems to me to be unavoidable if we’re going to be able to exploit phenomena like self-assembly to the full.

    But I still don’t think it fundamentally changes the terms of the debate. I think there are two separate issues:

    1. is cell biology close to optimally engineered for the environment of the (warm, wet) nanoworld?

    2. how can we best use design principles learnt from biology to make useful synthetic nanostructures and devices?

    In this context, evolution is an immensely powerful design method, and it’s in keeping with the second point that we need to learn to use it. But even though using it might help us approach biological levels of optimality, one can still argue that it won’t help us surpass it.

    Another important point revolves around the question of what is being optimised – in Darwinian terms, what constitutes “fitness”. In our own nano-engineering, we have the ability to specify the quantity that is being optimised. In Belcher’s work, for example, the “fittest” species might be the one that binds most strongly to a particular semiconductor surface. This is quite a different measure of fitness from the ability to compete with bacteria in the environment, and what is optimal for our own engineering purposes is unlikely to be optimal for the task of competing in the environment.

    Best wishes,
    Richard
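    (An aside for readers who think in code: the point about engineer-defined fitness can be made concrete with a toy selection loop. Everything below – the target motif, the scoring function, the mutation scheme – is invented purely for illustration, and is not a model of Belcher’s actual protocol.)

        # Toy directed-evolution loop. The experimenter chooses what "fit"
        # means; nothing here rewards survival or replication in the wild.
        # All details are made-up illustrations.
        import random

        AMINO = "ACDEFGHIKLMNPQRSTVWY"
        TARGET = "HWKQPS"   # pretend surface-binding motif (assumed)

        def fitness(peptide):
            # Engineer-defined objective: similarity to the target motif.
            return sum(a == b for a, b in zip(peptide, TARGET))

        def mutate(peptide, rate=0.2):
            return "".join(random.choice(AMINO) if random.random() < rate else a
                           for a in peptide)

        pool = ["".join(random.choice(AMINO) for _ in range(len(TARGET)))
                for _ in range(200)]
        for generation in range(30):
            survivors = sorted(pool, key=fitness, reverse=True)[:20]  # "panning"
            pool = [mutate(p) for p in survivors for _ in range(10)]  # amplify
        best = max(pool, key=fitness)
        print(best, fitness(best))
        # The loop optimises exactly -- and only -- what fitness() measures.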

    To which Kevin responded:

    Richard,
    It does seem likely that engineering fitness would not lead to environmental fitness. Belcher’s viruses, for example, would seem to have a hard time in the real world, especially once coated in a semiconductor crystal. What if, however, someone made environmental fitness a goal? This does not seem unimaginable. Here at MIT engineers have designed sensors for the military that provide real-time data about the environment. Perhaps someday the military will want devices that can survive and multiply. (The military is always good for a scare. Where would science fiction be without thoughtless generals?)

    This leads to the question of whether cells have an optimal design, one that can’t be beat. It may be that such military sensors will not be able to compete. Belcher’s early work had to do with abalone, which evolved a way to transform chalk into a protective lining of nacre. Its access to chalk made an adaptation possible that, presumably, gave it a competitive advantage. Might exposure to novel environments give organisms new tools for competing? I think now also of invasive species overwhelming existing ones. These examples, I realize, do not approach gray goo. As far as I know we’ve nothing to fear from abalone. Might they suggest, however, that novel cellular mechanisms or materials could be more efficient?

    Kevin

    To which I replied:
    Kevin,
    It’s an important step forward to say that this isn’t going to happen by accident, but as you say, this does leave the possibility of someone doing it on purpose (careless generals, mad scientists…). I don’t think one can rule this out, but I think our experience says that for every environment we’ve found on earth (from what we think of as benign, e.g. temperate climates on the earth’s surface, to ones that we think of as very hostile, e.g. hot springs and undersea volcanic vents) there’s some organism that seems very well suited for it (and which doesn’t work so well elsewhere). Does this mean that such lifeforms are always absolutely optimal? A difficult question. But moving back towards practicality, we are so far from understanding how life works at the mechanistic level that would be needed to build a substitute from scratch, that this is a remote question. It’s certainly much less frightening than the very real possibility of danger from modifying existing life-forms, for example by increasing the virulence of pathogens.

    Best wishes,
    Richard

    Molecular devices and machines

    This month’s edition of Physics World (this is the monthly magazine of the UK’s Institute of Physics) has an interesting article on molecular devices and machines. The article (unfortunately only available in full to subscribers) is by Vincenzo Balzani and coworkers from the University of Bologna, and is a nice overview of the supramolecular chemistry approach to making nanomachines at a single molecule level. This is the group that published, in collaboration with Fraser Stoddart from UCLA, a report earlier this year about a molecular elevator.

    One minor but notable point about this article (and this is apparent from the teaser that non-subscribers can read from the link above): it has the warmest remarks about Drexler that I have seen in an article directed at scientists for many years.

    The future of nanotechnology; Drexler and Jones exchange letters

    The current edition of “Physics World” carries a letter from K. Eric Drexler, written in response to my article in the August edition, “The future of nanotechnology“. There is also a response from me to Drexler’s letter. Since the letters section of Physics World is not published online, I reproduce the letters here. The texts are as the authors wrote them; in the printed version they were lightly edited to conform with Physics World house style.

    From Dr K. Eric Drexler to Physics World.

    I applaud Physics World for drawing attention to the emerging field of artificial molecular machine systems. Their enormous productive potential is illustrated in biology and nature, where we observe molecular machine systems constructing molecular machinery, electronics, and digital information storage systems at rates measured in billions of tons per year. To understand the future potential of fabrication technologies (the foundation of all physical technology) we must examine the productive potential of artificial molecular machine systems. This field of enquiry has been a focus of my research since (Drexler 1981), which explored directions first suggested by (Feynman 1959).

    I was surprised to find that Professor Richard Jones, in describing “flaws in Drexler’s vision,” ignores my physical analysis of productive molecular machine systems. He instead criticizes the implied hydrodynamics of an artist’s fantastic conception of a “nanosubmarine” – a conception not based on my work. It is, I think, important that scientific criticisms address the scientific literature, not artistic fantasies.

    Professor Jones then offers a discussion of nanoscale surface forces, thermal motion, and friction that could easily leave readers with the impression that these present dire problems, which he implies have been ignored. But ignoring surface forces or thermal motion in molecular engineering would be like ignoring gravity or air in aeronautics, and physical analysis shows that well-designed molecular bearing interfaces can have friction coefficients far lower than those in conventional machines. These issues (and many others) are analyzed in depth, using the methods of applied physics, in Chapters 3, 5, and 10 of my book Nanosystems (Drexler 1992). Professor Jones fails to cite this work, noting instead only my earlier, popular book written for a general audience.

    I agree with Professor Jones regarding the importance of molecular machine systems and the value of learning from and imitating biological systems at this stage of the development of our field. Where we part company is in our judgment of the future potential of the field, of the feasibility of molecular machine systems that are as far from the biological model as a jet aircraft is from a bird, or a telescope is from an eye. I invite your readers to examine the physical analysis that supports this understanding of non-biological productive molecular machine systems, and to disregard the myths that have sprung up around it. (One persistent myth bizarrely equates productive molecular machines with gooey nanomonsters, and then declares these to be impossible contraptions that grab and juggle atoms using fat, sticky fingers.)

    There are many interesting research questions to address and technology thresholds to cross before we arrive at advanced artificial molecular machine systems. The sooner we focus on the real physics and engineering issues, building on the published literature, the sooner progress can be made. To focus on artists’ conceptions and myths does a disservice to the community.

    K. Eric Drexler, PhD
    Molecular Engineering Research Institute

    K E Drexler 1981 Molecular Engineering: An approach to the development of general capabilities for molecular manipulation. Proc. Nat. Acad. Sci. (USA) 78:5275–5278
    K E Drexler 1992 Nanosystems: Molecular Machinery, Manufacturing, and Computation (New York Wiley/Interscience)
    R Feynman 1959 There’s Plenty of Room at the Bottom, in D Gilbert (ed) 1961 Miniaturization (New York Reinhold)

    From Dr Richard A.L. Jones to Physics World.

    I am pleased that Dr Drexler finds so much to agree with in my article. Our goals are the same; our research aims to understand how to make molecular scale machines and devices. Where we differ is how best to achieve that goal. The article was necessarily very brief in its discussion of surface forces, friction and thermal motion, and my book [1] contains a much fuller discussion, which does explicitly refer to Drexler’s book “Nanosystems”. No-one who has read “Nanosystems” could imagine that Drexler is unaware of these problems, and it was not my intention in the article to imply that he was. Absurd images like the nanosubmarine illustration I used are widely circulated in popular writings about Drexlerian nanotechnology; they well illustrate the point that naïve extrapolations of macro-scale engineering to the nanoscale won’t work, but I’m happy to agree that Drexler’s own views are considerably more sophisticated than this. The point I was making was that the approach Drexler describes in detail in Nanosystems (which he himself describes in the words: “molecular manufacturing applies the principles of mechanical engineering to chemistry”) works within a paradigm established in macroscopic engineering and seeks to find ways to engineer around the special features of the nanoworld. In contrast to this, the design principles adopted by cell biology turn these special features to advantage and actively exploit them, using concepts such as self-assembly and molecular shape change that have no analogue in macroscopic engineering. Again, Dr Drexler and I are in agreement that in the short term biomimetic nanotechnology will be very fruitful and should be strongly pursued. We differ about the likely long term trajectory of the technology, but here, experiment will decide. Such is the unpredictable nature of the development of technology that I rather suspect that the outcome will surprise us both.

    [1] Soft Machines, R.A.L. Jones, OUP (2004)

    Feel the vibrations

    The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?
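    To put a rough number on that wobbliness, here is a minimal equipartition estimate, caricaturing a collective protein mode as a harmonic spring; the stiffness values are order-of-magnitude assumptions of mine.

        # Equipartition estimate of thermal wobble: <x^2> = kT / kappa for a
        # harmonic mode of stiffness kappa. Stiffnesses are assumed,
        # order-of-magnitude values for soft-to-stiff protein modes.
        k_B, T = 1.380649e-23, 300.0        # J/K, K
        for kappa in (0.01, 0.1, 1.0):      # N/m (assumed)
            x_rms = (k_B * T / kappa) ** 0.5
            print(f"kappa = {kappa:5.2f} N/m -> rms displacement ~ {x_rms * 1e9:.2f} nm")
        # Soft modes wobble by a few angstroms up to ~1 nm at 300 K --
        # displacements comparable to the size of a binding site.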

    It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

    Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation both of molecular motors and of protein signalling and regulation.

    It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. Even this image, though, doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low frequency collective vibrations are shifted even lower in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
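    Here is a minimal sketch of the entropy bookkeeping behind that argument, assuming classical harmonic modes and an invented, but plausible-looking, degree of softening.

        # Entropy gain when low-frequency modes soften on ligand binding.
        # For a classical harmonic mode S = k_B*(1 + ln(kT/(hbar*w))), so a
        # shift w -> w' contributes dS = k_B*ln(w/w') per mode. The mode
        # count and softening factor are assumptions for illustration.
        import math

        k_B, T, N_A = 1.380649e-23, 300.0, 6.02214076e23
        n_modes = 10        # collective modes that soften (assumed)
        softening = 0.90    # each frequency drops to 90% of its value (assumed)

        dS = n_modes * k_B * math.log(1.0 / softening)   # J/K per molecule
        print(f"T*dS = {T * dS * N_A / 1000:.1f} kJ/mol favouring binding")
        # ~2.6 kJ/mol (roughly 1 kT per molecule): small for each mode, but
        # collectively enough to help tip the balance toward binding.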

    A highly simplified theoretical model of allosteric binding, solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought about how one might design synthetic systems using the same principles.

    There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you lower the temperature of a protein far enough there’s a temperature – a glass transition temperature – at which these low frequency vibrations stop working. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from rather a difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines such as the light driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

    The important conclusion emerging from all this is that protein-based machines don’t work despite their floppiness and their constant random flexing and vibrations – they work because of them. This is a lesson that designers of artificial nanomachines will need to learn.

    Did Smalley deliver a killer blow to Drexlerian MNT?

    The most high profile opponent of Drexlerian nanotechnology (MNT) is certainly Richard Smalley; he’s a brilliant chemist who commands a great deal of attention because of his Nobel prize, and his polemics are entertainingly written. He has a handy way with a soundbite, too, and his phrases “fat fingers” and “sticky fingers” have become a shorthand expression of the scientific case against MNT. On the other hand, as I discussed below in the context of the Betterhumans article, I don’t think that the now-famous exchange between Smalley and Drexler delivered the killer blow against MNT that sceptics were hoping for.

    For my part, I am one of those sceptics; I’m convinced that the MNT project as laid out in Nanosystems will be very much more difficult than many of its supporters think, and that other approaches will be more fruitful. The argument for this is covered in my book Soft Machines. But, on the other hand, I’m not convinced that a central part of Smalley’s argument is actually correct. In fact, Smalley’s line of reasoning, if taken to its conclusion, would imply not only that MNT was impossible, but that conventional chemistry is impossible too.

    The key concept is the idea of an energy hypersurface embedded in a many-dimensional hyperspace, the dimensions corresponding to the degrees of freedom of the participating atoms in the reaction. Smalley argues that this space is so vast that it would be impossible for a robot arm or arms to guide the reaction along the correct path from reactants to products. This seems plausible enough at first sight, until one pauses to ask: what, in an ordinary chemical reaction, guides the system through this complex space? The fact that ordinary chemistry works – one can put a collection of reactants in a flask, apply some heat, and remove the key products (hopefully your desired product in a respectable yield, with maybe some unwanted products of side-reactions as well) – tells us that in many cases the topography of the hypersurface is actually rather simple. The initial state of the reaction corresponds to a deep free energy minimum, the product of each reaction corresponds to another, similarly deep minimum, and connecting these two wells is a valley; this leads over a saddle-point, like a mountain pass, that defines the transition state. A few side-valleys correspond to the side-reactions. Given this simple topography, the system doesn’t need a guide to find its way through the landscape; it is strongly constrained to take the valley route over the mountain pass, with the probability of it taking an excursion to climb a nearby mountain being negligible. This insight is the fundamental justification of the basic theory of reaction kinetics that every undergraduate chemist learns. Elementary textbooks feature graphs with energy on one axis and a reaction coordinate along the other; the graph shows a low energy starting point, a low energy finishing point, and an energy barrier in between. This plot encapsulates the implicit, and almost always correct, assumption that out of the myriad of possible paths the system could take through configuration space, the only one that matters is the easy way, along the valley and over the pass.
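    A few numbers make the point about the pass being the only route that matters. Treating the reaction as motion along a single reaction coordinate, transition-state theory gives the rate as k = (k_B T/h) exp(−ΔG‡/k_B T); the barrier heights below are assumed purely for illustration.

        # Transition-state theory rate over a single free-energy barrier:
        # k = (k_B*T/h) * exp(-dG / (k_B*T)). Barrier heights are assumed.
        import math

        k_B, h, T, N_A = 1.380649e-23, 6.62607015e-34, 298.0, 6.02214076e23
        for barrier_kJmol in (40, 80, 120):          # assumed barriers
            dG = barrier_kJmol * 1000 / N_A          # J per molecule
            rate = (k_B * T / h) * math.exp(-dG / (k_B * T))
            print(f"barrier {barrier_kJmol:3d} kJ/mol -> rate ~ {rate:.1e} per second")
        # Each extra 40 kJ/mol costs about seven orders of magnitude in rate:
        # the system overwhelmingly takes the lowest pass, and excursions up
        # nearby mountains contribute essentially nothing.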

    So if in ordinary chemistry the system can navigate its own way through hyperspace, what’s different in the world of Drexlerian mechanochemistry? Constraining the system by having the reaction take place on a surface and spatially localising one of the reactants will simplify the structure of the hyperspace by reducing the number of degrees of freedom. This makes life easier, not harder – surfaces of any kind generally have a strong tendency to have a catalytic effect – but nonetheless, the same basic considerations apply. Given a sensible starting point and a sensible desired product (i.e. one defined by a free energy minimum) chemistry teaches us that it is quite reasonable to hope for a topographically straightforward path through the energy landscape. As Drexler says, if the pathway isn’t straightforward you need to choose different conditions or different targets. You don’t need an impossible number of fingers to guide the system through configuration space, for the same reason that you don’t need fingers in conventional chemistry: the structure of configuration space itself guides the way the system searches it.

    This is a technical and rather abstract argument. As always, the real test is experimental. There’s some powerful food for thought in the report on a Royal Society Discussion Meeting, ‘Organizing atoms: manipulation of matter on the sub-10 nm scale’, which was published in the June 15 issue of Philosophical Transactions. Perhaps the most impressive example of a chemical reaction induced by physically moving individual reactants into place with an STM is the synthesis of biphenyl from two iodobenzene molecules (Hla et al., PRL 85, 2777 (2001)). To use their concluding words: “In conclusion, we have demonstrated that by employing the STM tip as an engineering tool on the atomic scale all steps of a chemical reaction can be induced: Chemical reactants can be prepared, brought together mechanically, and finally welded together chemically.” Two caveats need to be added: firstly, the work was done at very low temperature (20 K), presumably so the molecules didn’t run around too much as a result of Brownian motion. Secondly, the reaction wasn’t induced simply by putting the fragments together into physical proximity; the chemical state of the reactants had to be manipulated by the injection and withdrawal of electrons from the STM tip.

    Nonetheless, I rather suspect that this is exactly the sort of reaction that one would say wasn’t possible on the basis of Smalley’s argument.

    (Links in this post probably need subscriptions).