The mechanosynthesis debate

Now that the 130 or so people who have so far downloaded the Moriarty/Phoenix debate about mechanosynthesis have had a chance to digest it, here are some thoughts that occur to me on re-reading it. This is certainly not a summary of the debate; rather, these are just a few of the many issues that emerge that I think are important.

On definitions. A lot of the debate revolves around questions of how one actually defines mechanosynthesis. There’s an important general point here – one can have both narrow definitions and broad definitions, and there’s a clear role for both. For example, I have been very concerned in everything I write to distinguish between the broad concept of radical nanotechnology, and the specific realisation of a radical nanotechnology that is proposed by Drexler in Nanosystems. But we need to be careful not to imagine that a finding that supports a broad concept necessarily also validates the narrower definition. So, to use an example that I think is very important, the existence of cell biology is compelling evidence that a radical nanotechnology is possible, but it doesn’t provide any evidence that the Drexlerian version of the vision is workable. Philip’s insistence on a precise definition of mechanosynthesis, as distinct from the wider class of single-molecule manipulation experiments, stems from his strongly held view that the latter don’t yet provide enough evidence for the former. Chris, on the other hand, is in favour of broader definitions, on the grounds that if the narrowly defined approach doesn’t work, then one can try something else. This is fair enough if one is prepared to be led wherever the experiments take you, but I don’t think it’s consistent with having a very closely defined goal like the Nanosystems vision of diamondoid-based MNT. If you let the science dictate where you go (and I don’t think you have any choice but to do this), your path will probably take you somewhere interesting and useful, but it’s probably not going to be the destination you set out towards.

On the need for low-level detail. The debate makes very clear the distinction between the high-level systems approach exemplified by Nanosystems and by the Phoenix nanofactory paper, and the need to work out the details at the “machine language” level. “Black-boxing” the low-level complications can only take you so far; at some point one needs to work out what the elementary “machine language” operations are going to be, or even whether they are possible at all. Moreover, the nature of these elementary operations can’t always be divorced from the higher-level architecture. A good example comes from the operation of genetics, where the details of the interactions between DNA, RNA and proteins mean that the distinction between hardware and software that we are used to can’t be sustained.

On the role of background knowledge from nanoscience. A widely held view in the MNT community is that very little research has been done in pursuit of the Drexlerian project since the publication of Nanosystems. This is certainly true in the sense that science funding bodies haven’t supported an overtly Drexlerian research project; but it neglects the huge amount of work in nanoscience that has a direct bearing, in detail, on the proposals in Nanosystems and related work. This ranges from the centrally relevant work done by groups (including the Nottingham group, and a number of others around the world) which are actively developing the manipulation of single molecules by scanning probe techniques, to the important background knowledge accumulated by very many groups in areas such as surface and cluster physics and chemical vapour deposition. This (predominantly experimental) work has greatly clarified how the world at the nanoscale works, and it should go without saying that theoretical proposals that aren’t consistent with the understanding gained in this enterprise aren’t worth pursuing. Commentators from the MNT community are scornful, with some justification, of nanoscientists who make pronouncements about the viability of the Drexlerian vision of nanotechnology without having acquainted themselves with the relevant literature, for example by reading Nanosystems. But this obligation to read the literature goes both ways.

I think the debate has moved us further forward. I think it is clear that the Freitas proposal that sparked the discussion off does have serious problems that will probably prevent its implementation in its original form. But the fact that a proposal concrete enough to sustain this detailed level of criticism has been presented is itself immensely valuable and positive, and it will be interesting to see what emerges when the proposal is refined and further scrutinised. It is also clear that, whatever the ultimate viability of this mechanosynthetic route to full MNT turns out to be (and I see no grounds to revise my sceptical position), there’s a lot of serious science to be done, and claims of a very short timeline to MNT are simply not credible.

25 thoughts on “The mechanosynthesis debate”

  1. I thought an interesting comment from Philip was that he does not think it will be possible within 5 years to do a simple mechanosynthetic process consisting of a hydrogen abstraction followed by a carbon deposition, repeated several times. Just finding the spot again will be hard with today’s technology, even if you could do one of the reactions.
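
    Schematically, the primitive operation loop Philip has in mind would look something like the sketch below – this is toy pseudocode of my own, and every name in it is invented for illustration rather than taken from any real instrument’s API, but it shows why re-finding the spot dominates the problem.

    ```python
    # Toy sketch of the primitive mechanosynthesis loop discussed above.
    # All names here are hypothetical; the point is that step (1),
    # re-registering the site after drift, is the hard part.

    def build_row(scope, sites, abstraction_tip, deposition_tip):
        """Attempt one H-abstraction + C-deposition cycle per target site."""
        for site in sites:
            # (1) Re-register: thermal drift and piezo creep make stored
            #     coordinates stale, so the site must be re-imaged and
            #     re-located to sub-angstrom precision before each reaction.
            actual = scope.relocate(site, tolerance_nm=0.05)

            # (2) Hydrogen abstraction from the passivated surface site.
            scope.load_tool(abstraction_tip)
            scope.approach_and_react(actual)

            # (3) Carbon dimer deposition at the now-reactive site.
            scope.load_tool(deposition_tip)
            scope.approach_and_react(actual)

            # (4) Verify the product by re-imaging; without error
            #     correction, failures compound over a long sequence.
            if not scope.verify(actual, expected="C2_dimer"):
                raise RuntimeError(f"reaction failed at site {site}")
    ```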

    There’s been some discussion of an X Prize for nanotech, some kind of reward for solving a hard problem. Maybe this would be a good example. With a million bucks on the line I’ll bet we’d see more creative thinking along the lines of what Freitas attempted. Maybe that one didn’t work, but if enough other people were working on the problem it might well be solvable. This could be a good milestone.

  2. Speaking of definitions, which one are you using when you say “whatever the ultimate viability of this mechanosynthetic route to full MNT turns out to be” with regard to ‘full MNT’? Is this the ‘universal assembler’ concept? Hydrogenated diamondoid/buckytube/graphene mechanochemical capabilities?

    In short, I agree there’s some concern regarding what the definitions are in the debate. *wry grin* I’d be strongly against trying to limit the debate to a single term or even a single set of definitions – things are still far too preliminary IMO – but I would suggest a little more care with how terms are defined and used in the debate. I don’t mean to single out just you, Dr Jones – this applies throughout the “nanotech revolution” (which seems to include a lot of people finding interesting new ways to use the nanotech buzzword to sell pre-existing tech).

    -John
    (Gawd I get wordy – sorry ’bout that)

  3. ON TIMELINES

    CRN has said that you could have a diamond nanofactory as soon as 2010, and will have them by 2025. If diamond nanofactories are possible, I don’t see the 2020–2025 time frame as unreasonable.

    But for a moment let’s assume that diamond nanofactories (and their cousins, graphite nanofactories) have too many practical/implementation problems. In the 2010–2025 time frame, could we expect to see some kind of soft or bio-nanotech combined with 3-D fabricators? Or some sort of general-purpose exponential manufacturing system? I think the answers are: yes, almost certainly 3-D fabricators will use what Dr. Jones calls evolutionary nanotech. Already, scientists have combined the culturing of skin cells with ink-jetting technology to make a precisely shaped “artificial” skin that can be used for burn victims. There is a very high probability that some kind of exponential manufacturing system will be developed. Tihamer Toth-Fejel, Matt Moses and the ever-busy Robert Freitas have been doing some real groundbreaking work on Kinematic Cellular Automata that can self-replicate when provided with simple, purely structural parts, energy, and the right programming. No mechanochemistry needed.
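
    To put a rough number on “exponential” (my own back-of-envelope arithmetic, not CRN’s): a replicating system with doubling time \(\tau\) grows as

    \[
    N(t) = N_0 \, 2^{t/\tau}, \qquad t = \tau \log_2 \frac{N}{N_0},
    \]

    so going from one unit to a million takes only \(\log_2 10^6 \approx 20\) doublings – about five months even with a leisurely one-week doubling time.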

  4. Hal: would a million dollars help? I don’t know. What I do think needs to be stressed is that this kind of work, pushing the limits of what can be done with single-molecule manipulation, is already going on in a number of places around the world, and it is getting support. Have another look, for example, at my post about the Sheffield/Leeds nanorobotics project, which Philip Moriarty is involved in, and which is just getting going with $4.4 million of funding. The press release is written in a rather non-sensational way, but I’m sure you can see the ambition of what they are trying to do. Similar work is going on in many other places elsewhere in Europe and in the Far East. Maybe even in a few places in the USA…

    John: hoist by my own definitional petard already! When I say MNT, I mean the diamondoid-based radical nanotechnology outlined in Nanosystems. The way I use definitions of all the different things that get called nanotechnology is outlined, for example, in this post.

    Jim: I agree with you that over the next 20 years we should expect to see some exciting soft nanotechnology. I also agree that ink-jet printing is a very neat technology that’s soon going to be expanded into many other areas (though it’s going to be hard for it to make features below 10 microns or so). For example here in the UK, in addition to the example you cite, Cambridge Display Technology already has a working pilot plant for ink-jetting polymer light-emitting displays. I’m not so sure about exponential manufacturing, though – I need to be convinced that there’s going to be some application of it that’s economically competitive with incremental improvements on existing manufacturing methods (this argument does not of course apply to applications in space, which is why, I guess, its funding comes from NASA).

  5. There is an article on Rasmussen at Los Alamos, who proposes an entirely synthetic biology using something called PNA as the information-carrying molecule. PNA is like DNA, except that the sugar-phosphate backbone is replaced by a peptide-like backbone built from amino acids. According to the article, there are over 100 labs working on “synthetic” biology (or artificial life). This kind of biomimetic approach seems to be the sensible way to develop a comprehensive “soft” or wet nanotechnology capability over the next 20 years or so.

    It seems clear to me that the biomimetic approach is the way to go. I suspect that several variants of wet nanotechnology based on such synthetic biology will be developed in the next 20 years or so.

  6. On scanning probe capabilities:
    1) At one point I claimed that AFMs could image without touching the surface, and called this “tapping mode.” It’s true that tapping mode doesn’t work that way. But this was a simple terminological mistake; it’s called “non-contact mode.” It’s a shame this simple mistake wasn’t simply corrected.

    2) Since the debate, I’ve run across a company that is publishing papers about the development of a two-tip AFM system in which each tip has six degrees of freedom and the tips can be repeatedly touched together. See Xidex and this PDF of theirs for more information.

    I didn’t realize during the debate that Philip’s goal was to make me give ground, rather than to explore what was possible. So I suspect now that I gave ground too easily, and I remain unsure of whether the outlook for Freitas’s proposal (and variants thereof) is as bleak as the debate makes it look.

    Chris

  7. Chris –

    The system to which you refer is very interesting, but: (a) have a look at http://www.softmachines.org/wordpress/index.php?p=70#comment-909 for a brief discussion of dual-probe systems in a response I posted to John B’s comments; (b) the system you describe doesn’t have closed-loop control; and (c) the problems with regard to tip radius of curvature that Hal Finney and I have discussed recently (see http://www.softmachines.org/wordpress/index.php?p=70#comment-916) remain. The PDF document to which you refer states that tips with a radius of curvature of 10 nm and a cone angle of 10 degrees have been used.

    Rob Freitas and I will discuss (via closed e-mail correspondence) the science underlying his (and other) proposals for mechanosynthesis. See the quotation at the end of Richard’s post above (“Is mechanosynthesis feasible? The debate continues.”).

    Philip

  8. Richard,
    I think there are a couple of ways to get around the 10-micron detail limit for ink-jetting:
    1.) You can jet out objects that themselves have a much finer level of detail – for example, skin cells, or pigment crystals (50–200 nm in size).
    2.) Potentially, if you started with a pre-patterned substrate, you could jet out a drop containing dissolved reactants, then evaporate the solvent, and repeat the process with different reactants. You might be able to build complex polymers at specific locations on the substrate.

    Chris and Philip,

    Thanks for the links to the dual-probe microscopes – very neat developments.

    I would like to point out that Xidex is planning on growing a carbon nanotube at the end of the probe tip. That should get the tip radius down to ~1 nanometer.

    Getting back to the diamond-building proposal: if you redesigned the “top” of the dimer deposition tool to form covalent bonds with the tip of the carbon nanotube, you would not have the problem of growing the “handle” by carbon vapor deposition. (The problem I see with this approach is that it makes the synthesis of the dimer deposition tool more difficult – maybe much, much more difficult.)

  9. Jim,

    On carbon nanotube tips: as mentioned in my correspondence with Hal Finney, even with a 1 nm radius of curvature there are issues of steric hindrance for some of the reactions described in “Nanosystems” (an alternative take on the ‘fat fingers’ argument).
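
    To put rough numbers on this (an idealised estimate of my own, treating the tip apex as a hemisphere of radius \(R\) and ignoring the cone behind it): the apex surface lies within a height \(h\) of its lowest point over a lateral radius of roughly

    \[
    r \approx \sqrt{2 R h},
    \]

    so for \(R = 1\) nm and \(h = 0.2\) nm (about a bond length), \(r \approx 0.6\) nm. Even a 1 nm tip therefore presents a disc more than a nanometre across at bonding distances – several lattice spacings on a diamond (100) surface – and the body of the tip crowds the sites adjacent to the one being reacted.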

    “If you redesigned the ‘top’ of the dimer deposition tool to form covalent bonds with the tip of the carbon nanotube” – here we’re again trying to engineer ‘around’ the fundamental physics and chemistry that dictate the bonding geometry at the end of the tip (the nanotube). I’m of the opinion that you’ll have rather a narrow parameter space to work with in terms of the bond energetics at the end of the tube but, having said that, I’m not entirely up to date on the literature regarding nanotube functionalisation. I have a meeting with an expert in nanotube chemistry later today and I’ll get his views on the viability of functionalising the end of the tube to attempt mechanochemistry. It’s certainly an interesting idea.

    Note also that the dual-probe systems described in previous posts do not work in ultrahigh vacuum (to the best of my knowledge) and thus don’t have the correct operating environment in which to implement mechanosynthesis à la Freitas et al.’s proposals.

    Best wishes,

    Philip

  10. Jim, I think you’re right on both counts about ink-jetting. For example, this article in Science, from Sirringhaus’s group in Cambridge, demonstrates the combination of ink-jetting with substrate surface-energy patterning using a self-assembled monolayer to make polymer FETs with 5-micron channel lengths. And if you use as the ink-jetted material something that itself self-assembles on a nanoscale, like a block copolymer, you can achieve nanoscale precision in a hierarchical way (this is a useful trick because it’s a basic fact of physics that you can’t get true long-range order in two dimensions in a self-assembling system). This sort of approach is being explored by, among others, Rick Register’s group at Princeton and Georg Krausch’s group at Bayreuth (they often use soft stamping rather than ink-jetting, but the principle is the same).

  11. A clarification of my earlier comment:

    When I said that the specific dual-probe systems discussed in previous posts “do not work” in ultrahigh vacuum (UHV), I meant that they’re currently not set up to work in this environment (and therefore further technical development is required), rather than that there is anything fundamental preventing their operation in UHV.

    Philip

  12. perhaps one of the fundamental pillars of software engineering – that you can add abstraction layers while disregarding lower-level details (i.e. there are no information leaks between layers) – is just not applicable here. it would be quite ironic if this turned out to be the same obstacle holding back the other great promise of revolutionary change (AI).
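
    a toy illustration of the kind of leak i mean – everything here is invented (a python sketch, nothing to do with real mechanosynthesis codes):

    ```python
    # A "leaky" abstraction layer. The high-level API promises that
    # place() calls are independent; the low-level model keeps hidden
    # state (accumulated local strain) that breaks that promise.

    class Surface:
        def __init__(self):
            self.strain = {}  # hidden low-level state, per site

        def place(self, site):
            """High-level contract: placing at `site` always succeeds."""
            # Low-level reality: each reaction perturbs the neighbours...
            for neighbour in (site - 1, site + 1):
                self.strain[neighbour] = self.strain.get(neighbour, 0) + 1
            # ...and accumulated strain makes a later reaction fail.
            if self.strain.get(site, 0) > 1:
                raise RuntimeError(f"site {site}: low-level detail leaked up")

    s = Surface()
    s.place(10)
    s.place(12)
    s.place(11)  # fails: both neighbours have reacted, so strain = 2
    ```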

  13. “perhaps one of the fundamental pillars of software engineering – that you can add abstraction layers while disregarding lower-level details (i.e. there are no information leaks between layers) – is just not applicable here.”

    Thanks for that post, David. This is certainly my (and Richard’s) assertion.

    Best wishes,

    Philip

  14. The assertion that abstraction is not useful in this context seems unsupported. Abstractions are useful engineering tools that allow us to define requirements for subsystems. In a sense the transistor is a layer abstracted from the “subsystem” of quantum mechanics. I agree with the general direction of this debate, in that the devil is in the details, but throwing a tool like abstraction out the window seems… odd.

  15. No-one is asserting that abstraction isn’t useful! The question isn’t whether it’s useful, it’s whether, in this context, it’s actually possible.

  16. Why wouldn’t abstraction be possible? Even a complex system has ranges of predictable behavior, and regions where it more or less ignores some inputs. Certainly some things, like mass, can be abstracted. Above a few nm, many mechanical/structural properties can be abstracted. The claim that no abstraction is possible surely requires some evidence before it’s considered as a potential showstopper.

    Chris

  17. the point (imho) is whether or not abstraction is applicable as a top-down design mechanism. the possibility exists that a top-down approach chooses the wrong abstractions. a wrong abstraction is one that is either physically infeasible or one that spans a range that exhibits leaks.

    in this scenario the top-down approach must be substituted by a bottom-up one, where abstractions are chosen based on experiment, not design.

    anyway, just an unqualified opinion – i’m no expert in nanotech.

  18. Chris, I agree entirely with David on this one; some abstraction may well turn out to be possible, but one can’t assume this at the outset.

    As for whether we think these difficulties are a show-stopper or not, I don’t actually think I said that. I rather suspect I might be more optimistic than you about the degree to which it will be possible to learn to live with, and exploit, the complexity of these systems. I think it’s going to be worth following the progress of systems biology very carefully over the next few years, and watching for the ways in which insights from chemical and cell computing get applied back into computer science.

  19. This is a place where people are talking past each other again, it seems to me.

    Chris IMO is saying that some very useful work can and should be done assuming there’s underlying science available. Given the speed of regulation and human appreciation of new technologies, this strikes me as more true than not.

    Philip, Richard and others (still IMO) are saying there needs to be work done on those underpinnings, narrowing them down to ‘real’ reactions that can be demonstrated and reproduced in labs. This is also extremely useful, as it WILL need to be addressed, prior to many of the problems and promises that the broad view offers.

    While Chris’s view is unproven at the scale that Richard, Philip, & others are addressing it, the approach is – as Philip has commented in the past – quite large and contains a large potential solution space. Is it guaranteed that some of the specifics Chris is postulating will come 100% true? Nope. Is it reasonable to assume that SOME percentage of these capabilities might come to be? Well, that’s a personal judgement, isn’t it?

    Nanosystems and the concept of mechanochemistry have not been ‘disproven’, so far as I know. There are many problems to address at the mechanochemical – or whatever alternative atomically precise method! – level, without a doubt. But are there any solid limitations that prevent the possibility of new tools and new designs solving these problems?

    That’s a serious question, by the way – obviously there are problems with many of the suggested implementations thus far (all the ones that I’m aware of that have had serious scrutiny). But do these problems indicate flat-out failure, or is there a missing breakthrough or breakthroughs? How much new science or engineering needs to be developed before such capabilities as Chris proposes become possible?

    -John

  20. John,

    “While Chris’s view is unproven at the scale that Richard, Philip, & others are addressing it, the approach is – as Philip has commented in the past – quite large and contains a large potential solution space.”

    This assertion is rather different from what I’ve argued. While *initially* it appears that the parameter space is quite wide, the constraints in terms of material parameters rapidly narrow that space. The statement from my first letter to Chris is as follows:

    “So, far from delivering the ability to synthesise ‘most arrangements of atoms that are consistent with physical law’ or to manufacture ‘almost any… product… that is consistent with physical and chemical law’, an extremely judicious choice of materials system, possible intermediate/transition states, diffusion barriers, and symmetry is required to attempt even the initial, most basic and faltering steps in molecular manufacturing.”

    “Nanosystems and the concept of mechanochemistry have not been ‘disproven’, so far as I know.” – from the preceding post.

    Will a *small* subset of systems support a number of mechanosynthesis steps? I believe that, with an appropriate choice of materials/parameters (e.g. H:C(100) or H:Si(100)), yes, it will be possible to carry out a limited number of mechanosynthesis steps. The key issues are:

    (i) Will universal assemblers (involving either an assembler unit that can pick up different tools *or* a family of assemblers where each unit carries out a different reaction) be able to handle all the elements in the periodic table (or all the reactive molecules synthesised by chemists, as Drexler suggests in “Engines of Creation”)? No – the viable parameter space is too narrow. (For example, we need to ensure a combination of close-to-zero dangling bond density, high diffusion barriers, and directional covalent bonds).

    (ii) The surface physics and chemistry that dictate our ‘low level’ mechanosynthesis steps determine which types of surface property/geometry we will have at the higher levels of ‘abstraction’ (and, thus, a complete decoupling of levels of abstraction is simply not possible).

    (iii) Error correction and reliability narrow down the viable parameter space still further. In addition, defect densities must be controlled and eliminated. (On this second point, note that at any finite temperature there is a free energy cost associated with a completely defect-free surface, as compared to a surface with a non-zero density of defects. This is manifest, for example, in the expression for the Gibbs free energy – see the sketch below.)
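
    (For concreteness – this is textbook statistical mechanics, nothing specific to diamond: distributing \(n\) non-interacting defects of formation energy \(E_f\) among \(N\) surface sites changes the Gibbs free energy by

    \[
    \Delta G = n E_f - k_B T \ln \binom{N}{n},
    \]

    which is minimised at the familiar equilibrium concentration \(n^*/N \approx e^{-E_f / k_B T}\). A perfectly defect-free surface therefore always sits above the free-energy minimum at any finite temperature.)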

    …but I realise that I’m simply repeating myself at this stage, and that this discussion could go round and round in circles ad nauseam. Until the feedback loop between experiment and theory is closed, we simply won’t make any progress. I therefore look forward to discussing the viability of proposed routes to mechanosynthesis with Rob Freitas.

    Best wishes,

    Philip


  21. Philip, what’s the importance of the free energy cost of a defect-free surface? Any interesting system will have high energy relative to the ground state anyway. And in any system with high barriers to diffusion or long-range reconstruction, the presence or absence of defects in *this* square nanometer won’t affect the stability of *that* square nanometer. This free energy has got to be small in comparison with covalent bond energies and barriers – which means that, in a system that’s not already at the edge of instability, the energy penalty of being defect-free will not push it over the edge. It’s at most a relatively small correction factor. Or am I missing your point?
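
    To make the comparison explicit, rough numbers (my own order-of-magnitude estimates, not from the thread): at \(T = 300\) K, \(k_B T \approx 0.026\) eV, while a C–C bond is roughly \(3.6\) eV. Even for a modest defect formation energy \(E_f = 1\) eV, the equilibrium defect fraction is

    \[
    \frac{n^*}{N} \approx e^{-E_f / k_B T} = e^{-1/0.026} \sim 10^{-17},
    \]

    so the free-energy gain from admitting defects is utterly negligible compared with covalent energetics.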

    Chris
