Re-reading Feynman – Part 3

As I discussed in part 1 of this series, Richard Feynman’s lecture “There’s plenty of room at the bottom” is universally regarded as a foundational document for nanotechnology. As people argue about what nanotechnology is and might become, and different groups claim Feynman’s posthumous support for their particular vision, it’s worth looking closely at what the lecture actually said. In part 2 of this series, I looked at the first half of Feynman’s lecture, dealing with writing information on a very small scale, microscopy with better than atomic resolution, and the miniaturisation of computers. In the second part of the lecture, Feynman moved on to discuss the possibilities, first, of making ultra-small machines and ultimately of arranging matter on an atomic level.

  • Small machines

    Feynman enters this subject by speculating about how one might make miniaturised computers. Why, he asks, can’t we simply make them in the same way as we make big ones? (Recall that at the time he was writing, computers filled rooms). Why can’t we just shrink a machine shop: “Why can’t we drill holes, cut things, solder things, stamp things out, mold different shapes all at an infinitesimal level?”

    The first problem Feynman identifies is the issue of tolerance – a piece of mechanical engineering, like a car, only works because its parts can be machined to a certain tolerance, which he guesses to be around 0.4 thousandths of an inch (this seems plausible for a ’50s American gas guzzler, but I suspect that crucial components in modern cars do better than this). He argues that the ultimate limit on tolerance must derive from the inevitable graininess of atoms, and from this deduces that one can shrink mechanical engineering by a factor of about 4000. This implies that a one-centimeter component can be shrunk to about 2.5 microns. Other problems that come with scale include the fact that Van der Waals forces become important, so everything sticks to everything else, and that we can’t use heat engines, because heat diffuses away too quickly. On the other hand, lubrication might get easier for the same reason. So we’ll need to do some things differently on small scales: “There will be several problems of this nature that we will have to be ready to design for.”
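    The arithmetic behind that factor of 4000 is easy to check. A minimal sketch in Python – my own illustration, assuming a round-number atomic spacing and that the scaled-down tolerance can be no finer than roughly ten atoms’ worth of graininess:

        # Rough check of Feynman's scaling argument (illustrative numbers only)
        INCH = 25.4e-3                       # metres
        tolerance = 0.4e-3 * INCH            # ~0.4 thousandths of an inch, in metres
        atom_spacing = 0.25e-9               # rough interatomic spacing, metres
        min_tolerance = 10 * atom_spacing    # assume ~10 atoms is the finest useful tolerance

        shrink = tolerance / min_tolerance
        print(f"shrink factor ~ {shrink:.0f}")                    # ~4000
        print(f"1 cm part -> {1e-2 / shrink * 1e6:.1f} microns")  # ~2.5 microns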

    How are we going to make these devices? Feynman leaves the question open, but he makes one suggestion, recalling the remote handling devices people build to handle radioactive materials, levers that remotely operate mechanical hands: “Now, I want to build much the same device—a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the “hands” that you ordinarily maneuver. So you have a scheme by which you can do things at one-quarter scale anyway—the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller.” And then you use the littler hands to make hands that are even smaller, and so on, until you have a set of machine tools at 1/4000th scale. The need to refine the accuracy of your machines at each stage of miniaturisation makes this, as Feynman concedes, “a very long and very difficult program. Perhaps you can figure a better way than that to get down to small scale more rapidly.”
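    As a toy calculation of my own, the number of quartering stages needed is small:

        # How many successive 4:1 pantograph stages reach ~1/4000 scale?
        from math import ceil, log
        stages = ceil(log(4000) / log(4))
        print(stages, 4 ** stages)   # 6 stages -> a factor of 4096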

    Reading this with the unfair benefit of hindsight, two things strike me. We do now have mechanical devices that operate on the length scales Feynman is envisioning here, upwards of a few microns. These micro-electromechanical systems (MEMS) are commercialised, for example, in the accelerometers that activate car airbags. For an example of a company active in this field, take a look at Crossbow Technology. But the methods by which these MEMS devices are made are very different to the scheme Feynman had in mind; just as in the case of computer miniaturisation, it’s the planar processes of photolithography and etching that allow one to get down to this level of miniaturisation in a single step.

    Returning to Feynman’s idea of the master-slave system in which you input a large motion, and output a much smaller one, we do now have available such a device which can effectively get us not just to the microscale, but to the nanoscale, in a single step. The principle this depends on – the use of piezoelectricity to convert a voltage into a tiny change in dimensions of a particular type of crystal – was well known in 1960, and the material that proves to do the job best – the ceramic lead zirconate titanate (PZT) – had been on the market since 1952. I don’t know when or where the idea of using this material to make controlled, nanoscale motions was first developed, but between 1969 and 1972 David Tabor, at the Cavendish Laboratory in Cambridge, was using PZT for sub-nanometer positional control in the surface forces apparatus which he developed with his students Winterton and Israelachvili. Most famously, PZT nano-actuators were the basis for the scanning tunneling microscope, invented in 1981 by the Nobel laureates Binnig and Rohrer, and the atomic force microscope invented a few years later. As we’ll see, it’s this technology that has allowed the realisation of Feynman’s vision of atom-by-atom control.
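    To get a feel for why piezoelectric actuation reaches the nanoscale in a single step: the piezoelectric coefficient of PZT ceramics is of order a few hundred picometres per volt, so modest, easily controlled voltages translate directly into sub-nanometre displacements. A rough illustration (the coefficient below is an assumed order-of-magnitude figure, not a datasheet value):

        # Order-of-magnitude piezo displacement for an assumed coefficient
        d33 = 400e-12    # metres per volt; PZT ceramics are typically a few hundred pm/V
        for volts in (0.1, 1.0, 10.0):
            print(f"{volts:5.1f} V -> {d33 * volts * 1e9:.2f} nm")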

    Why would you want to make all these tiny machines? Characteristically, the dominant motive for Feynman seems to be fun, but he throws out one momentous suggestion, attributed to a friend: “it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and “looks” around.” Thus the idea of the medical nanobot is launched, only a few years before achieving wide-screen fame in Fantastic Voyage.

  • Rearranging matter atom by atom

    Here Feynman asks the ultimate question “What would happen if we could arrange the atoms one by one the way we want them?” The motivation for this is that we would be able to get materials with entirely new properties: “What would the properties of materials be if we could really arrange the atoms the way we want them? They would be very interesting to investigate theoretically. I can’t see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a small scale we will get an enormously greater range of possible properties that substances can have, and of different things that we can do.”

    We do now have some idea of the possibilities that such control would offer. The first, easiest problem that Feynman poses is: “What could we do with layered structures with just the right layers?” The development of molecular beam epitaxy and chemical vapour deposition has made this possible, and just as Feynman anticipated the results have been spectacular. In effect, controlling the structure of compound semiconductors on the nanoscale – making semiconductor heterostructures – allows one to create new materials with exactly the electronic properties you want, to make, for example, light emitting diodes and lasers with characteristics that would be unavailable from simple materials. Alferov and Kroemer won the Nobel Prize in Physics in 2000 (with Jack Kilby) for their work on heterostructure lasers. This work is gaining even more commercial importance with the discovery of a way of making blue heterostructure LEDs and lasers by Nakamura, opening the way for using light emitting diodes as a highly energy efficient light source. Meanwhile new generations of quantum dot and quantum well lasers find uses in the optical communication systems that underlie the workings of the internet. You can see an example of the kind of thing that’s been done in a number of labs around the world in this post about work done at Sheffield by my colleague Maurice Skolnick.

    This kind of semiconductor nanotechnology still doesn’t quite achieve atomic precision, though. This is Feynman’s ultimate goal: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.” On this scale, Feynman foresees entirely new possibilities: “We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.” Some of these ideas are already being realised; quantum dots (even though they are made with slightly less than atomic precision) display quantised energy levels deriving from their size, and the manipulation of spins in such quantised systems is at the heart of the ideas of spintronics and may provide a way of realising quantum computing (another field which Feynman was the first to anticipate). Feynman points out another advantage of making things with atomic precision: the ability to make exact reproductions of the things we make: “But if your machine is only 100 atoms high, you only have to get it correct to one-half of one percent to make sure the other machine is exactly the same size—namely, 100 atoms high!”
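    The size-dependent quantised energy levels mentioned above follow from elementary quantum mechanics. In the crudest particle-in-a-box picture, an electron confined to a box of size L has levels

        E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots

    so halving the size of a quantum dot roughly quadruples the spacing between its levels; a real calculation needs the effective mass and the actual confinement potential, but the scaling is the point.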

    Don Eigler, of IBM, demonstrated the possibility of single atom manipulation in 1990 with this famous image of the letters IBM picked out in xenon atoms. Given this capability, what can one usefully do with it? Feynman suggests that it might prove a different route to doing chemistry: “But it is interesting that it would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance.” Progress towards this goal has been very slow, emphasising just how hard the Eigler experiments were. Philip Moriarty provided an excellent summary of what has been achieved in his correspondence with Chris Phoenix, available as a PDF here. Feynman himself anticipated that this wouldn’t be easy: “By the time I get my devices working, so that we can do it by physics, he will have figured out how to synthesize absolutely anything, so that this will really be useless.” Nonetheless, Feynman stresses the value of these developments for science: “The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed—a development which I think cannot be avoided.”

    Now that we’ve gone back to the original source to see what Feynman actually said, in my final installment I’ll assess what validity there is in the various competing claims to Feynman’s endorsement for particular visions of nanotechnology.

    Re-reading Feynman – Part 2

    In part 1 of this series I talked about the growing importance of Richard Feynman’s famous lecture There’s plenty of room at the bottom as a foundational document for nanotechnology of all flavours, and hinted at the tensions that arise as different groups claim Feynman’s vision as an endorsement for their own particular views. Here I want to go back to Feynman’s own words to try and unpick exactly what Feynman’s vision was, and how it looks more than forty years on.

    Feynman’s lecture actually covers a number of different topics related to miniaturisation. We can break up the lecture into a number of themes:

  • Writing small

    Feynman starts with the typically direct and compelling question “Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?” Simple arithmetic convinces us that this is possible in principle; using a pixel size of 8 nm gives us enough resolution. So how in practice can it be done? Reading such small writing is no problem, and would have been possible even with the electron microscopy techniques available in 1959. Writing on this scale is more challenging, and Feynman threw out some ideas about using focused electron and ion beams. Although Feynman didn’t mention it, the basic work to enable this was already in progress at the time he was speaking. Cambridge was one of the places at which the scanning electron microscope was being developed (history here), and only a year or two later the first steps were being made in using focused beams to make tiny structures. The young graduate student who worked on this was the same Alec Broers who (now ennobled) recently attracted the wrath of Drexler. This was the beginning of the technique of electron-beam lithography, now the most widely used method of making nanoscale structures in industry and academia.
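    Feynman’s own arithmetic is easy to reproduce: demagnify the smallest dot in a printed halftone (about 1/120 of an inch) by his proposed factor of 25,000 and you land on roughly the 8 nm pixel quoted above, a few tens of atoms across. A quick check in Python, treating the lecture’s numbers as illustrative:

        # Back-of-envelope check of the "writing small" arithmetic
        INCH = 25.4e-3              # metres
        halftone_dot = INCH / 120   # smallest dot in a printed halftone
        demag = 25_000              # Feynman's proposed linear demagnification
        atom_spacing = 0.25e-9      # rough atomic spacing, metres

        dot_on_pin = halftone_dot / demag
        print(f"demagnified dot: {dot_on_pin * 1e9:.1f} nm")           # ~8 nm
        print(f"atoms across a dot: {dot_on_pin / atom_spacing:.0f}")  # ~30 atoms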

  • Better microscopes

    Electron microscopes in 1959 couldn’t resolve features smaller than 1 nm. This is impressively small, but it was still not quite good enough to see individual atoms. Feynman knew that there were no fundamental reasons preventing the resolution of electron microscopes being improved by a factor of 100, and he identified the problem that needed to be overcome (the numerical aperture of the lenses). Feynman’s goal of obtaining sub-atomic resolution in electron microscopes has now been achieved, but for various rather interesting reasons this development has had less impact than he anticipated.

    Feynman, above all, saw microscopy with sub-atomic resolution as a direct way of solving the mysteries of biology. “It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the [DNA] chain; you will see the structure of the microsome”. But although microscopes are 100 times better, we still can’t directly sequence DNA microscopically. It turns out that the practical resolution isn’t limited by the instrument, but by the characteristics of biological molecules – particularly their tendency to get damaged by electron beams. This situation hasn’t been materially altered by the remarkable and exciting discovery of a whole new class of microscopy techniques with the potential to achieve atomic resolution – the scanning probe techniques like scanning tunneling microscopy and atomic force microscopy. Meanwhile many of the problems of structural biology have been solved, not by microscopy, but by x-ray diffraction.

  • Miniaturising the computer

    The natural reaction of anyone under forty reading this section is shock, and that’s a measure of how far we’ve come since 1959. Feynman writes “I do know that computing machines are very large; they fill rooms” … younger readers need to be reminded that the time when a computer wasn’t a box on a desktop or a slab on a laptop is within living memory. In discussing the problems of making a computer powerful enough to solve a difficult problem like recognizing a face, Feynman comments “there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing”. Now our transistors are made of silicon, but more importantly they aren’t discrete elements that need to be soldered together; they are patterned on a single piece of silicon as part of a planar integrated circuit. It’s this move to a new kind of manufacturing, based on a combination of lithographic patterning, etching and depositing very thin layers, that has permitted the extraordinary progress in the miniaturisation of computers.

    Feynman asks “Why can’t we manufacture these small computers somewhat like we manufacture the big ones?” The question has been superseded, to some extent, by the discovery of this better way of doing things. This discovery was already in sight at the time Feynman was writing; the two crucial patents for integrated circuits were filed by Jack Kilby and Robert Noyce in 1959, but their significance didn’t become apparent for a few more years. This has been so effective that Feynman’s miniaturisation goal – “the circuits should be a few thousand angstroms across” – has already been met, not just in the laboratory, but in consumer goods costing a few hundred dollars apiece.

    So far, then, we can see that much of Feynman’s vision has actually been realised, though some things haven’t worked out the way he anticipated. In the next section of this series I’ll consider what he said about miniature machines and rearranging matter atom by atom. It’s here, of course, that the controversy over Feynman’s legacy becomes most pointed.

    Re-reading Feynman (part 1)

    Every movement has its founding texts; for nanotechnology there’s general agreement that Richard Feynman’s lecture There’s plenty of room at the bottom is where the subject started, at least as a concept. The lecture is more than forty years old, but I sense that its perceived significance has been growing in recent years. Not least of the reasons for this is that, as the rift between the mainstream of academic and commercial nanoscience and technology and the supporters of Drexler has been growing, both sides, for different reasons, find it convenient to emphasise the foundational role of Richard Feynman. Drexler himself often refers to his vision of nanotechnology as the “Feynman vision”, thus explicitly claiming the endorsement of someone many regard as the greatest native-born American scientist of all time. For mainstream nanoscientists, on the other hand, increasing the prominence given to Feynman has the welcome side-effect of diminishing the influence of Drexler.

    Many such founding documents easily slip into the category of papers that are “much-cited, but seldom read”, particularly when they were published in obscure publications that aren’t archived on the web. Feynman’s lecture is easily available, so there’s no excuse for this fate befalling it now. Nonetheless, one doesn’t often read very much about what Feynman actually said. This is a pity, not because his predictions of the future were flawless, nor because he presented a coherent plan that nanotechnologists today should be trying to follow. Feynman was a brilliant theoretical physicist observing science and technology as it was in 1959. It’s fascinating, as we try to grope towards an understanding of where technology might lead us in the next forty years, to look back at these predictions and suggestions. Some of what he predicted has already happened, to an extent that probably would have astonished him at the time. In other cases, things haven’t turned out the way he thought they would. We’ve seen some spectacular breakthroughs that were completely unanticipated. Finally, Feynman suggested some directions that as yet have not happened, and whose feasibility isn’t yet established. In my next post in this series, I’ll use the luxury of hindsight to look in detail at Plenty of Room at the Bottom, to ask just how well Feynman’s predictions and hunches have stood the test of time.

    What biology does and doesn’t prove about nanotechnology

    The recent comments by Alec Broers in his Reith Lecture about the feasibility or otherwise of the Drexlerian flavour of molecular nanotechnology have sparked off a debate that seems to have picked up some of the character of the British general election campaign (Liar! Vampire!! Drunkard!!!). See here for Howard Lovy’s take, and here for TNTlog’s view. All of this prompted an intervention by Drexler himself (channeled through Howard Lovy), which was treated with less than total respect by TNTlog. Meanwhile, Howard Lovy visited Soft Machines to tell us that “when it comes to being blatantly political, you scientists are just as clumsy about it as any corrupt city politician I’ve covered in my career. The only difference is that you (I don’t mean you, personally) can sound incredibly smart while you lie and distort to get your way.” Time, I think (as a politician would say), to return to the issues.

    Philip Moriarty, in his comment on Drexler’s letter, makes, as usual, some very important points about the practicalities of mechanosynthesis. Here I want to look at what I think is the strongest argument that supporters of radical nanotechnologies have, the argument that the very existence of the amazing contrivances of cell biology shows us that radical nanotechnology must be possible. I’ve written on this theme often before (for example here), but it’s so important it’s worth returning to.

    In Drexler’s own words, in this essay for the AAAS, “Biology shows that molecular machines can exist, can be programmed with genetic data, and can build more molecular machines”. This argument is clearly absolutely correct, and Drexler deserves credit for highlighting this important idea in his book Engines of Creation. But we need to pursue the argument a little bit further than the proponents of molecular manufacturing generally take it.

    Cell biology shows us that it is possible to make sophisticated molecular machines that can operate, in some circumstances, with atomic precision, and which can replicate themselves. What it does not show is that the approach to making molecular machines outlined in Drexler’s book Nanosystems, an approach that Drexler describes in that book as “the principles of mechanical engineering applied to chemistry”, will work. The crucial point is that the molecular machines of biology work on very different principles to those used by our macroscopic products of mechanical engineering. This is much clearer now than it was when Engines of Creation was written, because in the ensuing 20 years there’s been spectacular progress in structural biology and single molecule biophysics; this progress has unravelled the operating details of many biological molecular machines and has allowed us to understand much more deeply the design philosophy that underlies them. I’ve tried to explain this design philosophy in my book Soft Machines; for a much more technical account, with full mathematical and physical details, the excellent textbook by Phil Nelson, Biological Physics: Energy, Information, Life, is the place to go.

    Where Drexler takes the argument next is to say that, if nature can achieve such marvelous devices using materials whose properties, constrained by the accidents of evolution, are far from optimal, and using essentially random design principles, then how much more effective will our synthetic nano-machines be. We can use hard, stiff materials like diamond, rather than the soft, wet and jelly-like components of biology, and we can use the rationally designed products of a mechanical engineering approach rather than the ramshackle and jury-rigged contrivances of biology. In Drexler’s own words, we can expect “molecular machine systems that are as far from the biological model as a jet aircraft is from a bird, or a telescope is from an eye”.

    There’s something wrong with this argument, though. The shortcomings of biological design are very obvious at the macroscopic scale – 747s are more effective at flying than crows, and, like many over-40 year olds, I can personally testify to the inadequacy of the tendon arrangements in the knee-joint. But the smaller we go in biology, the better things seem to work. My favourite example of this is ATP-synthase. This remarkable nanoscale machine is an energy conversion device that is shared by living creatures as different as bacteria and elephants (and indeed, ourselves). It converts the chemical energy of a hydrogen ion gradient, first into mechanical energy of rotation, and then into chemical energy again, in the form of the energy storage molecule ATP, and it does this with an efficiency approaching 100%.

    Why does biology work so well at the nanoscale? I think the reason is related to the by now well-known fact that physics looks very different on the nanoscale than it does at the macroscale. In the environment we live in – with temperatures around 300 K and a lot of water around – what dominates the physics of the nanoscale is ubiquitous Brownian motion (the continuous jostling of everything by thermal motion), strong surface forces (which tend to make most things stick together), and, in water, the complete dominance of viscosity over inertia, making water behave at the nanoscale in the way molasses behaves on human scales. The kind of nanotechnology biology uses exploits these peculiarly nanoscale phenomena. It uses design principles which are completely unknown in the macroscopic world of mechanical engineering. These principles include self-assembly, in which strong surface forces and Brownian motion combine to allow complex structures to form spontaneously from their component parts. The lack of stiffness of biological molecules, and the importance of Brownian motion in continuously buffeting them, is exploited in the principle of molecular shape change as a mechanism for doing mechanical work in the molecular motors that make our muscles function. These biological nanomachines are exquisitely optimised for the nanoscale world in which they operate.
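    To put a number on the “molasses” point: the relative importance of inertia and viscosity is measured by the Reynolds number. Taking, for illustration, a 100 nm object moving through water at a micron per second,

        \mathrm{Re} = \frac{\rho v L}{\eta} \approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}})(10^{-6}\,\mathrm{m\,s^{-1}})(10^{-7}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}} \approx 10^{-7}

    and at Reynolds numbers this far below one, inertia is essentially irrelevant and viscosity rules.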

    It’s important to be clear that I’m not accusing Drexler of failing to appreciate the importance of nanoscale phenomena like Brownian motion; they’re treated in some detail in Nanosystems. But the mechanical engineering approach to nanotechnology – the Nanosystems approach – treats these phenomena as problems to be engineered around. Biology doesn’t engineer around them, though, it’s found ways of exploiting them.

    My view, then, is that the mechanical engineering approach to nanotechnology that underlies MNT is less likely to succeed than an approach that seeks to emulate the design principles of nature. MNT is working against the grain of nanoscale physics, while the biological approach – the soft, wet and flexible approach – works with the grain of the way the nanoscale works. Appealing to biology to prove the possibility of radical nanotechnology of some kind is absolutely legitimate, but the logic of this argument doesn’t lead to MNT.

    Politics and the National Nanotechnology Initiative

    The view that the nanobusiness and nanoscience establishment has subverted the originally intended purpose of the USA’s National Nanotechnology Initiative has become received wisdom amongst supporters of the Drexlerian vision of MNT. According to this reading of nanotechnology politics, any element of support for Drexler’s vision for radical nanotechnology has been stripped out of the NNI to make it safe for mundane near-term applications of incremental nanotechnology like stain-resistant fabric. This position is succinctly expressed in this Editorial in the New Atlantis, which makes the claim that the legislators who supported the NNI did so in the belief that it was the Drexlerian vision that they were endorsing.

    A couple of points about this position worry me. Firstly, we should be very clear that there is a very important dividing line in the relationship between science and politics that any country should be very wary of crossing. In a democratic country, it’s absolutely right that the people’s elected representatives should have the final say about what areas of science and technology are prioritised for public spending, and indeed what areas of science are left unpursued. But we need to be very careful to make sure that this political oversight of science doesn’t spill over into ideological statements about the validity of particular scientific positions. If supporters of MNT were to argue that the government should overrule, on what are essentially ideological grounds, the judgement of the scientific community about which approach to radical nanotechnology is most likely to work, then I’d suggest they recall the tragic and unedifying history of similar interventions in the past. Biology in the Soviet Union was set back for a generation by Lysenko, who, unable to persuade his colleagues of the validity of his theory of genetics, appealed directly to Stalin. Such perversions aren’t restricted to totalitarian states; Edward Teller used his high level political connections to impose his vision of the x-ray laser on the USA’s defense research establishment, in the face of almost universal scepticism from other physicists. The physicists were right, and the program was abandoned, but not before many billions of dollars had been wasted.

    But there’s a more immediate criticism of the theory that the NNI has been hijacked by nanopants. This is that it’s not right, even from the point of view of supporters of Drexler. The muddle and inconsistency come across most clearly on the Center for Responsible Nanotechnology’s blog. While this entry strongly endorses the New Atlantis line, this entry only a few weeks earlier expresses the opinion that the most likely route to radical nanotechnology will come through wet, soft and biomimetic approaches. Of course, I agree with this (though my vision of what radical nanotechnology will look like is very different from that of supporters of MNT); it is the position I take in my book Soft Machines; it is also, of course, an approach recommended by Drexler himself. Looking across at the USA, I see some great and innovative science being done along these lines. Just look at the work of Ned Seeman, Chad Mirkin, Angela Belcher or Carlo Montemagno, to take four examples that come immediately to mind. Who is funding this kind of work? It certainly isn’t the Foresight Institute – no, it’s all those government agencies that make up the much castigated National Nanotechnology Initiative.

    Of course, supporters of MNT will say that, although this work may be moving in the direction that they think will lead to MNT, it isn’t being done with that goal explicitly stated. To this, I would simply ask whether it isn’t a tiny bit arrogant of the MNT visionaries to think that they are in a better position to predict the outcome of these lines of inquiry than the people who are actually doing the research.

    Whenever science funding is allocated, there is a real tension between the short-term and the long-term, and this is a legitimate bone of contention between politicians and legislators, who want to see immediate results in terms of money and jobs for the people they represent, and scientists and technologists with longer term goals. If MNT supporters were simply to argue that the emphasis of the NNI should be moved away from incremental applications towards longer term, more speculative research, then they’d find a lot of common cause with many nanoscientists. But it doesn’t do anyone any good to confuse these truly difficult issues with elaborate conspiracy theories.

    Nobel Laureates Against Nanotechnology

    This small but distinguished organisation has gained another two members. The theoretical condensed matter physicist Robert Laughlin, in his new book A Different Universe: reinventing physics from the bottom down, has a rather scathing assessment of nanotechnology, with which Philip Anderson (who is himself a Nobel Laureate and a giant of theoretical physics), reviewing the book in Nature (subscription required), concurs. Unlike Richard Smalley, Laughlin directs his criticism at the academic version of nanotechnology, rather than the Drexlerian version, but adherents of the latter shouldn’t feel too smug, because Laughlin’s criticism applies with even more force to their vision. He blames the seductive power of reductionist belief for the delusion: “The idea that nanoscale objects ought to be controllable is so compelling it blinds a person to the overwhelming evidence that they cannot be”.

    Nanotechnologists aren’t the only people singled out for Laughlin’s scorn. Other targets include quantum computing, string theory (“the tragic consequence of an obsolete belief system”) and most of modern biology (“an endless and unimaginably expensive quagmire of bad experiments”). But underneath all the iconoclasm and attitude (and personally I blame Richard Feynman for making all American theoretical physicists want to come across like rock stars) is a very serious message.

    Laughlin’s argument is that reductionism should be superseded as the ruling ideology of science by the idea of emergence. To quote Anderson “The central theme of the book is the triumph of emergence over reductionism: that large objects such as ourselves are the product of principles of organization and of collective behaviour that cannot in any meaningful sense be reduced to the behaviour of our elementary constituents.” The origin of this idea is Anderson himself, in a widely quoted article from 1972 – More is different. In this view, the idea that physics can find a “Theory of Everything” is fundamentally wrong-headed. Chemistry isn’t simply the application of quantum mechanics, and biology is not simply reducible to chemistry; the organisation principles that underlie, say, the laws of genetics, are just as important as the properties of the things being organised.

    Anderson’s views on emergence aren’t as widely known as they should be, in a world dominated by popular science books on string theory and “the search for the God particle”. But they have been influential; an intervention by Anderson is credited or blamed by many people for killing off the Superconducting Supercollider project, and he is one of the founding fathers of the field of complexity. Laughlin explicitly acknowledges his debt to Anderson, but he holds to a particularly strong version of emergence; it isn’t just that there are difficulties in practice in deriving higher level laws of organisation from the laws describing the interactions of their parts. Because the organisational principles themselves are more important than the detailed nature of the interactions between the things being organised, the reductionist program is wrong in principle, and there’s no sense in which the laws of quantum electrodynamics are more fundamental than the laws of genetics (in fact, Laughlin argues on the basis of the strong analogies between QED and condensed matter field theory that QED itself is probably emergent). To my (philosophically untrained) eye, this seems to put Laughlin’s position quite close to that of the philosopher of science Nancy Cartwright. There’s some irony in this, because Cartwright’s book The Dappled World was bitterly criticised by Anderson himself.

    This takes us a long way from nanoscience and nanotechnology. It’s not that Laughlin believes that the field is unimportant; in fact he describes the place where nanoscale physics and biology meet as being the current frontier of science. But it’s a place that will only be understood in terms of emergent properties. Some of these, like self-assembly, are starting to be understood, but many others are not. What is clear, though, is that the reductionist approach of trying to impose simplicity where it doesn’t exist in nature simply won’t work.

    Debating nanotechnologies

    To the newcomer, the nanotechnology debate must be very confusing. The idea of a debate implies two sides, but there are many actors debating nanotechnology, and they don’t even share a common understanding of what the word means. The following extended post summarises my view of this many-faceted discussion. Regular readers of Soft Machines will recognise all the themes, but I hope that newcomers will find it helpful to find them all in one place.

    Nanotechnology has become associated with some very far-reaching claims. Its more enthusiastic adherents believe that it will be utterly transformational in its effects on the economy and society, making material goods of all sorts so abundant as to be essentially free, restoring the environment to a pristine condition, and revolutionising medicine to the point where death can be abolished. Nanotechnology has been embraced by governments all over the world as a source of new wealth, with the potential to take the place of information technology as a driver for rapid economic growth. Breathless extrapolations of a new, trillion-dollar nanotechnology industry arising from nowhere are commonplace. These optimistic visions have led to new funding being lavished on scientists working on nanotechnology, with the total amount being spent a subject for competition between governments across the developed world. As an antidote to all this optimism, NGOs and environmental groups have begun to mobilise against what they see as another example of excessive scientific and technological hubris, falling clearly in the tradition of nuclear energy and genetic modification – technologies which promised great things but delivered, in their view, more environmental degradation and social injustice.

    And yet, despite this superficial agreement on the transformational power of nanotechnology, whether for good or bad, there are profound disagreements not just about what the technology can deliver, but about what it actually is. The most radical visions originate from the writings of K. Eric Drexler, who wrote an influential and widely read book called “Engines of Creation”. This popularised the term “nanotechnology”, developing the idea that mechanical engineering principles could be applied on a molecular scale to create nano-machines which could build up any desired material or artefact with ultimate precision, atom by atom. It is this vision of nanotechnology, subsequently developed by Drexler in his more technical book Nanosystems, that has entered popular culture through films and science fiction books, perhaps most notably in Neal Stephenson’s novel “The Diamond Age”.

    To many scientists, science fiction novels are where Drexler’s visions of nanotechnology should stay. In a falling out which has become personally vituperative, leading scientific establishment figures, notably the Nobel Laureate Richard Smalley, have publicly ridiculed the Drexlerian project of shrinking mechanical engineering to molecular dimensions. What is dominating the scientific research agenda is not the single Drexlerian vision, but instead a rather heterogeneous collection of technologies, whose common factor is simply a question of scale. These evolutionary nanotechnologies typically involve the shrinking down of existing technologies, notably in information technology, to smaller and smaller scales. Some of the products of these developments are already in the shops. The very small, high density hard disk drives that are now found not just in computers, but in consumer electronics like MP3 players and digital video recorders, rely on the ability to create nanoscale multilayer structures which have entirely new physical properties like giant magnetoresistance. Not yet escaped from the laboratory are new technologies like molecular electronics, in which individual molecules play the role of electronic components. Formidable obstacles remain before these technologies can be integrated to form practical devices that can be commercialised, but the promise is yet another dramatic increase in computing power. Medicine should also benefit from the development of more sophisticated drug delivery devices; this kind of nanotechnology will also play a major role in the development of tissue engineering.

    What of the products that are already on shop shelves, boasting of their nanotechnological antecedents? There are two very well publicised examples. The active ingredient in some sunscreens consists of titanium dioxide crystals whose sizes are in the nanoscale range. In this size range, the crystals, and thus the sunscreen, are transparent to visible light, rather than having the intense white characteristic of the larger titanium dioxide crystals familiar in white emulsion paint. Another widely reported application of nanotechnology is in fabric treatments, which, by coating textile fibres with layers of molecular thickness, give them properties such as stain resistance. These applications, although mundane, result from the principle that matter, when divided on this very fine scale, can have different properties to bulk matter. However, it has to be said that these kinds of products represent the further development of trends in materials science, colloid science and polymer science that have been in train for many years. This kind of incremental nanotechnology, then, does involve new and innovative science, but it isn’t different in character to other applications of materials science that may not have the nano-label. To this extent, the decision to refer to these applications as nanotechnology involves marketing as much as science. But what we will see in the future is more and more of this kind of application making its way to the marketplace, offering real, if not revolutionary, advances over the products that have gone before. These developments won’t be introduced in a single “nanotechnology industry”; rather, these innovations will find their way into the products of all kinds of existing industries, often in rather an unobtrusive way.

    The idea of a radical nanotechnology, along the lines mapped out by Drexler and his followers, has thus been marginalised on two fronts. Those interested in developing the immediate business applications of nanotechnology have concentrated on the incremental developments that are close to bringing products to market now, and are keen to downplay the radical visions because they detract from the immediate business credibility of their short-term offerings. Meanwhile the nano-science community is energetically pursuing a different evolutionary agenda. Is it possible that both scientists and the nanobusiness community are too eagerly dismissing Drexler’s ideas – could there be, after all, something in the idea of a radical nanotechnology?

    My personal view is that while some of Smalley’s specific objections don’t hold up in detail, and it is difficult to dismiss the Drexlerian proposals out of hand as being contrary to the laws of nature, the practical obstacles they face are very large. To quote Philip Moriarty, an academic nanoscientist with a great deal of experience of manipulating single molecules, “the devil is in the details”, and as soon as one starts thinking through how one might experimentally implement the Drexlerian program a host of practical problems emerge.

    But one aspect of Drexler’s argument is very important, and undoubtedly correct. We know that a radical nanotechnology, with sophisticated nanoscale machines operating on the molecular scale, can exist, because cell biology is full of such machines. This is beautifully illustrated in David Goodsell’s recent book Bionanotechnology: Lessons from Nature. But Drexler goes further. He argues that if nature can make effective nanomachines from soft and floppy materials, with the essentially random design processes of evolution, then the products of a synthetic nanotechnology, using the strongest materials and the insights of engineering, will be very much more effective. My own view (developed in my book “Soft Machines”) is that this underestimates the way in which biological nanotechnology exploits and is optimised for the peculiar features of the nanoscale world. To take just one example of a highly efficient biological nanomachine, ATP-synthase is a remarkable rotary motor which life-forms as different as bacteria and elephants all use to synthesise the energy storage molecule ATP. The efficiency with which it converts energy from one form to another is very close to 100%, a remarkable result when one considers that most human-engineered energy conversion devices, such as steam turbines and petrol engines, struggle to exceed 50% efficiency. This is one example, then, of a biological nanomachine that is close to optimal. The reason for this is that biology uses design principles very different to those we learn about in human-scale engineering, principles that exploit the special features of the nanoworld. There’s no reason in principle why we could not develop a radical nanotechnology that uses the same design principles as biology, but the result will look very different to the miniaturised cogs and gears of the Drexlerian vision. Radical nanotechnologies will be possible, then, but they will owe more to biology than to conventional engineering.

    Discussion of the possible impacts of nanotechnology, both positive and negative, has shown signs of becoming polarised along the same lines as the technical discussion. The followers of Drexler promise on the one hand a world of abundance of all material needs, and an end to disease and death. But they’ve also introduced perhaps the most persistent and gripping notion – the idea that artificial, self-replicating nanoscale robots would escape our control and reproduce indefinitely, consuming all the world’s resources, and rendering existing life extinct. The idea of this plague of “grey goo” has become firmly embedded in our cultural consciousness, despite some indications of regret from Drexler, who has more lately emphasised the idea that self-replication is neither a desirable nor a necessary feature of a nanoscale robot. The reaction of nano-scientists and business people to the idea of “grey goo” has been open ridicule. Actually, it is worth taking the idea seriously enough to give it a critical examination. Implicit in the notion of “grey goo” is the assumption that we will be able to engineer what is effectively a new form of life that is more fit, in a Darwinian sense, and better able to prosper in the earth’s environment than existing life-forms. On the other hand, the argument that biology at the cell level is already close to optimal for the environment of the earth means that the idea that synthetic nano-robots will have an effortless superiority over natural lifeforms is much more difficult to sustain.

    Meanwhile, mainstream nanobusiness and nanoscience has concentrated on one very short-term danger, the possibility that new nanoparticles may be more toxic than their macroscale analogues and precursors. This fear is very far from groundless; since one of the major selling points of nanoparticles is that their properties may be different from the analogous matter in a less finely divided state, it isn’t at all unreasonable to worry that toxicity may be another property that depends on size. But I can’t help feeling that there is something odd about the way the debate has become so focused on this one issue; it’s an unlikely alliance of convenience between nanobusiness, nanoscience, government and the environmental movement, all of whom have different reasons for finding it a convenient focus. For the environmental movement, it fits a well-established narrative of reckless corporate interests releasing toxic agents into the environment without due care and attention. For nanoscientists, it’s a very contained problem which suggests a well-defined research agenda (and the need for more funding). By tinkering with regulatory frameworks, governments can be seen to be doing something, and nanobusiness can demonstrate their responsibility by their active participation in the process.

    The dominance of nanoparticle toxicity in the debate is a vivid illustration of a danger that James Wilsdon has drawn attention to – the tendency for all debates on the impact of science on society to end up exclusively focused on risk assessment. In the words of a pamphlet by Willis and Wilsdon – “See-through Science” – “in the ‘risk society’ perhaps the biggest risk is that we never get around to talking about anything else.” Nanotechnology – even in its evolutionary form – presents us with plenty of very serious things to talk about. How will privacy and civil liberties survive in a world in which every artefact, no matter how cheap, includes a networked computer? How will medical ethics deal with a blurring of the line between the human and the machine, and the line between remedying illness and enhancing human capabilities?

    Some people argue that new technologies like nanotechnology are potentially so dehumanising that we should consciously relinquish them. Bill McKibben, for example, makes this case very eloquently in his book “Enough”. Although I have a great deal of sympathy with McKibben’s rejection of the values of the trans-humanists, who consciously seek to transcend humanity, I don’t think the basic premise of McKibben’s thesis is tenable. The technology we have already is not enough. Mankind depends for its very existence at current population levels on technology. To take just one example, our agriculture depends on the artificial fixation of nitrogen, which is made possible by the energy we derive from fossil fuels. And yet the shortcomings of our existing technologies are quite obvious, from the eutrophication that excessive use of synthetic fertilisers causes, to the prospect of global climate change as a result of our dependence on fossil fuels. As the population of the world begins to stabilise, we have the challenge of developing new technologies that will allow the whole population of the world to have decent standards of living on a sustainable basis. Nanotechnology could play an important role, for example by delivering cheap solar cells and the infrastructure for a hydrogen economy, together with cheap ways of providing clean water. But there’ll need to be real debates about how to set priorities so that the technology brings benefits to the poor as well as the rich.

    Bits and Atoms

    I recently made a post – Making and doing – about the importance of moving the focus of radical nanotechnology away from the question of how artefacts are to be made, and towards a deeper consideration of how they will function. I concluded with the provocative slogan Matter is not digital. My provocation has been rewarded with detailed attempts to rebut my argument from both Chris Peterson, VP of the Foresight Institute, on Nanodot, and Chris Phoenix of the Center for Responsible Nanotechnology, on the CRNano blog. Here’s my response to some of the issues they raise.

    First of all, on the basic importance of manufacturing:

    Chris Peterson: Yes, but as has been repeatedly pointed out, we need better systems that make things in order to build better systems that do things. Manufacturing may be a boring word compared to energy, information, and medicine, but it is fundamental to all.

    Manufacturing will always be important; things need to be made. My point is that by becoming so enamoured with one particular manufacturing technique, we run the risk of choosing materials to suit the manufacturing process rather than the function that we want our artefact to accomplish. To take a present-day example, injection moulding is a great manufacturing method. It’s fast and cheap, and it can make very complex parts with high dimensional fidelity. Of course, it only works with thermoplastics; sometimes this is fine, but every time you eat with a plastic knife you expose yourself to the results of a sub-optimal materials choice forced on you by the needs of a manufacturing process. Will MNT similarly limit the materials choices that you can make? I believe so.

    Chris Peterson: But isn’t it the case that we already have ways to represent 3D molecular structures in code, including atom types and bonds?

    Certainly we can represent structures virtually in code; the issue is whether we can output that code to form physical matter. For this we need some basic, low level machine code procedures from which complex algorithms can be built up. Such a procedure would look something like: depassivate point A on a surface. Pick up a building block from reservoir B. Move it to point A. Carry out a mechanosynthesis step to bond it to point A. Repassivate if necessary. Much of the debate between Chris Phoenix and Philip Moriarty concerned the constraints that surface physics put on the sorts of procedures you might use. In particular, note the importance of the idea of surface reconstructions. The absence of such reconstructions is one of the main reasons why hydrogen passivated diamond is by far the best candidate for a working material for mechanosynthesis. This begins to answer Chris Peterson’s next question…

    Chris Peterson: How did we get into the position of needing to use only one material here?

    …which is further answered by Chris Phoenix’s explanation of why matter can be treated with digital design principles, which focuses on the non-linear nature of covalent bonding:

    Chris Phoenix: Forces between atoms as they bond are also nonlinear. As you push them together, they “snap” into position. That allows maintenance of mechanical precision: it’s not hard, in theory, for a molecular manufacturing system to make a product fully as precise as itself. So covalent bonds between atoms are analogous to transistors. Individual bonds correspond to the ones and zeros level.

    So it looks like we’re having to restrict ourselves to covalently bonded solids. Goodbye to metals, ionic solids, molecular solids, macromolecular solids… it looks like we’re now stuck with choosing among the group IV elements, the classical compound semiconductors and other compounds of elements in groups III–VI. Of these, diamond seems the best choice. But are we stuck with a single material? Chris Phoenix thinks not…

    Chris Phoenix: By distinguishing between the nonlinear, precision-preserving level (transistors and bonding) and the level of programmable operations (assembly language and mechanosynthetic operations), it should be clear that the digital approach to mechanosynthesis is not a limitation, and in particular does not limit us to one material. But for convenience, an efficient system will probably produce only a few materials.

    This analogy is flawed. In a microprocessor, all the transistors are the same. In a material, the bonds are not the same. This is obviously true if the material contains more than one atom, and even if the material only has one type of atom the bonds won’t be the same if the working surface has any non-trivial topography – hence the importance of steps and edges in surface chemistry. If the bonds don’t behave in the same way, a mechanosynthetic step which works with one bond won’t work with another, and your simple assembly language becomes a rapidly proliferating babel of different operations all of which need to be individually optimised.
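    To make the contrast concrete, here is the kind of low-level sequence described above in deliberately naive pseudocode; every name is hypothetical, and the point of the preceding paragraph is that in practice each of these calls hides a family of site-specific variants that would all need separate optimisation:

        # Purely illustrative sketch of a low-level mechanosynthesis "instruction";
        # all names are hypothetical.
        def place_building_block(tip, surface, site_a, reservoir_b):
            surface.depassivate(site_a)                # expose a reactive site at point A
            block = tip.pick_up(reservoir_b)           # fetch a feedstock building block
            tip.move_to(site_a)                        # position the tip over the target site
            tip.mechanosynthesis_bond(block, site_a)   # force the bond-forming step
            surface.repassivate(site_a)                # cap dangling bonds if necessary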

    Chris Phoenix: For nanoscale operations like binding arbitrary molecules, it remains to be seen how difficult it will be to achieve near-universal competence.

    I completely agree with this. A classic target for advanced nanomedicine would be to have a surface which resisted non-specific binding of macromolecules, but recognised one specific molecular target and produced a response on binding. I find it difficult to see how you would do this with a covalently bonded solid.

    Chris Phoenix: But most products that we use today do not involve engineered nanoscale operations.

    This seems an extraordinary retreat. Nanotechnology isn’t going to make an impact by allowing us to reproduce the products we have today at lower cost; it’s going to need to allow us to make products with a functionality that is now unattainable. These products – and I’m thinking particularly of applications to nanomedicine and to information and communication technologies – will necessarily involve engineered nanoscale operations.

    Chris Phoenix: For example, a parameterized nanoscale truss design could produce structures which on larger scales had a vast range of strength, elasticity, and energy dissipation. A nanoscale digital switch could be used to build any circuit, and when combined with an actuator and a power source, could emulate a wide range of deformable structures.

    Yes, I agree with this in principle. But we’re coming back to mechanical properties – structural materials, not functional ones. The structural materials we generally use now – wood, steel, brick and concrete – have long since been surpassed by other materials with much superior properties, but we still go on using them. Why? They’re good enough, and the price is right. New structural materials aren’t going to change the world.

    Chris Phoenix: A few designs for photon handling, sensing (much of which can be implemented with mechanics), and so on should be enough to build almost any reasonable macro-scale product we can design.

    Well, I’m not sure I can share this breezy confidence. How is sensing going to be implemented by mechanics? We’ve already conceded that the molecular recognition events that the most sensitive nanoscale sensing operations depend on are going to be difficult or impossible to implement in covalently bonded systems. Designing band-structures – which we need to do to control light/matter interactions – isn’t an issue of ordinary mechanics, but of many-body quantum theory.

    The idea of being able to manipulate atoms in the same way as we manipulate bits is seductive, but ultimately it’s going to prove very limiting. To get the most out of nanotechnology, we’ll need to embrace the complexities of real condensed matter, both hard and soft.

    Artificial life and biomimetic nanotechnology

    Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

    More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of enclosing the organism from the environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

    It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union funded project PACE (for programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try and emulate the design philosophies that cell biology uses.

    If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.

    Making and doing

    Eric Drexler is quoted in Adam Keiper’s report from the NRC nanotechnology workshop in DC as saying:

    “What’s on my wish list: … A clear endorsement of the idea that molecular machine systems that make things … with atomic precision is a natural and important goal for the development of nanoscale technologies … with the focus of that endorsement being the recognition that we can look at biology, and beyond…. It would be good to have more minds, more critical thought, more innovation, applied in those directions.”

    I almost completely agree with this, particularly the bit about looking at biology and beyond. Why only almost? Because “systems that make things” should only be a small part of the story. We need systems that do things – we need to process energy, process information, and, in the vital area of nanomedicine, interact with the cells that make up humans and their molecular components. This makes a big difference to the materials we choose to work with. Leaving aside, for the moment, the question of whether Drexler’s vision of diamondoid-based nanotechnology can be made to work at all, let’s ask the question, why diamond? It’s easy to see why you would want to use diamond for structural applications, as it is strong and stiff. But its bandgap is too big for optoelectronic applications (like solar cells) and its use in medicine will be limited by the fact that it probably isn’t that biocompatible.

    In the very interesting audio clip that Adam Keiper posts on Howard Lovy’s Nanobot, Drexler goes on to compare the potential of universal, general purpose manufacture with that of general purpose computing. Who would have thought, he asks (I paraphrase from memory here), that we could have one machine that we can use to do spreadsheets, play our music and watch movies on? Who indeed? … but this technology depends on the fact that documents, music and moving pictures can all be represented by 1’s and 0’s. For the idea of general purpose manufacturing to be convincing, one would need to believe that there was an analogous way in which all material things could be represented by a simple low level code. I think this leads to an insoluble dilemma – the need to find simple low level operations drives one to use a minimum number – preferably one – of basic mechanosynthesis steps. But in limiting ourselves in this way, we make life very difficult for ourselves in trying to achieve the broad range of functions and actions that we are going to want these artefacts for. Material properties are multidimensional, and it’s difficult to believe that one material can meet all our needs.

    Matter is not digital.