The future of nanotechnology; Drexler and Jones exchange letters

The current edition of “Physics World” carries a letter from K. Eric Drexler, written in response to my article in the August edition, “The future of nanotechnology”. There is also a response from me to Drexler’s letter. Since the letters section of Physics World is not published online, I reproduce the letters here. The text is as the authors wrote it; the printed versions were lightly edited to conform with Physics World house style.

From Dr K. Eric Drexler to Physics World.

I applaud Physics World for drawing attention to the emerging field of artificial molecular machine systems. Their enormous productive potential is illustrated in biology and nature, where we observe molecular machine systems constructing molecular machinery, electronics, and digital information storage systems at rates measured in billions of tons per year. To understand the future potential of fabrication technologies (the foundation of all physical technology) we must examine the productive potential of artificial molecular machine systems. This field of enquiry has been a focus of my research since Drexler (1981), which explored directions first suggested by Feynman (1959).

I was surprised to find that Professor Richard Jones, in describing “flaws in Drexler’s vision,” ignores my physical analysis of productive molecular machine systems. He instead criticizes the implied hydrodynamics of an artist’s fantastic conception of a “nanosubmarine” – a conception not based on my work. It is, I think, important that scientific criticisms address the scientific literature, not artistic fantasies.

Professor Jones then offers a discussion of nanoscale surface forces, thermal motion, and friction that could easily leave readers with the impression that these present dire problems, which he implies have been ignored. But ignoring surface forces or thermal motion in molecular engineering would be like ignoring gravity or air in aeronautics, and physical analysis shows that well-designed molecular bearing interfaces can have friction coefficients far lower than those in conventional machines. These issues (and many others) are analyzed in depth, using the methods of applied physics, in Chapters 3, 5, and 10 of my book Nanosystems (Drexler 1992). Professor Jones fails to cite this work, noting instead only my earlier, popular book written for a general audience.

I agree with Professor Jones regarding the importance of molecular machine systems and the value of learning from and imitating biological systems at this stage of the development of our field. Where we part company is in our judgment of the future potential of the field, of the feasibility of molecular machine systems that are as far from the biological model as a jet aircraft is from a bird, or a telescope is from an eye. I invite your readers to examine the physical analysis that supports this understanding of non-biological productive molecular machine systems, and to disregard the myths that have sprung up around it. (One persistent myth bizarrely equates productive molecular machines with gooey nanomonsters, and then declares these to be impossible contraptions that grab and juggle atoms using fat, sticky fingers.)

There are many interesting research questions to address and technology thresholds to cross before we arrive at advanced artificial molecular machine systems. The sooner we focus on the real physics and engineering issues, building on the published literature, the sooner progress can be made. To focus on artist’s conceptions and myths does a disservice to the community.

K. Eric Drexler, PhD
Molecular Engineering Research Institute

K E Drexler 1981 Molecular Engineering: An approach to the development of general capabilities for molecular manipulation. Proc. Nat. Acad. Sci. (USA) 78:5275–5278
K E Drexler 1992 Nanosystems: Molecular Machinery, Manufacturing, and Computation (New York Wiley/Interscience)
R Feynman 1959 There’s Plenty of Room at the Bottom, in D Gilbert (ed) 1961 Miniaturization (New York Reinhold)

From Dr Richard A.L. Jones to Physics World.

I am pleased that Dr Drexler finds so much to agree with in my article. Our goals are the same; our research aims to understand how to make molecular scale machines and devices. Where we differ is how best to achieve that goal. The article was necessarily very brief in its discussion of surface forces, friction and thermal motion, and my book [1] contains a much fuller discussion, which does explicitly refer to Drexler’s book “Nanosystems”. No-one who has read “Nanosystems” could imagine that Drexler is unaware of these problems, and it was not my intention in the article to imply that he was.

Absurd images like the nanosubmarine illustration I used are widely circulated in popular writings about Drexlerian nanotechnology; they well illustrate the point that naïve extrapolations of macro-scale engineering to the nanoscale won’t work, but I’m happy to agree that Drexler’s own views are considerably more sophisticated than this. The point I was making was that the approach Drexler describes in detail in Nanosystems (which he himself describes in the words “molecular manufacturing applies the principles of mechanical engineering to chemistry”) works within a paradigm established in macroscopic engineering and seeks to find ways to engineer around the special features of the nanoworld. In contrast to this, the design principles adopted by cell biology turn these special features to advantage and actively exploit them, using concepts such as self-assembly and molecular shape change that have no analogue in macroscopic engineering.

Again, Dr Drexler and I are in agreement that in the short term biomimetic nanotechnology will be very fruitful and should be strongly pursued. We differ about the likely long term trajectory of the technology, but here experiment will decide. Such is the unpredictable nature of the development of technology that I rather suspect that the outcome will surprise us both.

[1] Soft Machines, R.A.L. Jones, OUP (2004)

See-through Science

See-through Science is the title of a pamphlet by James Wilsdon and Rebecca Willis, from the left-of-centre UK think-tank, Demos. It’s a thoughtful reflection on how one ought to engage the public in the development of new technologies, with the emerging debate on nanotechnology taken as its focus. I particularly like the undogmatic tone. The report is very clear on the failings of the old ideas of the “deficit model” of public engagement, which unconvincingly and patronisingly maintained that it is sufficient to educate the public in the wise ways of science to convince them of its benefits. The report also warns against the danger that all debates about new technology get twisted round to focus narrowly on risk assessment. This is particularly timely in view of the way the nanotechnology debate has unfolded, with its excessive focus on the one issue of nanoparticle toxicity.

In terms of recommendations, perhaps for me the most telling point is the way the report highlights the almost complete absence of those parts of industry involved in nanotechnology from this debate. I think this situation needs to change, and very soon.

Then worms ‘ll come and eat thee oop

I had a late night in the prosperous, liberal, Yorkshire town of Ilkley last night, doing a talk and question and answer session on nanotechnology at the local Cafe Philosophique. An engaged and eclectic audience kept the discussion going well past the scheduled finish time. Two points particularly struck me. One recurring question was whether it was ever realistic to imagine that we can relinquish technological developments with negative consequences – “if it can be done, it will be done” was the comment made more than once. I really don’t like this conclusion, but I’m struggling to find convincing arguments against it. A more positive comment concerned the idea of regulation; we are used to thinking of this idea entirely in terms of narrow prohibitions – don’t release these nanoparticles into the environment, for example. But we need to work out how to make regulation a positive force that steers the technology in a desirable direction, rather than simply trying to sit on it.

(Non British readers may need to know that the headline is a line from a rather odd and morbid folk-song called “On Ilkley Moor baht hat”, sung mostly by drunken Yorkshiremen.)

Bad reasons to oppose nanotechnology: part 1

An article in this month’s The Ecologist, by the ETC group‘s Jim Thomas, makes an argument against nanotechnology that combines ignorance and gullibility with the risk of causing real harm to some of the world’s poorest people. What, he asks, will happen to the poor cotton farmers and copper miners of the third world when nanotechnology-based fabric treatments, like those sold by the Nanotex Corp, make cotton obsolete, and carbon nanotubes replace copper as electrical conductors? This argument is so wrong on so many levels that it’s difficult to know where to start.

To start with, is there any development economist who would actually argue that basing a third world economy on an extractive industry like copper mining is a good way to get sustained economic development and good living standards for the population as a whole? Likewise, it’s difficult to see that the industrial-scale farming of cotton, with its huge demand for water, can be anything other than an ecological disaster. Zambia is not exactly the richest country in Africa, and Kazakhstan is not a great advertisement for the environmental benefits of cotton growing.

And is the premise even remotely realistic? I wrote below about how novel the Nanotex fibre treatments actually are; in fact there are many fibre treatments available now, some carrying a “nano” label, some not, which change the handling and water resistance properties of a variety of textiles, both natural and artificial. These are just as likely to increase markets for natural fibres as for artificial ones. And as for nanotubes replacing copper, at a current cost of £200 a gram this is not going to happen any time soon. What this argument demonstrates is that, unfortunately for a campaigning group, ETC is curiously gullible, with a propensity to mistake corporate press releases and the most uncritical nano-boosterism for reality.
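A back-of-the-envelope sum makes the point. The nanotube figure is the ~£200 a gram quoted above; the copper price is my own assumed round number (bulk copper trades at a few pounds per kilogram), so treat the output as an order-of-magnitude sketch, not market data.

```python
# Rough cost comparison: carbon nanotubes vs copper as a bulk conductor.
# Assumed prices: nanotubes ~200 GBP/gram (as quoted in the text);
# copper ~3 GBP/kg, an illustrative round number for the bulk metal.
nanotube_per_gram = 200.0          # GBP per gram
copper_per_gram = 3.0 / 1000.0     # GBP per gram

ratio = nanotube_per_gram / copper_per_gram
print(f"Nanotubes cost roughly {ratio:,.0f}x more than copper per gram")
```

Even if nanotubes conducted far better per gram than copper, a four-to-five order-of-magnitude price gap is why wholesale replacement is not a near-term prospect.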

This matters for two reasons. Firstly, on the positive side, nanotechnology really could benefit the environment and the world’s poor. Cheap ways of providing clean water and new sources of renewable energy are realistic possibilities, but they won’t happen automatically and there’ll be real debates about how to set priorities in a way which makes the technology bring benefits to the poor as well as the rich. What these debates are going to need from their participants is some degree of economic and scientific literacy. Secondly, there are some real potential downsides that might emerge from the development of nanotechnology; we need a debate that’s framed in a way that recognises the real risks and doesn’t waste energy and credibility on silly side-issues.

Luckily, there is at least one NGO that is demonstrating a much more sophisticated, subtle and intelligent approach – Greenpeace. The contribution of their chief scientist, Doug Parr, to a recent debate on nanotechnology held by the Royal Society in the light of their recent report, is infinitely more effective at focusing on the real issues.

Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?
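Equipartition gives a quick feel for how large these thermal wobbles are: a structural element of effective stiffness k has a root-mean-square thermal displacement of sqrt(kBT/k). The stiffness values in this sketch are my own order-of-magnitude assumptions for protein-like materials, not measured numbers.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # roughly physiological temperature, K

def rms_displacement(stiffness):
    """RMS thermal displacement of a harmonic element: <x^2> = kB*T/k."""
    return math.sqrt(kB * T / stiffness)

# Assumed, order-of-magnitude effective spring constants:
# a stiff covalent framework vs progressively softer protein structures.
for label, k in [("stiff covalent framework", 100.0),
                 ("typical protein domain", 1.0),
                 ("soft protein hinge", 0.01)]:
    print(f"{label} (k = {k} N/m): {rms_displacement(k) * 1e9:.4f} nm")
```

The soft end of that range fluctuates by several tenths of a nanometre, comparable to bond lengths and binding-site dimensions, which is exactly why "wobbliness" cannot be ignored at this scale.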

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in shape of the protein. It is this shape change, which biologists call allostery, which underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is with the ligand bound the low frequency collective vibrations become lowered further in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
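The entropic driving force described above can be sketched with classical harmonic modes: the vibrational entropy of a mode of frequency ω goes as kB ln(kBT/ħω), so when a mode softens its entropy rises by kB ln(ω_before/ω_after). The mode count and softening factor below are illustrative assumptions of mine, not numbers from Smith's simulations.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # K
N_A = 6.022e23      # Avogadro's number, 1/mol

def entropy_gain_per_mode(freq_ratio):
    """Classical vibrational entropy gain when a mode's frequency
    drops by the given factor (omega_before / omega_after)."""
    return kB * math.log(freq_ratio)

# Illustrative assumption: ten low-frequency collective modes each
# drop in frequency by 20% when the ligand binds.
n_modes, softening = 10, 1.20
dS = n_modes * entropy_gain_per_mode(softening)
dF = -T * dS  # entropic contribution to the binding free energy, J

print(f"dS = {dS:.3e} J/K per molecule")
print(f"entropic free energy gain = {dF * N_A / 1000:.1f} kJ/mol")
```

Even these modest numbers give a few kJ/mol of binding free energy, which is significant on the scale of typical ligand-binding energies, so the "softening" mechanism is quantitatively plausible.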

A highly simplified theoretical model of allosteric binding solved by my colleague up the road in Leeds, Tom McLeish , has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein it may offer food for thought for how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you lower the temperature of a protein far enough there’s a temperature – a glass transition temperature – at which these low frequency vibrations stop working. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from rather a difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines such as the light driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important emerging conclusion from all this is this: it’s not that protein-based machines work despite their floppiness and their constant random flexing and vibrations, they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

What is this thing called nanotechnology? Part 2. Nanoscience versus Nanotechnology

In the first part of my attempt to define nanotechnology terms, I discussed definitions of the nanoscale. Now I come to the important and underappreciated distinction between nanoscience and nanotechnology.

Nanoscience describes the convergence of physics, chemistry, materials science and biology to deal with the manipulation and characterisation of matter on the nanoscale.

Many subfields of these disciplines have been dealing with nanoscale phenomena for many years. A very non-exhaustive list of relevant sub-fields, with examples of topics in nanoscience, would include:

  • Colloid science. The characterisation and control of forces between sub-micron particles to control the stability of dispersions.
  • Metallurgy. The control of nanoscale structure to optimise mechanical and other properties – e.g. particle and precipitate hardening.
  • Molecular biology and biophysics. Structural characterisation at atomic resolution first of complex biomolecules, now of assemblies of macromolecules which function as nanomachines.
  • Polymer science. Systems such as block copolymers which self-assemble to form complex nanoscale structures, new architectures like hyperbranched polymers and dendrimers.
  • Semiconductor physics. Nanoscale low dimensional structures like multilayers, wires and dots exploiting quantum effects for new electronic and optoelectronic devices like light emitting diodes and lasers.
  • Supramolecular chemistry. The use of non-covalent interactions to create self-assembled nanoscale structures from molecular components.
The distinguishing feature of nanoscience is that increasingly we find methods and techniques from more than one of these existing subfields combined in novel ways.

Nanotechnology is an engineering discipline which combines methods from nanoscience with the disciplines of economics and the market to create usable and economically viable products.

Nanoscience and nanotechnology need to be distinguished. Without nanoscience, nanotechnology will not be possible. On the other hand, if you invest money in a nanoscience venture under the impression that it is nanotechnology, you are sure to be disappointed.

In the next installment, I’ll discuss the various kinds of nanotechnology, from incremental technologies such as shampoos and textile treatments to the more radical visions.

Will molecular electronics save Moore’s Law?

Mark Reed, from Yale, was another speaker at a meeting I was at in New Jersey last week. He gave a great talk about the promise and achievement of molecular electronics which I thought was both eloquent and well-judged.

The context for the talk is provided by the question marks hanging over Moore’s law, the well-known observation that the number of transistors per integrated circuit, and thus available computer power, has grown exponentially since 1965. There are strong indications that we are approaching the time when this dramatic increase, which has done so much to shape the way the world’s economy has changed recently, is coming to an end.
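The exponential growth behind Moore's law can be sketched in a line of arithmetic. The anchor point (the Intel 4004's roughly 2,300 transistors in 1971) and the two-year doubling time are the usual illustrative assumptions, not a fit to industry data.

```python
def transistors(year, n0=2300, year0=1971, doubling_time=2.0):
    """Transistor count from a simple Moore's-law extrapolation.

    Assumed anchor: ~2,300 transistors (Intel 4004) in 1971,
    with the count doubling every two years.
    """
    return n0 * 2 ** ((year - year0) / doubling_time)

for y in (1971, 1985, 2004):
    print(f"{y}: ~{transistors(y):,.0f} transistors")
```

The 2004 figure comes out at a few hundred million, the right ballpark for processors of the mid-2000s, which is what makes both the extrapolation and the prospect of its failure so striking.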

The semiconductor industry is approaching a “red brick wall”. This phrase comes from the International Technology Roadmap for Semiconductors, an industry consensus document which sets out the technical barriers that need to be overcome in order to maintain the projected growth in computer power. In the technical tables, cells which describe technical problems with no known solution are coloured red, and by 2007-8 these red cells proliferate to the point of becoming continuous – hence the red brick wall.

A more graphic illustration of the problems the industry faces was provided in a plot that Reed showed of surface power density as a function of time. This rather entertaining plot showed that current devices have long surpassed the areal power density of a hot-plate, are not far away from the values for a nuclear reactor, and somewhere around the middle of the next decade will surpass that of the surface of the sun. Now I find the warm glow from my Powerbook quite comforting on my lap, but carrying a small star around with me is going to prove limiting.
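The reference points in a plot like that are easy to reconstruct. The chip figure is an assumption (a ~100 W processor die of roughly 1 cm²); the solar figure follows from the Stefan–Boltzmann law at the Sun's effective surface temperature of about 5,800 K.

```python
sigma = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_sun = 5778.0        # Sun's effective surface temperature, K

sun_flux = sigma * T_sun ** 4 / 1e4   # convert W/m^2 to W/cm^2
hotplate = 10.0                       # W/cm^2, rough assumed figure
cpu = 100.0 / 1.0                     # assumed: ~100 W die of ~1 cm^2

print(f"hot plate  ~{hotplate:.0f} W/cm^2")
print(f"CPU die    ~{cpu:.0f} W/cm^2")
print(f"solar surface ~{sun_flux:.0f} W/cm^2")
```

On these numbers a mid-2000s processor already runs an order of magnitude hotter per unit area than a hot-plate, with less than two orders of magnitude left before the surface of the sun.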

So the idea that molecular electronics might help overcome these difficulties is quite compelling. In this approach, individual molecules are used as the components of integrated circuits, as transistors or diodes, for example. This provides the ultimate in miniaturisation.

The good news is that (despite the Schön debacle) there are some exciting and solid results in the field. The simplest devices, like diodes, have two terminals, and there is no doubt that single molecule two-terminal devices have been convincingly demonstrated in the lab. Three terminal devices, like transistors, seem to be vital to make useful integrated circuits, though, and there progress has been slower. It’s difficult enough to wire up two connections to a single molecule, but gluing a third one on is even harder. This feat has been achieved for carbon nanotubes.

What’s the downside? The carbon nanotube transistors have a nasty and underpublicised secret – the connections between the nanotubes and the electrodes are not, in the jargon, Ohmic – that means that electrons have to be given an extra push to get them from the electrode into the nanotube. This makes it difficult to scale them down to the small sizes that would be needed to make them competitive with silicon. And the single molecule devices have the nasty feature that every one is different. Conventional microelectronics works because every one of the tens of millions of transistors on something like a Pentium is absolutely identical. If the characteristics of each of the components were to vary randomly, the whole way we currently do computing would need to be rethought.
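Why uniformity matters so much can be seen with a toy yield calculation: if each device independently falls out of spec with probability p, a chip that needs every device to work functions with probability (1 − p)^N. Both N and p below are illustrative assumptions.

```python
# Toy yield model: a chip only works if every device is within spec.
n_transistors = 50_000_000   # assumed, order of a mid-2000s processor
fail_prob = 1e-7             # assumed per-device out-of-spec probability

chip_yield = (1.0 - fail_prob) ** n_transistors
print(f"chip yield: {chip_yield:.4f}")
```

Even a one-in-ten-million per-device failure rate leaves well under one per cent of chips working, so components with randomly varying characteristics force a move to defect-tolerant, redundant architectures rather than today's all-or-nothing designs.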

So it’s clear to me that molecular electronics remains a fascinating and potentially valuable research field, but it’s not going to deliver results in time to prevent a slow-down in the growth of computer power that’s going to begin in earnest towards the end of this decade. That’s going to have dramatic and far-reaching effects on the world economy, and it’s coming quite soon.

Training the nanotechnologists of the future

It’s that time of year when academic corridors are brightened by the influx of students, new and returning. I’m particularly pleased to see here at Sheffield the new intake for the Masters course in Nanoscale Science and Technology that we run jointly with the University of Leeds.

We’ve got 29 students starting this year; it’s the fourth year that the course has been running and over that time we’ve seen a steady growth in demand. I hope that reflects an appreciation of our approach to teaching the subject.

My view is that to work effectively in nanotechnology you need two things. First comes the in-depth knowledge and problem-solving ability you get from studying a traditional discipline, whether that’s a pure science, like physics or chemistry, or an applied science, like materials science, chemical engineering or electrical engineering. But then you need to learn the languages of many other disciplines, because no physicist or chemist, no matter how talented at their own subject, will be able to make much of a contribution in this area unless they are able to collaborate effectively with people with very different sets of skills. That’s why to teach our course we’ve assembled a team from many different departments and backgrounds; physicists, chemists, materials scientists, electrical engineers and molecular biologists are all represented.

Of course, the nature of nanotechnology is such that there’s no universally accepted curriculum, no huge textbook of the kind that beginning physicists and chemists are used to. The speed of development of the subject is such that we’ve got to make much more use of the primary research literature than one would for, say, a Masters course in physics. And because nanotechnology should be about practice and commercialisation as well as theory, we also refer to the patent literature, something that’s, I think, pretty uncommon in academia.

In terms of choice of subjects, we’re trying to find a balance between the hard nanotechnology of lithography and molecular beam epitaxy and the soft nanotechnology of self-assembly and bionanotechnology. The book of the course, “Nanoscale Science and Technology”, edited by my colleagues Rob Kelsall, Ian Hamley and Mark Geoghegan, will be published in January next year.

What is this thing called nanotechnology? Part 1. The Nano-scale.

Nanotechnology, of course, isn’t a single thing at all. That’s why debates about the subject often descend into mutual incomprehension, as different people use the same word to mean different things, whether it’s business types talking about fabric treatments, scientists talking about new microscopes, or posthumanists and futurists talking about universal assemblers. I’ve attempted to break the term up a little and separate out the different meanings of the word. I’ll soon put these nanotechnology definitions on my website, but I’m going to try out the draft definitions here first. First, the all-important issue of scale.

Nanotechnologies get their name from a unit of length, the nanometre. A nanometre is one billionth of a metre, but let’s try to put this in context. We could call our everyday world the macroscale. This is the world in which we can manipulate things with our bare hands, and in rough terms it covers about a factor of a thousand. The biggest things I can move about are about half a metre big (if they’re not too dense), and my clumsy fingers can’t do very much with things smaller than half a millimetre.

We’ve long had the tools to extend the range of human abilities to manipulate matter on smaller scales than this. Most important is the light microscope, which has opened up a new realm of matter – the microscale. Like the macroscale, this also embraces roughly another factor of a thousand in length scales. At the upper end, objects half a millimetre or so in size provide the link with the macroscale; still visible to the naked eye, they become much more convenient to handle with the help of a simple microscope or even a magnifying glass. At the lower end, the wavelength of light itself, around half a micrometre, gives a lower limit on the size of objects which can be discriminated even with the most sophisticated laboratory light microscope.

Below the microscale is the nanoscale. If we take as the upper limit of the nanoscale the half-micron or so that represents the smallest object that can be resolved in a light microscope, then another factor of one thousand takes us to half a nanometre. This is a very natural lower limit for the nanoscale, because it is a typical size for a small molecule. The nanoscale domain, then, in which nanotechnology operates, is one in which individual molecules are the building blocks of useful structures and devices.

These definitions are by their nature arbitrary, and it’s not worth spending a lot of time debating precise limits on length scales. Some definitions – the US National Nanotechnology Initiative provides one example – use a smaller upper limit of 100 nm. There isn’t really any fundamental reason for choosing this number over any other one, except that this definition carries the authority of President Clinton, who of course is famous for the precision of his use of language. Some other definitions attempt to attach some more precise physical significance to this upper length limit on nanotechnology, by appealing to some length at which finite size effects, usually of quantum origin, become important. This is superficially appealing but unattractive on closer examination, because the relevant length-scale on which these finite size effects become important differs substantially according to the phenomenon being looked at. And this line of reasoning leads to an absurd, but commonly held, view that the nanoscale is simply the length-scale on which quantum effects become important. This is a very unhelpful definition when one thinks about it for longer than a second or two; there are plenty of macroscopic phenomena that you can’t understand without invoking quantum mechanics – magnetism and the electronic behaviour of semiconductors are two everyday examples. And equally, many interesting nanoscale phenomena, notably virtually all of cell biology, don’t really involve quantum mechanical effects in any direct way.
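The point that the "quantum length" depends on the phenomenon can be made concrete with the thermal de Broglie wavelength, λ = h/√(2πmkBT): at room temperature it is nanometres for an electron but only hundredths of a nanometre for an atom. The masses below are standard values; the comparison itself is my illustration, not part of any official definition.

```python
import math

h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0             # room temperature, K

def thermal_wavelength(mass):
    """Thermal de Broglie wavelength: h / sqrt(2*pi*m*kB*T)."""
    return h / math.sqrt(2 * math.pi * mass * kB * T)

m_electron = 9.1093837e-31            # kg
m_carbon = 12.011 * 1.66053907e-27    # kg, one carbon atom

print(f"electron:    {thermal_wavelength(m_electron) * 1e9:.2f} nm")
print(f"carbon atom: {thermal_wavelength(m_carbon) * 1e9:.4f} nm")
```

Electrons at room temperature are quantum objects over nanometre distances, which is why quantum dots and wires work, while atoms and molecules at the same scale behave essentially classically, which is why cell biology gets by without quantum mechanics.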

So I’m going to stick to these twin definitions – it’s the nanoscale if it’s too small to resolve in an ordinary light microscope, and if it’s bigger than your typical small molecule.

None but the brave deserve the (nano)fair

I’m in St Gallen, Switzerland, in the unfamiliar environment (for an academic) of a nanotechnology trade fair. The commercialisation arm of our polymer research activities in the University of Sheffield, the Polymer Centre, is one of the 14 UK companies and organisations that are exhibiting as part of the official UK government stall at Nanofair 2004.

It’s interesting to see who’s exhibiting. The majority of exhibitors are equipment manufacturers, which very much supports one piece of conventional wisdom about nanotechnology as a business: that the first people to make money from it will be the suppliers of the tools of the trade. Perhaps the second biggest category is the countries and regions that are trying to promote themselves as desirable locations for businesses to relocate to. Companies that actually have nanotechnology products for actual consumer markets are very much in the minority, though there are certainly a few interesting ones there.

Alternative photovoltaics (dye-sensitised and/or polymer-based) are making a strong showing, helped by a lecture from Alan Heeger, largely about Konarka. This must be one of the major areas where incremental nanotechnology has the potential to make a disruptive change to the economy. A less predictable, but for me fascinating, stand was from a Swiss plastics injection moulding company called Weidmann. Injection moulding is the familiar (and very cheap) way in which many plastic items, like the little plastic toys that come in cereal boxes, are made. Weidmann are demonstrating an injection moulded part in an ordinary commodity polymer with a controlled surface topography at the level of 5-10 nanometres. To me it is stunning that such a cheap and common processing technology can be adapted (certainly with some very clever engineering) to produce nanostructured parts in this way. Early applications will be to parts with optical effects like holograms directly printed in, and, more immediately, microfluidic reactors for diagnostics and testing.

The UK has a big presence here, and our stand has some very interesting exhibitors on it. I’ll single out Nanomagnetics, which uses a naturally occurring protein to template the manufacture of magnetic nanoparticles with very precisely controlled sizes. These nanoparticles are then used either for high density data storage applications or for water purification, as removable forward osmosis agents. This is a great example of exploiting biological nanotechnology that is very much in accord with the philosophy outlined in my book Soft Machines; I should declare an interest, in that I’ve just joined the scientific advisory board of this company.

The UK government is certainly working hard to promote the interests of its nascent nanotechnology industry. Our stall is full of well-dressed and suave diplomats and civil servants. However, one of the small business exhibitors was muttering a little that if only the government were willing to spend money directly supporting companies with no-strings contracts, as the US government is doing with companies like Nanosys, then maybe the UK’s prospects would be even brighter.