Making and doing

Eric Drexler is quoted in Adam Keiper’s report from the NRC nanotechnology workshop in DC as saying:

“What’s on my wish list: … A clear endorsement of the idea that molecular machine systems that make things … with atomic precision is a natural and important goal for the development of nanoscale technologies … with the focus of that endorsement being the recognition that we can look at biology, and beyond…. It would be good to have more minds, more critical thought, more innovation, applied in those directions.”

I almost completely agree with this, particularly the bit about looking at biology and beyond. Why only almost? Because “systems that make things” should only be a small part of the story. We need systems that do things – we need to process energy, process information, and, in the vital area of nanomedicine, interact with the cells that make up humans and with their molecular components. This makes a big difference to the materials we choose to work with. Leaving aside, for the moment, the question of whether Drexler’s vision of diamondoid-based nanotechnology can be made to work at all, let’s ask the question: why diamond? It’s easy to see why you would want to use diamond for structural applications, as it is strong and stiff. But its bandgap is too big for optoelectronic applications (like solar cells), and its use in medicine will be limited by the fact that it probably isn’t very biocompatible.

In the very interesting audio clip that Adam Keiper posts on Howard Lovy’s Nanobot, Drexler goes on to compare the potential of universal, general purpose manufacture with that of general purpose computing. Who would have thought, he asks (I paraphrase from memory here), that we could have one machine that we can use to do spreadsheets, play our music and watch movies on? Who indeed? … but this technology depends on the fact that documents, music and moving pictures can all be represented by 1’s and 0’s. For the idea of general purpose manufacturing to be convincing, one would need to believe that there was an analogous way in which all material things could be represented by a simple low level code. I think this leads to an insoluble dilemma – the need to find simple low level operations drives one to use a minimum number – preferably one – basic mechanosynthesis step. But in limiting ourselves in this way, we make life very difficult for ourselves in trying to achieve the broad range of functions and actions that we are going to want these artefacts for. Material properties are multidimensional, and it’s difficult to believe that one material can meet all our needs.

Matter is not digital.

The irresistible rise of nano-gizmology

What would happen if nanotechnology suddenly went out of fashion in the academic world, all the big nano-funding initiatives dried up, and putting the nano word in grant applications doomed them to certain failure? Would all the scientists who currently label themselves nanoscientists just go back to being chemists and physicists as before? This interesting question was posed to me on Monday during a fascinating afternoon seminar and discussion with social scientists from the Institute for Environment, Philosophy and Public Policy at Lancaster University.

My first reaction was to say that nothing would change. Scientists can be a cynical bunch when it comes to funding, and it’s tempting to assume that they would just relabel their work yet again to conform with whatever the new fashion was, and carry on just as before. But on reflection my answer was that the rise of nanoscience and nanotechnology as a label in academic science has been accompanied by two real and lasting cultural changes. The first is so well-rehearsed that it’s a cliché to say it, but it is nonetheless true – nanoscientists really have got used to interdisciplinary working in a way that was very rare in academia twenty years ago (of course, it has always been the rule in industry). The second change is less obvious, though I think I first noticed it as a marked change six or seven years ago. This was a shift in emphasis away from testing theories and characterising materials towards making widgets or gizmos – things that, although usually still far away from being a real, viable product, did something or produced some functional effect. More than any use of the label “nano”, this seems to me to be a lasting change in the way scientists judge the value of their own and other people’s work; it’s certainly very much reflected in the editorial policies of the glamour journals like Nature and Science. Some will mourn the eclipse of the values of pure science, while others will anticipate a more direct economic return on our society’s investments in science as a result, but it remains to be seen what the overall outcome of this shift will be.

Nanotech at Hewlett-Packard

There’s a nice piece in Slate by Paul Boutin reporting on his trip round the Hewlett-Packard labs in Palo Alto. The opening stresses the evolutionary, short term, character of the research going on there, noting that these projects only get funded if they are going to make a fast return for the company, usually within five years. The first projects he mentions are about RFID (radio frequency identification), and these are discussed in terms of Walmart, supply chains and keeping track of your pallets. I can relate to this because my wife used to be a production planner. She used to wake up in the night worrying about whether there were enough plastic overcaps in the warehouse to pack the next week’s production, but she knew that the only way to find out for sure, despite all their smart SAP systems, was to walk down to the warehouse and look. But despite these mundane immediate applications, the technologies that will underlie RFID also have uncomfortable implications for a universal surveillance society.

The article moves on to talk about HP’s widely reported recent development of crossbar latches as a key component for molecular electronic logic circuits (see for example this BBC report, complete with a good commentary from Soft Machines’ frequent visitor, Philip Moriarty). The author rightly highlights the need to develop new, defect tolerant computer architectures if these developments in molecular electronics are to be converted into useful products. This nicely illustrates the point I made below, that in nanotechnology you may well need to develop systems architectures that accommodate the physical realities of the nanoscale, rather than designing the architecture first and hoping that you’ll be able to find low-level operations that will suit your preconceived notions.

The mechanosynthesis debate

Now that the 130 or so people who have downloaded the Moriarty/Phoenix debate about mechanosynthesis so far have had a chance to digest it, here are some thoughts that occur to me on re-reading it. This is certainly not a summary of the debate; rather these are just a few of the many issues that emerge that I think are important.

On definitions. A lot of the debate revolves around questions of how one actually defines mechanosynthesis. There’s an important general point here – one can have both narrow definitions and broad definitions, and there’s a clear role for both. For example, I have been very concerned in everything I write to distinguish between the broad concept of radical nanotechnology, and the specific realisation of a radical nanotechnology that is proposed by Drexler in Nanosystems. But we need to be careful not to imagine that a finding that supports a broad concept necessarily also validates the narrower definition. So, to use an example that I think is very important, the existence of cell biology is compelling evidence that a radical nanotechnology is possible, but it doesn’t provide any evidence that the Drexlerian version of the vision is workable. Philip’s insistence on a precise definition of mechanosynthesis in distinction from the wider class of single molecule manipulation experiments stems from his strongly held view that the latter don’t yet provide enough evidence for the former. Chris, on the other hand, is in favour of broader definitions, on the grounds that if the narrowly defined approach doesn’t work, then one can try something else. This is fair enough if one is prepared to be led to wherever the experiments take you, but I don’t think it’s consistent with having a very closely defined goal like the Nanosystems vision of diamondoid based MNT. If you let the science dictate where you go (and I don’t think you have any choice but to do this), your path will probably take you somewhere interesting and useful, but it’s probably not going to be the destination you set out towards.

On the need for low-level detail. The debate makes very clear the distinction between the high-level systems approach exemplified by Nanosystems and by the Phoenix nanofactory paper, and the need to work out the details at the “machine language” level. “Black-boxing” the low-level complications can only take you so far; at some point one needs to work out what the elementary “machine language” operations are going to be, or even whether they are possible at all. Moreover, the nature of these elementary operations can’t always be divorced from the higher level architecture. A good example comes from the operation of genetics, where the details of the interaction between DNA, RNA and proteins means that the distinction between hardware and software that we are used to can’t be sustained.

On the role of background knowledge from nanoscience. A widely held view in the MNT community is that very little research has been done in pursuit of the Drexlerian project since the publication of Nanosystems. This is certainly true in the sense that science funding bodies haven’t supported an overtly Drexlerian research project; but it neglects the huge amount of work in nanoscience that has a direct bearing, in detail, on the proposals in Nanosystems and related work. This varies from the centrally relevant work done by groups (including the Nottingham group, and a number of other groups around the world) which are actively developing the manipulation of single molecules by scanning probe techniques, to the important background knowledge accumulated by very many groups round the world in areas such as surface and cluster physics and chemical vapour deposition. This (predominantly experimental) work has greatly clarified how the world at the nanoscale works, and it should go without saying that theoretical proposals that aren’t consistent with the understanding gained in this enterprise aren’t worth pursuing. Commentators from the MNT community are scornful, with some justification, of nanoscientists who make pronouncements about the viability of the Drexlerian vision of nanotechnology without having acquainted themselves with the relevant literature, for example by reading Nanosystems. But this obligation to read the literature goes both ways.

I think the debate has moved us further forward. I think it is clear that the Freitas proposal that sparked the discussion off does have serious problems that will probably prevent its implementation in its original form. But the fact that a proposal concrete enough to sustain this detailed level of criticism has been presented is itself immensely valuable and positive, and it will be interesting to see what emerges when the proposal is refined and further scrutinised. It is also clear that, whatever the ultimate viability of this mechanosynthetic route to full MNT turns out to be (and I see no grounds to revise my sceptical position), there’s a lot of serious science to be done, and claims of a very short timeline to MNT are simply not credible.

Is mechanosynthesis feasible? The debate continues.

In my post of December 16th, Is mechanosynthesis feasible? The debate moves up a gear, I published a letter from Philip Moriarty, a nanoscientist from Nottingham University, which offered a detailed critique of a scheme for achieving the first steps towards the mechanosynthesis of diamondoid nanostructures, due to Robert Freitas. The Center for Responsible Nanotechnology’s Chris Phoenix began a correspondence with Philip responding to the critique. Chris Phoenix and Philip Moriarty have given permission for the whole correspondence to be published here. It is released in a mostly unedited form; however the originals contained some quotations from Dr K. Eric Drexler which he did not wish to be published; these have therefore been removed.

The total correspondence is long and detailed, and amounts to 56 pages in total. It’s broken up into three PDF documents:
Part I
Part II
Part III.

I’m going to refrain from adding any comment of my own for the moment, so readers can form their own judgements, though I’ll probably make some observations on the correspondence in a few days’ time.

The correspondence between Philip Moriarty and Chris Phoenix, for the time being, ends here. However Philip Moriarty has asked me to include this statement, which he has agreed with Robert Freitas:

“Freitas and Moriarty have recently agreed to continue discussions related to the fundamental science underlying mechanosynthesis and the experimental implementation of the process. These discussions will be carried out in a spirit of collaboration rather than as a debate and, therefore, will not be published on the web. In the event that this collaborative effort produces results that impact (either positively or negatively) on the future of mechanosynthesis, those results will be submitted for publication in a peer-reviewed journal.”

Converging technologies in Europe and the USA

Last Thursday saw a meeting in London to introduce to the UK a report that came out last summer on the convergence of nanotechnology, biotechnology, information technology and neuroscience. Converging technologies for a diverse Europe can essentially be thought of as the European answer to the 2002 report from the USA, Converging Technologies for Improving Human Performance. The speaker line-up, besides me, included social scientists, futurologists, an arms control expert and an official from the European Commission. What was striking to me was how much this debate was framed in terms of Europe trying to position itself somewhat apart from the USA, though perhaps this isn’t surprising in view of the broader flow of international politics at the moment.

It’s almost a cliché that public opinion is very different on the two continents, with the USA being much more uninhibited in its welcoming of new technology than the more technophobic Europeans. George Gaskell, a sociologist from the London School of Economics, presented survey data that at first seems to confirm this view. In his 2002 surveys, he found that while 50% of people in the USA were sure that nanotechnology would be positive in its outcome, only 29% of Europeans were so optimistic. But the picture isn’t as simple as it first appears; the figures for the proportion who thought that nanotechnology would make things worse were not actually that different – 4% in the USA compared to 6% in Europe. The Europeans were simply taking the attitude that they didn’t know enough to judge. The absence of any across-the-board distrust of technology is shown by a comparison of attitudes to three key technologies – nuclear energy, computers and information technology and biotechnology. The data showed almost overwhelming opposition to nuclear power, equally overwhelming enthusiasm for computers and communication technology, and a mixed picture for biotech. The key issues for acceptance prove not to be any deep enthusiasm or distrust for technology in general; it’s simply a balance of the benefits and risks together with a judgement on how much the governance and regulation of the technology can be trusted.

Where there is a big difference between Europe and the USA is in the importance of the military in driving research. Jürgen Altmann, a physicist turned arms-control expert from the University of Dortmund, is very worried about the military applications of nanotechnology, and his worries are nicely summarised in this pdf handout. His view is that the USA is currently undertaking an arms race against itself, wasting resources that could otherwise be used both to boost economic competitiveness and to counter the real threat that both the USA and Europe face by more appropriate and low-tech means. Others, of course, will differ on the nature of the threat and the best way to counter it.

The balance between civil and military research and development was also highlighted by Elie Faroult, from the Research Directorate of the European Commission, who pointed out with some glee that the EU was now considerably ahead of the USA in investment in most civil research, and that this trend is accelerating as the USA squeezes spending on non-military science. For him, this gave Europe the opportunity to develop a distinctive set of research goals which emphasised social coherence and environmental sustainability as well as economic competitiveness. But having taken the obligatory side-swipe at the USA he finished by saying that of course, looking to the future, it wasn’t the USA that Europe was in competition with. The real competitor for both the USA and Europe was China.


Nature has some very elegant and efficient solutions to the problems of making nanoscale structures, exploiting the self-assembling properties of information-containing molecules like proteins to great effect. A very promising approach to nanotechnology is to use what biology gives us to make useful nanoscale products and devices. I spent Monday visiting a nanotechnology company that is doing just that. Nanomagnetics is a Bristol-based company (I should disclose an interest here, in that I’ve just been appointed to their Science Advisory Board) which exploits the remarkable self-assembled structure of the iron-storage protein ferritin to make nanoscale magnetic particles with uses in data storage, water purification and medicine.


The illustration shows the ferritin structure; 24 individual identical protein molecules come together to form a hollow spherical shell 12 nm in diameter. The purpose of the molecule is to store iron until it is needed; iron ions enter through the pores and are kept inside the shell – given the tendency of iron to form a highly insoluble oxide, if we didn’t have this mechanism for storing the stuff our insides would literally rust up. Nanomagnetics is able to use the hollow shell that ferritin provides as a nanoscale chemical reactor, producing nanoparticles of magnetic iron oxide or other metals of great uniformity in size, and with a protein coat that both stops them sticking together and makes them biocompatible.

One simple, but rather neat, application of these particles is in water purification, in a process called forward osmosis. If you filled a bag made of a nanoporous membrane with sugar syrup, and immersed the bag in dirty water, water would be pulled through the membrane by the osmotic pressure exerted by the concentrated sugar solution. Microbes and contaminating molecules wouldn’t be able to get through the membrane, if its pores are small enough, and you end up with clean sugar solution. There’s a small company from Oregon, USA, HTI, which has commercialised just such a product. Essentially, it produces something like a sports drink from dirty or brackish water, and as such it’s started to prove its value for the military and in disaster relief situations. But what happens if you want to produce not sugar solution, but clean water? If you replace the sugar by magnetic nanoparticles then you can sweep the particles away with a magnetic field and then use them again to produce another batch of water, producing clean water from simple equipment with only a small cost in energy.
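To get a feel for why forward osmosis works at all, it’s worth putting a rough number on the driving force. For dilute, ideal solutions the van ’t Hoff relation gives the osmotic pressure as π = cRT; a concentrated syrup deviates from ideality, so this little sketch (my own illustration, not anything from Nanomagnetics or HTI) is only an order-of-magnitude estimate:

```python
# Order-of-magnitude estimate of the osmotic pressure that a solute
# exerts across a semipermeable membrane, via the van 't Hoff relation
# pi = c * R * T (valid for dilute, ideal solutions).

R = 8.314  # molar gas constant, J/(mol K)

def osmotic_pressure_pa(molar_concentration, temperature_k=298.0):
    """Osmotic pressure in pascals.

    molar_concentration is in mol/L; it is converted to mol/m^3
    so that the result comes out in SI units (Pa).
    """
    c = molar_concentration * 1000.0  # mol/L -> mol/m^3
    return c * R * temperature_k

# A 1 M sugar solution at room temperature:
pi = osmotic_pressure_pa(1.0)
print(f"{pi / 1e6:.2f} MPa")  # roughly 2.5 MPa, i.e. about 25 atmospheres
```

A couple of megapascals is far more pressure than you would sensibly generate with a pump in a field water-purification kit, which is why letting a concentrated solution pull the water through passively is such an attractive trick.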

The illustration of ferritin is taken from the Protein Data Bank’s Molecule of the Month feature. The drawing is by David S. Goodsell, based on the structure determined by Lawson et al., Nature 349 pp. 541 (1991).

Competitive Consumption

Partisans of molecular nanotechnology keep coming back to the theme of the devastation that they say will be caused to the world’s economic systems when it becomes possible to manufacture anything at no cost. Surely, they say, when goods cost nothing to make, then the money economy must wither away? I don’t accept the premise of this argument, but even if I did I think it is based on a misunderstanding of how economics works. The laws of economics, inasmuch as anything in that discipline can be described as a law, are really observations about human nature, and as such are not likely to be overturned on the basis of a mere technological advance. The key fallacy in this way of thinking is very succinctly put in an excellent book I’ve just finished: A nation of rebels: why counterculture became consumer culture, by Joseph Heath and Andrew Potter.

This book is mainly an entertaining polemic against the counterculture and the anti-globalisation movement. What’s relevant to us here is its gleeful demolition of the idea of postscarcity economics, as proposed by Herbert Marcuse and Murray Bookchin. This is the idea that once machines were able to take care of all our material needs and wants, we would be able to form a society based not on the demands of economic production, but on fellowship and love. It’s very easy to see the connection between this and the arguments made by the proponents of molecular nanotechnology.

The key concept in understanding what’s wrong with these ideas is the notion of a “positional good”. Positional goods get their value from the fact that not everyone can have them; people pay lots of money for an expensive and rare sports car like an Aston Martin, not simply because it is a nice piece of engineering, but explicitly because possession of one signals, in the view of the purchaser, something about their exalted status in society. The whole aim of much advertising and brand building is to increase the value of artefacts which often cost very little to make, by associating them with status messages of this kind. Very few people are immune to this, unless they live in cabins in the wilderness; for most of the middle class majorities of rich countries their biggest expenditure is on a house to live in, which by virtue of the importance of location and neighbourhood is an archetypal positional good.

When one realises how important positional goods are in market economies, the fallacy of the idea that molecular manufacturing would cause the end of the money economy becomes clear. In the words of Heath and Potter:

“What eventually led to the undoing of these views was the failure to appreciate the competitive nature of our consumption and the significance of positional goods. Houses in good neighborhoods, tasteful furniture, fast cars, stylish restaurants and cool clothes are all intrinsically scarce. We cannot manufacture more of them, because their value is based on the distinction they provide to consumers. The idea of overcoming scarcity through increased production is incoherent; in our society, scarcity is a social, not a material, phenomenon.”


I’m grateful to Tim Harper for some kind words about me in his column. Giving his roundup of how nanotechnology fared last year, he notes that 2004 “was also the year that the tricky issue of the Drexlerian idea of molecular manufacturing – the version popularised by the Foresight Institute – finally began to be addressed in a scientific manner”, and he mentions both this blog and my book Soft Machines in connection with this. But, as he goes on to say, “there is much work to be done, however, to build trust between the scientific and molecular nanotechnology communities”.

To build trust, you need understanding. It’s probably true that many in the scientific community have not made the effort to understand the point of view of the molecular nanotechnology community. But equally, I think that in that community there is a very widespread lack of understanding about how science works. I don’t mean this in the sense that they don’t understand the scientific method or basic scientific results; it’s the sociological aspects of science as a human enterprise I’m talking about here. You need an understanding of how science as a collective effort selects problems and makes progress in order to be able to predict and understand the ways in which nanoscience will turn into nanotechnology.

A simple example of the sort of misconception that results is the widespread view in the molecular nanotechnology community that the high profile scepticism of figures like Richard Smalley is all that stands in the way of progress towards their goal, because scientists are discouraged from pursuing these lines of enquiry for fear of their career. The truth is absolutely the opposite; there would be no surer way for a young scientist to become rich and famous than by proving Smalley wrong, and you can be confident that if someone with the right experience and the right equipment could think of a way of making a big step towards demonstrating mechanosynthesis, they would be doing it now. And if they were successful, they’d probably find space for a few kind words about Drexler in the speech they gave as they accepted their Nobel prize…

Nanotechnology and universal surveillance

What potential impact of nanotechnology on society should we worry about most? The headlines are being made by the possibility that nanoparticles are especially toxic. This is a real concern, but the problem is relatively bounded and the solutions aren’t difficult to put in place (even if those solutions are going to be bad news for laboratory rats). The primal fear is of loss of control of a technology so powerful that it could lead to the extinction of all life – the problem of grey goo. But as the hubris and lack of realism implicit in the assumption that we will, any time soon, be able to engineer what amounts to a new and superior form of life becomes more apparent, this fear should lose its force.

My major worry is in the way we’ll deal with a world in which computing and communication power is so cheap that every object and artefact can have the capability to sense its environment and interact with its neighbours. RFID and smart dust give us a good idea of which way this technology is heading, and developments in evolutionary nanotechnology are sure both to greatly increase the capability and dramatically decrease the cost of such devices.

Worries about the way this is heading are eloquently stated in an interesting new essay on nanotechnology by one of the grandest figures of US academic nanoscience, George Whitesides (the essay is in the current edition of the new Wiley nanotechnology journal, Small, but it probably needs a subscription for full text access).

“In my opinion, the most serious risk of nanotechnology comes, not from hypothetical revolutionary materials or systems, but from the uses of evolutionary nanotechnologies that are already developing rapidly ….
“Universal surveillance” – the observation of everyone and everything, in real time, everywhere; a concept suggested by those most concerned with terrorism – is not a technology that I would wish to see cloak a free society, no matter how protectively intended.”