Right and wrong lessons from biology

The most compelling argument for the possibility of a radical nanotechnology, with functional devices and machines operating at the nanoscale, is the existence of cell biology. But one can take different lessons from this. Drexler argued that we should expect to be able to do much better than cell biology if we applied the lessons of macroscale engineering, using mechanical engineering paradigms and hard materials. My argument, though, is that this fails to take into account the different physics of the nanoscale, and that evolution has optimised biology’s “soft machines” for this environment. This essay, first published in the journal Nature Nanotechnology (vol. 1, pp. 85–86, 2006; subscription required), reflects on this issue.

Nanotechnology hasn’t yet acquired a strong disciplinary identity, and as a result it is claimed by many classical disciplines. “Nanotechnology is just chemistry”, one sometimes hears, while physicists like to think that only they have the tools to understand the strange and counterintuitive behaviour of matter at the nanoscale. But biologists have perhaps the most reason to be smug – in the words of MIT’s Tom Knight “biology is the nanotechnology that works”.

The sophisticated and intricate machinery of cell biology certainly gives us a compelling existence proof that complex machines on the nanoscale are possible. But, having accepted that biology proves that one form of nanotechnology is possible, what further lessons should be learned? There are two extreme positions, and presumably a truth that lies somewhere in between.

The engineers’ view, if I can put it that way, is that nature shows what can be achieved with random design methods and a palette of unsuitable materials allocated by the accidents of history. If you take this point of view, it seems obvious that it should be fairly straightforward to make nanoscale machines whose performance vastly exceeds that of biology, by making rational choices of materials, rather than making do with what the accidents of evolution have provided, and by using the design principles we’ve learnt in macroscopic engineering.

The opposite view stresses that evolution is an extremely effective way of searching parameter space, and that in consequence we should assume that biological design solutions are likely to be close to optimal for the environment for which they’ve evolved. Where these design solutions seem odd from our point of view, their unfamiliarity is to be ascribed to the different ways in which physics works at the nanoscale. At its most extreme, this view regards biological nanotechnology not just as the existence proof for nanotechnology, but as an upper limit on its capabilities.

So what, then, are the right lessons for nanotechnology to learn from biology? The design principles that biology uses most effectively are those that exploit the special features of physics at the nanoscale in an environment of liquid water. These include some highly effective uses of self-assembly, using the hydrophobic interaction, and the principle of macromolecular shape change that underlies allostery, used both for mechanical transduction and for sensing and computing. Self-assembly, of course, is well known both in the laboratory and in industrial processes like soap-making, but synthetic examples remain very crude compared to the intricacy of protein folding. For industrial applications, biological nanotechnology offers inspiration in the area of green chemistry – promising environmentally benign processing routes to make complex, nanostructured materials based on water as a solvent and using low operating temperatures. The use of templating strategies and precursor routes widens the scope of these approaches to include final products which are insoluble in water.

But even the most enthusiastic proponents of the biological approach to nanotechnology must concede that there are branches of nanoscale engineering that biology does not seem to exploit very fully. There are few examples of the use of coherent electron transport over distances greater than a few nanometers. Some transmembrane processes, particularly those involved in photosynthesis, do exploit electron transfer down finely engineered cascades of molecules. But until the recent discovery of electron conduction in bacterial pili, longer ranged electrical effects in biology seem to be dominated by ionic rather than electronic transport. Speculations that coherent quantum states in microtubules underlie consciousness are not mainstream, to say the least, so a physicist who insists on the central role of quantum effects in nanotechnology finds biology somewhat barren.

It’s clear that there is more than one way to apply the lessons of biology to nanotechnology. The most direct route is that of bionanotechnology, in which the components of living systems are removed from their biological context and put to work in hybrid environments. Many examples of this approach (which NYU’s Ned Seeman has memorably called biokleptic nanotechnology) are now in the literature, using biological nanodevices such as molecular motors or photosynthetic complexes. In truth, the newly emerging field of synthetic biology, in which functionality is added back in a modular way to a stripped down host organism, is applying this philosophy at the level of systems rather than devices.

This kind of synthetic biology is informed by what’s essentially an engineering sensibility – it is sufficient to get the system to work in a predictable and controllable way. Some physicists, though, might want to go further, taking inspiration from Richard Feynman’s slogan “What I cannot create I do not understand”. Will it be possible to have a biomimetic nanotechnology, in which the design philosophy of cell biology is applied to the creation of entirely synthetic components? Such an approach will be formidably difficult, requiring substantial advances both in the synthetic chemistry needed to create macromolecules with precisely specified architectures, and in the theory that will allow one to design molecular architectures that will yield the structure and function one needs. But it may have advantages, particularly in broadening the range of environmental conditions in which nanosystems can operate.

The right lessons for nanotechnology to learn from biology might not always be the obvious ones, but there’s no doubting their importance. Can the traffic ever go the other way – will there be lessons for biology to learn from nanotechnology? It seems inevitable that the enterprise of doing engineering with nanoscale biological components must lead to a deeper understanding of molecular biophysics. I wonder, though, whether there might not be some deeper consequences. What separates the two extreme positions on the relevance of biology to nanotechnology is a difference in opinion on the issue of the degree to which our biology is optimal, and whether there could be other, fundamentally different kinds of biology, possibly optimised for a different set of environmental parameters. It may well be a vain expectation to imagine that a wholly synthetic nanotechnology could ever match the performance of cell biology, but even considering the possibility represents a valuable broadening of our horizons.

Reactions to “Rupturing the Nanotech Rapture”

It’s a couple of weeks since my article in the current edition of IEEE Spectrum magazine (the Singularity Special) – “Rupturing the Nanotech Rapture” – appeared, and it’s generated a certain amount of discussion on the nanotech blogs. Dexter Johnson, on Tech Talk (IEEE Spectrum’s own blog) observes that “In all it’s a deftly diplomatic piece, at once dispelling some of the myths surrounding the timeline for molecular nanotechnology contributing to the Singularity while both complementing and urging on the early pioneers of its concept.” I’m very happy with this characterisation.

On Nanodot, the blog of the Foresight Institute, the piece prompts the question: “Which way(s) to advanced nanotechnology?” The answer is diplomatic: “Certainly the “soft machines” approach to nanotechnology holds great promise for the near term, while the diamondoid mechanosynthesis approach is only in the very early stages of computer simulation.” This certainly captures the relatively slow progress to date of diamondoid mechanosynthesis, and attracts the scorn of nanobusinessman Tim Harper, who writes on TNTlog “Perhaps more roadkill than tortoise to nanoscience’s hare is diamondoid mechanosynthesis, beloved of the Drexlerians, which doesn’t seem to have made any progress whatsoever, and increasingly resembles a cross between a south sea cargo cult and Waiting for Godot.”

Over on the Center for Responsible Nanotechnology, the Nanodot piece prompts the question “Which way from here?” (though CRN doesn’t actually mention me or the Spectrum piece directly). The question isn’t answered – “CRN also remains agnostic about whether a top-down or bottom-up angle or a soft/wet or hard/dry approach will be more successful.” This doesn’t seem entirely consistent with their previous published positions but there we are.

The longest response comes from Michael Anissimov’s blog, Accelerating Future. This runs to several pages, and deserves a considered response, which is coming soon.

Coming soon (or not)

Are we imminently facing The Singularity? This is the hypothesised moment of technological transcendence, in the concept introduced by mathematician and science fiction writer Vernor Vinge, when accelerating technological change leads to a recursively self-improving artificial intelligence of superhuman capabilities, with literally unknowable and ineffable consequences. The most vocal proponent of this eschatology is Ray Kurzweil, whose promotion of the idea takes to the big screen this year with the forthcoming release of the film The Singularity is Near.

Kurzweil describes the run-up to The Singularity: “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it … Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity … and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.” Where will we go from here? To The Singularity – “We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it”. This will take place, according to Kurzweil, in the year 2045.

The film is to be a fast-paced documentary, but to leaven the interviews with singularitarian thinkers like Aubrey de Grey, Eric Drexler and Eliezer Yudkowsky, there’ll be a story-line to place the technology in social context. This follows the touching struggle of Kurzweil’s female, electronic alter ego, Ramona, to achieve full person-hood, foiling an attack of self-replicating nanobots on the way, before finally being coached to pass a Turing test by self-help guru Tony Robbins.

For those who might prefer a less dramatic discussion of The Singularity, IEEE Spectrum is running a special report on the subject in its June edition. According to a press release (via Nanowerk), “the editors invited articles from half a dozen people who have worked on and written about subjects central to the singularity idea in all its loopy glory. They encompass not just hardware and wetware but also economics, consciousness, robotics, nanotechnology, and philosophy.” One of those writers is me; my article, on nanotechnology, ended up with the title “Rupturing the Nanotech Rapture”.

We’ll need to wait a week or two to read the articles – I’m particularly looking forward to reading Christof Koch’s article on machine consciousness, and Alfred Nordmann’s argument against “technological fabulism” and the “baseless extrapolations” it rests on. Of course, even before reading the articles, transhumanist writer Michael Anissimov fears the worst.

Update. The IEEE Spectrum Singularity Special is now online, including my article, Rupturing the Nanotech Rapture. (Thanks to Steven for pointing this out in the comments).

Nanotechnology and visions of the future (part 2)

This is the second part of an article I was asked to write to explain nanotechnology and the debates surrounding it to a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. The first installment can be read here.

Ideologies

There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem to be very technical in nature. So a question about whether a Drexler-style assembler is technically feasible can rapidly descend into details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the argument, which are actually based on clashes of ideology. What are the ideological divisions that underlie debates about nanotechnology?

Transhumanism
Underlying the most radical visions of nanotechnology is an equally radical ideology – transhumanism. The basis of this movement is a teleological view of human progress which views technology as the vehicle, not just for the improvement of the lot of humanity, but for the transcendence of those limitations that non-transhumanists would consider to be an inevitable part of the human condition. The most pressing of these limitations is, of course, death, so transhumanists look forward to nanotechnology providing a permanent solution to this problem. In the first instance, this will be effected by nanomedicine, which they anticipate will make it possible to repair damage cell by cell. Beyond this, some transhumanists believe that computers of such power will become available that they will constitute true artificial intelligence. At this point, they imagine a merging of human and machine intelligence, in a way that would effectively constitute the evolution of a new and improved version of humankind.

The notion that the pace of technological change is continually accelerating is an article of faith amongst transhumanists. This leads to the idea that this accelerating rate of change will lead to a point beyond which the future is literally inconceivable. This point they refer to as “the singularity”, and discussions of this hypothetical event take on a highly eschatological tone. This is captured in science fiction writer Cory Doctorow’s dismissive but apt phrase for the singularity: “the rapture of the nerds”.

This worldview carries with it the implication that an accelerating pace of innovation is not just a historical fact, but also a moral imperative. This is because it is through technology that humanity will achieve its destiny, which is nothing less than to transcend its own current physical and mental limitations. The achievement of radical nanotechnology is central to this project, and for this reason transhumanists tend to share a strong conviction not only that radical nanotechnology along Drexlerian lines is possible, but also that its development is morally necessary.

Transhumanism can be considered to be the extreme limit of views that combine strong technological determinism with a highly progressive view of the development of humanity. It is a worldwide movement, but it’s probably fair to say that its natural home is California, its main constituency is amongst those involved in information technology, and it is associated predominantly, if not exclusively, with a strongly libertarian streak of politics, though paradoxically not dissimilar views seem to be attractive to a certain class of former Marxists.

Given that transhumanism as an ideology does not seem to have a great deal of mass appeal, it’s tempting to underplay its importance. This may be a mistake; amongst its adherents are a number of figures with very high media profiles, particularly in the United States, and transhumanist ideas have entered mass culture through science fiction, films and video games. Certainly some conservative and religious figures have felt threatened enough to express some alarm, notably Francis Fukuyama, who has described transhumanism as “the world’s most dangerous idea”.

Global capitalism and the changing innovation landscape
If it is the radical futurism of the transhumanists that has put nanotechnology into popular culture, it is the prospect of money that has excited business and government. Nanotechnology is seen by many worldwide as the major driver of economic growth over the next twenty years, filling the role that information technology has filled over the last twenty years. Breathless projections of huge new markets are commonplace, with the prediction by the US National Nanotechnology Initiative of a trillion dollar market for nanotechnology products by 2015 being the most notorious of these. It is this kind of market projection that underlies a worldwide spending boom on nanotechnology research, which encompasses both the established science and technology powerhouses like the USA, Germany and Japan, but also fast developing countries like China and India.

The emergence of nanotechnology has corresponded with some other interesting changes in the commercial landscape in technologically intensive sectors of the economy. The types of incremental nanotechnology that have been successfully commercialised so far have involved nanoparticles, such as the ones used in sunscreens, or coatings, of the kind used in stain-resistant fabrics. This sort of innovation is the province of the speciality chemicals sector, and one cynical view of the prominence of the nanotechnology label amongst new and old companies is that it has allowed companies in this rather unfashionable sector of the market to rebrand themselves as being part of the newest new thing, with correspondingly higher stock market valuations and easier access to capital. On the other hand, this does perhaps signal a more general change in the way science-driven innovations reach the market.

Many of the large industrial conglomerates that were such prominent parts of the industrial landscape in Western countries up to the 1980s have been broken up or drastically shrunk. Arguably, the monopoly rents that sustained these combines were what made possible the very large and productive corporate laboratories that were the source of much innovation at that time. This has been replaced by a much more fluid scene in which many functions of companies, including research and innovation, have been outsourced. In this landscape, one finds nanotechnology companies like Oxonica, which are essentially holding companies for intellectual property, with functions that in the past would have been regarded as of core importance, such as manufacturing and marketing, outsourced to contractors, often located in different countries.

Even the remaining large companies have embraced the concept of “open innovation”, in which research and development is regarded as a commodity to be purchased on the open market (and, indeed, outsourced to low cost countries) rather than a core function of the corporation. It is in this light that one should understand the new prominence of intellectual property as something fungible and readily monetised. Universities and other public research institutes, strongly encouraged to seek new sources of funding other than direct government support, have made increasing efforts to spin-out new companies based on intellectual property developed by academic researchers.

In the light of all this, it’s easy to see nanotechnology as one aspect of a more general shift to what the social scientist Michael Gibbons has called Mode II knowledge production[4]. In this view, traditional academic values are being eclipsed by a move to more explicitly goal-oriented and highly interdisciplinary research, in which research priorities are set not by the values of the traditional disciplines, but by perceived market needs and opportunities. It is clear that this transition has been underway for some time in the life sciences, and in this view the emergence of nanotechnology can be seen as a spread of these values to the physical sciences.

Environmentalist opposition
In the UK at least, the opposition to nanotechnology has been spearheaded by two unlikely bedfellows. The issue was first propelled into the news by the intervention of Prince Charles, who raised the subject in newspaper articles in 2003 and 2004. These articles directly echoed concerns raised by the small campaigning group ETC[5]. ETC cast nanotechnology as a direct successor to genetic modification; to summarise this framing, whereas in GM scientists had directly intervened in the code of life, in nanotechnology they meddle with the very atomic structure of matter itself. ETC’s background included a strong record of campaigning on behalf of third world farmers against agricultural biotechnology, so in their view nanotechnology, with its spectre of the possible patenting of new arrangements of atoms and the potential replacement of commodities such as copper and cotton by nanoengineered substitutes controlled by multinationals, was to be opposed as an intrinsic part of the agenda of globalisation. Complementing this rather abstract critique was a much more concrete concern that nanoscale materials might be more toxic than their conventional counterparts, and that current regulatory regimes for the control of environmental exposure to chemicals might not adequately recognise these new dangers.

The latter concern has gained a considerable degree of traction, largely because there has been a very widespread degree of consensus that the issue has some substance. At the time of the Prince’s intervention in the debate (and quite possibly because of it) the UK government commissioned a high-level independent report on the issue from the Royal Society and the Royal Academy of Engineering. This report recommended a program of research and regulatory action on the subject of possible nanoparticle toxicity[6]. Public debate about the risks of nanotechnology has largely focused on this issue, fuelled by a government response to the Royal Society that has been widely considered to be quite inadequate. However, it is possible to regret that the debate has become so focused on this rather technical issue of risk, to the exclusion of wider issues about the potential impacts of nanotechnology on society.

To return to the more fundamental worldviews underlying this critique of nanotechnology, whether they be the rather romantic, ruralist conservatism of the Prince of Wales, or the anti-globalism of ETC, the common feature is a general scepticism about the benefits of scientific and technological “progress”. An extremely eloquent exposition of one version of this point of view is to be found in a book by US journalist Bill McKibben[7]. The title of McKibben’s book – “Enough” – is a succinct summary of its argument; surely we now have enough technology for our needs, and new technology is likely only to lead to further spiritual malaise, through excessive consumerism, or in the case of new and very powerful technologies like genetic modification and nanotechnology, to new and terrifying existential dangers.

Bright greens
Despite the worries about the toxicology of nanoscale particles, and the involvement of groups like ETC, it is notable that all-out opposition to nanotechnology has not yet fully crystallised. In particular, groups such as Greenpeace have not yet articulated a position of unequivocal opposition. This reflects the fact that nanotechnology really does seem to have the potential to provide answers to some pressing environmental problems. For example, there are real hopes that it will lead to new types of solar cells that can be produced cheaply in very large areas. Applications of nanotechnology to problems of water purification and desalination have obvious potential impacts in the developing world. Of course, these kinds of problems have major political and social dimensions, and technical fixes by themselves will not be sufficient. However, the prospects that nanotechnology may be able to make a significant contribution to sustainable development have proved convincing enough to keep mainstream environmental movements at least neutral on the issue.

While some mainstream environmentalists may still remain equivocal in their view of nanotechnology, another group seems to be embracing new technologies with some enthusiasm as providing new ways of maintaining high standards of living in a fully sustainable way. Such “bright greens” dismiss the rejection of industrialised economies and the yearning to return to a rural lifestyle implicit in the “deep green” worldview, and look to the use of new technology, together with imaginative design and planning, to create sustainable urban societies[8]. In this point of view, nanotechnology may help, not just by enabling large scale solar power, but by facilitating an intrinsically less wasteful industrial ecology.

Conclusion

If there is (or indeed, ever was) a time in which there was an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”. This immediately raises the question “whose goals?” It is this question that underlies recent calls for a greater degree of democratic involvement in setting scientific priorities[9]. It is important that these debates don’t simply concentrate on technical issues. Nanotechnology provides a fascinating and evolving example of the complexity of the interaction between science, technology and wider currents in society. Nanotechnology, with other new and emerging technologies, will have a huge impact on the way society develops over the next twenty to fifty years. Recognising the importance of this impact does not by any means imply that one must take a technologically deterministic view of the future, though. Technology co-evolves with society, and the direction it takes is not necessarily pre-determined. Underlying the directions in which it is steered are a set of competing visions about the directions society should take. These ideologies, which often are left implicit and unexamined, need to be made explicit if a meaningful discussion of the implications of the technology is to take place.

[4] Gibbons, M, et al. (1994) The New Production of Knowledge. London: Sage.
[5] David Berube (in his book Nano-hype, Prometheus, NY 2006) explicitly links the two interventions, and identifies Zac Goldsmith, millionaire organic farmer and editor of “The Ecologist” magazine, as the man who introduced Prince Charles to nanotechnology and the ETC critique. This could be significant, in view of Goldsmith’s current prominence in Conservative Party politics.
[6] Nanoscience and nanotechnologies: opportunities and uncertainties, Royal Society and Royal Academy of Engineering, available from http://www.nanotec.org.uk/finalReport.htm
[7] Enough: Staying Human in an Engineered Age, Bill McKibben, Henry Holt, NY (2003)
[8] For a recent manifesto, see Worldchanging: a user’s guide for the 21st century, Alex Steffen (ed.), Harry N. Abrams, NY (2006)
[9] See for example See-through Science: why public engagement needs to move upstream, Rebecca Willis and James Wilsdon, Demos (2004)

Nanotechnology and visions of the future (part 1)

Earlier this year I was asked to write an article explaining nanotechnology and the debates surrounding it for a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. Here is the unedited version, in installments. Regular readers of the blog will be familiar with most of the arguments already, but I hope they will find it interesting to see it all in one place.

Introduction

Few new technologies have been accompanied by such expansive promises of their potential to change the world as nanotechnology. For some, it will lead to a utopia, in which material want has been abolished and disease is a thing of the past, while others see apocalypse and even the extinction of the human race. Governments and multinationals round the world see nanotechnology as an engine of economic growth, while campaigning groups foresee environmental degradation and a widening of the gap between the rich and poor. But at the heart of these arguments lies a striking lack of consensus about what the technology is or will be, what it will make possible and what its dangers might be. Technologies don’t exist or develop in a vacuum, and nanotechnology is no exception; arguments about the likely, or indeed desirable, trajectory of the technology are as much about their protagonists’ broader aspirations for society as about nanotechnology itself.

Possibilities

Nanotechnology is not a single technology in the way that nuclear technology, agricultural biotechnology, or semiconductor technology are. There is, as yet, no distinctive class of artefacts that can be unambiguously labelled as the product of nanotechnology. It is still, by and large, an activity carried out in laboratories rather than factories, yet the distinctive output of nanotechnology is the production and characterisation of some kind of device, rather than the kind of furthering of fundamental understanding that we would expect from a classical discipline such as physics or chemistry.

What unites the rather disparate group of applied sciences that are referred to as nanotechnologies is simply the length-scale on which they operate. Nanotechnology concerns the creation and manipulation of objects whose size lies somewhere between a nanometer and a few hundred nanometers. To put these numbers in context, it’s worth remembering that as unaided humans, we operate over a range of length-scales that spans a factor of a thousand or so, which we could call the macroscale. Thus the largest objects we can manipulate unaided are about a meter or so in size, while the smallest objects we can manipulate comfortably are about one millimeter. With the aid of light microscopes and tools for micromanipulation, we can also operate on another set of smaller lengthscales, which also spans a factor of a thousand. The upper end of the microscale is thus defined by a millimetre, while the lower end is defined by objects about a micron in size. This is roughly the size of a red blood cell or a typical bacterium, and is about the smallest object that can be easily discerned in a light microscope.

The nanoscale is smaller yet. A micron is one thousand nanometers, and one nanometer is about the size of a medium size molecule. So we can think of the lower limit of the nanoscale as being defined by the size of individual atoms and molecules, while the upper limit is defined by the resolution limits of light microscopes (this limit is somewhat more vague, and one sometimes sees apparently more exact definitions, such as 100 nm, but these in my view are entirely arbitrary).
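For readers who like to see the arithmetic laid out, the nested factor-of-a-thousand scales described above can be sketched as a short calculation (the values are the rough, order-of-magnitude endpoints from the text, not precise definitions):

```python
# Length-scale hierarchy from the text, expressed in nanometers.
# Integers are used so the arithmetic is exact.
NM = 1                  # ~1 nm: a medium-sized molecule (lower end of the nanoscale)
MICRON = 10**3 * NM     # ~1 micron: a red blood cell or bacterium (light-microscope limit)
MM = 10**3 * MICRON     # ~1 mm: smallest object we manipulate comfortably unaided
METER = 10**3 * MM      # ~1 m: largest object we manipulate unaided

# Each scale (macro, micro, nano) spans roughly a factor of a thousand:
print(METER // MM, MM // MICRON, MICRON // NM)  # → 1000 1000 1000
```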

A number of special features make operating in the nanoscale distinctive. Firstly, there is the question of the tools one needs to see nanoscale structures and to characterise them. Conventional light microscopes cannot resolve structures this small. Electron microscopes can achieve atomic resolution, but they are expensive, difficult to use and prone to artefacts. A new class of techniques – scanning probe microscopies such as scanning tunnelling microscopy and atomic force microscopy – have recently become available which can probe the nanoscale, and the uptake of these relatively cheap and accessible methods has been a big factor in creating the field of nanotechnology.

More fundamentally, the properties of matter themselves often change in interesting and unexpected ways when their dimensions are shrunk to the nanoscale. As a particle becomes smaller, it becomes proportionally more influenced by its surface, which often leads to increases in chemical reactivity. These changes may be highly desirable, yielding, for example, better catalysts for effecting chemical transformations more efficiently, or undesirable, in that they can lead to increased toxicity. Quantum mechanical effects can become important, particularly in the way electrons and light interact, and this can lead to striking and useful effects such as size-dependent colour changes. (It’s worth stressing here that while quantum mechanics is counter-intuitive and somewhat mysterious to the uninitiated, it is very well understood and produces definite and quantitative predictions. One sometimes reads that “the laws of physics don’t apply at the nanoscale”. This of course is quite wrong; the laws apply just as they do on any other scale, but sometimes they have different consequences). Brownian motion – the continuous, restless activity that is the manifestation of heat energy at the nanoscale – dominates. These differences in the way physics works at the nanoscale offer opportunities to achieve new effects, but also mean that our intuitions may not always be reliable.
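The growing influence of the surface can be made concrete with a little geometry. For a sphere, the surface-to-volume ratio is 3/r, so it grows in inverse proportion to the radius; a quick sketch with illustrative numbers (mine, not from the text):

```python
def surface_to_volume(radius):
    """Sphere: surface/volume = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    return 3.0 / radius

# Shrink a particle from a micron across to ten nanometres across:
micron_particle = surface_to_volume(0.5e-6)  # radius 0.5 micron
nano_particle = surface_to_volume(5e-9)      # radius 5 nm

# The same volume of material now presents a hundred times more surface,
# which is why reactivity (and sometimes toxicity) can rise so sharply.
print(nano_particle / micron_particle)
```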

One further feature of the nanoscale is that it is the length scale on which the basic machinery of biology operates. Modern molecular biology and biophysics have revealed a great deal about the sub-cellular apparatus of life, uncovering the structure and mode of operation of the astonishingly sophisticated molecular-scale machines that are the basis of all organisms. This is significant in a number of ways. Cell biology provides an existence proof that it is possible to make sophisticated machines on the nanoscale and it provides a model for making such machines. It even provides a toolkit of components that can be isolated from living cells and reassembled in synthetic contexts – this is the enterprise of bionanotechnology. The correspondence of length scales also brings hope that nanotechnology will make it possible to make very specific and targeted interventions into biological systems, leading, it is hoped, to new and powerful methods for medical diagnostics and therapeutics.

Nanotechnology, then, is an eclectic mix of disciplines, including elements of chemistry, physics, materials science, electrical engineering, biology and biotechnology. The way this new discipline has emerged from many existing disciplines is itself very interesting, as it illustrates an evolution of the way science is organised and practised that has occurred largely in response to external events.

The founding myth of nanotechnology places its origin in a lecture given by the American physicist Richard Feynman in 1959, published in 1960 under the title “There’s plenty of room at the bottom”. This didn’t explicitly use the word nanotechnology, but it expressed in visionary and exciting terms the many technical possibilities that would open up if one were able to manipulate matter and make engineering devices on the nanoscale. This lecture is widely invoked by enthusiasts for nanotechnology of all types as laying down the fundamental challenges of the subject, its importance endorsed by the iconic status of Feynman as perhaps the greatest native-born American physicist. However, it seems that the identification of this lecture as a foundational document is retrospective, as there is not much evidence that it made a great deal of impact at the time. Feynman himself did not devote very much further work to these ideas, and the paper was rarely cited until the 1990s.

The word nanotechnology itself was coined by the Japanese scientist Norio Taniguchi in 1974 in the context of ultra-high precision machining. However, the writer who unquestionably propelled the word and the idea into the mainstream was K. Eric Drexler. Drexler wrote a popular and bestselling book “Engines of Creation”, published in 1986, which launched a futuristic and radical vision of a nanotechnology that transformed all aspects of society. In Drexler’s vision, which explicitly invoked Feynman’s lecture, tiny assemblers would be able to take apart and put together any type of matter atom by atom. It would be possible to make any kind of product or artefact from its component atoms at virtually no cost, leading to the end of scarcity, and possibly the end of the money economy. Medicine would be revolutionised; tiny robots would be able to repair the damage caused by illness or injury at the level of individual molecules and individual cells. This could lead to the effective abolition of ageing and death, while a seamless integration of physical and cognitive prostheses would lead to new kinds of enhanced humans. On the downside, free-living, self-replicating assemblers could escape into the wild, outcompete natural life-forms by virtue of their superior materials and design, and transform the earth’s ecosphere into “grey goo”. Thus, in the vision of Drexler, nanotechnology was introduced as a technology of such potential power that it could lead either to the transfiguration of humanity or to its extinction.

There are some interesting and significant themes underlying this radical, “Drexlerite” conception of nanotechnology. One of them is the idea of matter as software. Implicit in Drexler’s worldview is the idea that the nature of all matter can be reduced to a set of coordinates of its constituent atoms. Just as music can be coded in digital form on a CD or MP3 file, and moving images can be reduced to a string of bits, it’s possible to imagine any object, whether an everyday tool, a priceless artwork, or even a natural product, being coded as a string of atomic coordinates. Nanotechnology, in this view, provides an interface between the software world and the physical world; an “assembler” or “nanofactory” generates an object just as a digital printer reproduces an image from its digital, software representation. It is this analogy that seems to make the Drexlerian notion of nanotechnology so attractive to the information technology community.

Predictions of what these “nanofactories” might look like have a very mechanistic feel to them. “Engines of Creation” had little in the way of technical detail supporting it, and included some imagery that felt quite organic and biological. However, following the popular success of “Engines”, Drexler developed his ideas at a more detailed level, publishing another, much more technical book in 1992, called “Nanosystems”. This develops a conception of nanotechnology as mechanical engineering shrunk to atomic dimensions, and it is in this form that the idea of nanotechnology has entered the popular consciousness through science fiction, films and video games. Perhaps the best of all these cultural representations is the science fiction novel “The Diamond Age” by Neal Stephenson, whose conscious evocation of a future shaped by a return to Victorian values rather appropriately mirrors the highly mechanical feel of Drexler’s conception of nanotechnology.

The next major development in nanotechnology was arguably political rather than visionary or scientific. In 2000, President Clinton announced a National Nanotechnology Initiative, with funding of $497 million a year. This initiative survived, and even thrived on, the change of administration in the USA, receiving further support, and funding increases from President Bush. Following this very public initiative from the USA, other governments around the world, and the EU, have similarly announced major funding programs. Perhaps the most interesting aspect of this international enthusiasm for nanotechnology at government level is the degree to which it is shared by countries outside those parts of North America, Europe and the Pacific Rim that are traditionally associated with a high intensity of research and development. India, China, Brazil, Iran and South Africa have all designated nanotechnology as a priority area, and in the case of China at least there is some evidence that their performance and output in nanotechnology is beginning to approach or surpass that of some Western countries, including the UK.

Some of the rhetoric associated with the US National Nanotechnology Initiative in its early days was reminiscent of the vision of Drexler – notably, an early document was entitled “Nanotechnology: shaping the world atom by atom”. Perhaps it was useful that such a radical vision for the world changing potential of nanotechnology was present in the background; even if it was not often explicitly invoked, neither did scientists go out of their way to refute it.

This changed in September 2001, when a special issue of the American popular science magazine “Scientific American” contained a number of contributions that were stingingly critical of the Drexler vision of nanotechnology. The most significant of these were by the Harvard nano-chemist George Whitesides, and the Rice University chemist Richard Smalley. Both argued that the Drexler vision of nanoscale machines was simply impossible on technical grounds. Smalley’s contribution was perhaps the most resonant; Smalley had won a Nobel prize for his discovery of a new form of nanoscale carbon, buckminsterfullerene[1], and so his contribution carried significant weight.

The dispute between Smalley and Drexler ran for a while longer, with a published exchange of letters, but its tone became increasingly vituperative. Nonetheless, the result has been that Drexler’s ideas have been largely discredited in both scientific and business circles. The attitude of many scientists is summed up by IBM’s Don Eigler, the first person to demonstrate the controlled manipulation of individual atoms: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worst. There may be scientists who feel otherwise, I just haven’t run into them.”[2]

Drexler has thus become a very polarising figure. My own view is that this is unfortunate. I believe that Drexler and his followers have greatly underestimated the technical obstacles in the way of his vision of shrunken mechanical engineering. Drexler does deserve credit, though, for pointing out that the remarkable nanoscale machinery of cell biology does provide an existence proof that a sophisticated nanotechnology is possible. However, I think he went on to draw the wrong conclusion from this. Drexler’s position is essentially that we will be able greatly to surpass the capabilities of biological nanotechnology by using rational engineering principles, rather than the vagaries of evolution, to design these machines, and by using stiff and strong materials such as diamond rather than the soft and floppy proteins and membranes of biology. I believe that this fails to recognise the fact that physics does look very different at the nanoscale, and that the design principles used in biology are optimised by evolution for this different environment[3]. From this, it follows that a radical nanotechnology might well be possible, but that it will look much more like biology than engineering.

Whether or in what form radical nanotechnology turns out to be possible, much of what is currently marketed as nanotechnology is very much more incremental in character. Products such as nano-enabled sunscreens, anti-stain fabric coatings, or “anti-ageing” creams certainly do not have anything to do with sophisticated nanoscale machines; instead they feature materials, coatings and structures which have some dimensions controlled on the nanoscale. These are useful and even potentially lucrative products, but they certainly do not represent any discontinuity with previous technology.

Between the mundane current applications of incremental nanotechnology, and the implausible speculations of the futurists, there are areas in which it is realistic to hope for substantial impacts from nanotechnology. Perhaps the biggest impacts will be seen in the three areas of energy, healthcare and information technology. It’s clear that there will be a huge emphasis in the coming years on finding new, more sustainable ways to obtain and transmit energy. Nanotechnology could make many contributions in areas like better batteries and fuel cells, but arguably its biggest impact could be in making solar energy economically viable on a large scale. The problem with conventional solar cells is not efficiency, but cost and manufacturing scalability. Plenty of solar energy lands on the earth, but the total area of conventional solar cells produced a year is orders of magnitude too small to make a significant dent in the world’s total energy budget. New types of solar cell using nanotechnology, and drawing inspiration from the natural process of photosynthesis, are in principle compatible with large area, low cost processing techniques like printing, and it’s not unrealistic to imagine this kind of solar cell being produced in huge plastic sheets at very low cost. In medicine, if the vision of cell-by-cell surgery using nanosubmarines isn’t going to happen, the prospect of the effectiveness of drugs being increased and their side-effects greatly reduced through the use of nanoscale delivery devices is much more realistic. Much faster and more accurate diagnosis of diseases is also in prospect.

One area in which nanotechnology can already be said to be present in our lives is information technology. The continuous miniaturisation of computing devices has already reached the nanoscale, and this is reflected in the growing impact of information technology on all aspects of the life of most people in the West. It’s interesting that the economic driving force for the continued development of information technologies is no longer computing in its traditional sense, but largely entertainment, through digital music players and digital imaging and video. The continual shrinking of current technologies will probably continue through the dynamic of Moore’s law for ten or fifteen years, allowing at least another hundred-fold increase in computing power. But at this point a number of limits, both physical and economic, are likely to provide serious impediments to further miniaturisation. New nanotechnologies may alter this picture in two ways. It is possible, but by no means certain, that entirely new computing concepts such as quantum computing or molecular electronics may lead to new types of computer of unprecedented power, permitting the further continuation or even acceleration of Moore’s law. On the other hand, developments in plastic electronics may make it possible to make computers that are not especially powerful, but which are very cheap or even disposable. It is this kind of development that is likely to facilitate the idea of “ubiquitous computing” or “the internet of things”, in which it is envisaged that every artefact and product incorporates a computer able to sense its surroundings and to communicate wirelessly with its neighbours. 
One can see that as a natural, even inevitable, development of technologies like the radio frequency identification devices (RFID) already used as “smart barcodes” by shops like Walmart, but it is clear also that some of the scenarios envisaged could lead to serious concerns about loss of privacy and, potentially, civil liberties.
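The “hundred-fold in ten to fifteen years” figure above follows directly from the usual rule-of-thumb statement of Moore’s law, a doubling every eighteen to twenty-four months. A rough check (my own arithmetic; the doubling period is a convention, not a measured constant):

```python
def moore_factor(years, doubling_months):
    """Multiplicative growth after `years` of doubling every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Ten years at an 18-month doubling already gives roughly a hundred-fold;
# fifteen years at a slower 24-month doubling gives nearer two-hundred-fold.
print(moore_factor(10, 18))
print(moore_factor(15, 24))
```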

[1] Nobel Prize for chemistry, 1996, shared with his Rice colleague Robert Curl and the British chemist Sir Harold Kroto, from Sussex University.
[2] Quoted by Chris Toumey in “Reading Feynman Into Nanotech: Does Nanotechnology Descend From Richard Feynman’s 1959 Talk?” (to be published).
[3] This is essentially the argument of my own book “Soft Machines: Nanotechnology and life”, R.A.L. Jones, OUP (2004).

To be continued…

Optimism and pessimism in Norway

I’m in Bergen, Norway, at a conference, Nanomat 2007, run by the Norwegian Research Council. The opening pair of talks – from Wade Adams, of Rice University and Jürgen Altmann, from Bochum, presented an interesting contrast of nano-optimism and nano-pessimism. Here are my notes on the two talks, hopefully more or less reflecting what was said without too much editorial alteration.

The first talk was from Wade Adams, the director of Rice University’s Richard E. Smalley Institute, with the late Richard Smalley’s message “Nanotechnology and Energy: Be a scientist and save the world”. Adams gave the historical background to Smalley’s interest in energy, which began with a talk from a Texan oilman explaining how rapidly oil and gas were likely to run out. Thinking positively, if one has cheap, clean energy, most of the problems of the world – lack of clean water, food supply, the environment, even poverty and war – are soluble. This was the motivation for Smalley’s focus on clean energy as the top priority for a technological solution. It’s interesting that climate change and greenhouse gases were not a primary motivation for him; on the other hand he was strongly influenced by Hubbert (see http://www.princeton.edu/hubbert) and his theory of peak oil. Of course, the peak oil theory is controversial (see a recent article in Nature – That’s oil, folks, subscription needed – for an overview of the arguments), but whether oil production has already peaked, as the doomsters suggest, or the peak is postponed to 2030, it’s a problem we will face at some time or other. On the pessimistic side, Adams cited another writer – Matt Simmons – who maintains that oil production in Saudi Arabia – usually considered the reserve of last resort – has already peaked.

Meanwhile on the demand side, we are looking at increasing pressure. Currently 2 billion people have no electricity, 2 billion people rely on biomass for heating and cooking, the world’s population is still increasing and large countries such as India and China are industrialising fast. One should also remember that oil has more valuable uses than simply to be burnt – it’s the vital feedstock for plastics and all kinds of other petrochemicals.

Summarising the figures, the world (in 2003) consumed energy at a rate of 14 terawatts, the majority in the form of oil. By 2050, we’ll need between 30 and 60 terawatts. This can only happen if there is a dramatic change – for example renewable energy stepping up to deliver serious (i.e. measured in terawatts) amounts of power. How can this happen?

The first place to look is probably efficiencies. In the United States, about 60% of energy is currently simply wasted, so simple measures such as using low energy light bulbs and having more fuel-efficient cars can take us a long way.

On the supply side, we need to be hard-headed about evaluating the claims of various technologies in the light of the quantities needed. Wind is probably good for a couple of terawatts at most, and capacity constraints limit the contribution nuclear can make. To get 10 terawatts of nuclear by 2050 we would need roughly 10,000 new plants – that’s one built every two days for the next 40 years, which in view of the recent record of nuclear build seems implausible. The reactors would in any case need to be breeders to avoid the consequent uranium shortage. The current emphasis on the hydrogen economy is a red herring, as hydrogen is not a primary fuel.
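The build-rate arithmetic behind that claim is worth spelling out (my own check, assuming the conventional round figure of about 1 GW of output per large plant, which was not explicit in the talk):

```python
target_tw = 10.0    # nuclear capacity wanted by 2050, in terawatts
plant_gw = 1.0      # assumed output of one large plant, in gigawatts
years = 40          # time available, roughly 2010 to 2050

plants_needed = target_tw * 1000.0 / plant_gw   # 10 TW / 1 GW per plant
days_per_plant = years * 365.0 / plants_needed  # interval between completions

# About a day and a half per plant -- "one every two days" in round numbers.
print(plants_needed, days_per_plant)
```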

The only remaining solution is solar power. 165,000 TW hits the earth in sunlight. The problem is that the sunlight doesn’t arrive in the right places. Smalley’s solution was a new energy grid system, in which energy is transmitted through wires rather than in tankers. To realise this you need better electrical conductors (either carbon nanotubes or superconductors), and electrical energy storage devices. Of course, Rice University is keen on the nanotube solution. The need is to synthesise large amounts of carbon nanotubes which are all of the same structure, the structure that has metallic properties rather than semiconducting ones. Rice had been awarded $16 million from NASA to develop the scale-up of their process for growing metallic nanotubes by seeded growth, but this grant was cancelled amidst the recent redirection of NASA’s priorities.

Ultimately, Adams was optimistic. In his view, technology will find a solution and it’s more important now to do the politics, get the infrastructure right, and above all to enthuse young people with a sense of mission to become scientists and save the world. His slides can be downloaded here (8.4 MB PDF file).

The second, much more pessimistic, talk was from Jürgen Altmann, a disarmament specialist from Ruhr-Universität Bochum. His title was “Nanotechnology and (International) Society: how to handle the new powerful technologies?” Altmann is a physicist by original training, and is the author of a book, Military nanotechnology: new technology and arms control.

Altmann outlined the ultimate goal of nanotechnology as the full control of the 3-d position of each atom – the role model is the living cell, but the goal goes much beyond this, going beyond systems optimised for aqueous environments to those that work in vacuum, high pressure, space etc., limited only by the laws of nature. Altmann alluded to the controversy surrounding Drexler’s vision of nanotechnology, but insisted that no peer-reviewed publication had succeeded in refuting it.

He mentioned the extrapolations of Moore’s law due to Kurzweil, with the prediction that we will have a computer with a human being’s processing power by 2035. He discussed new nanomaterials, such as ultra-strong carbon nanotubes that make the space elevator conceivable, before turning to the Drexler vision of mechanosynthesis leading to a universal molecular assembler, with consequences like space colonies and brain downloading. He highlighted the contrasting utopian and dystopian visions of the outcome: on the one hand, infinitely long life, wealth without work and a clean environment; on the other hand, the consumption of all organic life by proliferating nanorobots (grey goo).

He connected these visions to transhumanism – the idea that we could and should accelerate human evolution by design, and the perhaps better accepted notion of converging technologies – NanoBioInfoCogno – which has taken up somewhat different connotations either side of the Atlantic (Altmann was on the working group which produced the EU document on converging technologies). He foresaw the benefits arising on a 20 year timescale, notably direct broad-band interfaces between brain and machines.

What, then, of the risks? There is the much discussed issue of nanoparticle toxicity. How might nanotechnology affect developing countries – will the advertised benefits really arise? We have seen a mapping of nanotechnology benefits onto the Millennium Development Goals carried out by the Meridian Institute. But this has been criticised, for example by N. Invernizzi (Nanotechnology Law and Business Journal 2 101-11- (2005)): high productivity will mean less demand for labour, there might be a tendency to neglect non-technological solutions, and there might be a lack of qualified personnel. He asked what will happen if India and China succeed with nanotechnology: will that simply increase internal rich-poor divisions within those countries? The overall conclusion is that socio-economic factors are just as important as technology.

With respect to military nanotechnology, there are many potential applications, including smaller and faster electronics and sensors; lighter and faster armour and armoured vehicles; and miniature satellites, including offensive ones. Many robots will be developed, among them nano-robots and biotechnical hybrids – electrode-controlled rats and insects. Medical nanobiotechnology will have military applications – capsules for controlled release of biological and chemical agents, mechanisms for targeting agents to specific organs, but also perhaps to specific gene patterns or proteins, allowing chemical or biological warfare to be targeted against specific populations.

Military R&D for nano is mostly done in the USA, where it accounts for between a quarter and a third of federal nanotechnology funding. At the moment, the USA spends 4-10 times as much as the rest of the world, but perhaps we can shortly expect other countries with the necessary capacity, like China and Russia, to begin to catch up.

The problem of military nanotechnology from an arms control point of view is that limitation and verification is very difficult – much more difficult than the control of nuclear technology. Nano is cheap and widespread, much more like biotechnology, with many non-military uses. Small countries and non-state actors can use high technology. To control this will need very intrusive inspection and monitoring – anytime, anyplace. Is this compatible with military interest in secrecy and the fear of industrial espionage?

So, Altmann asked, is the current international system up to this threat? Probably not, he concluded, so we have two alternatives: increasing military and terrorist threats and marked instability, or the organisation of global security in another way, involving some kind of democratic superstate in which existing states voluntarily accept reduced sovereignty in return for greater security.

Where should I go to study nanotechnology?

The following is a message from my sponsor… or at least, the institution that pays my salary…

What advice should one give to young people who wish to make a career in nanotechnology? It’s a very technical subject, so you won’t generally get very far without a good degree level grounding in the basic, underlying science and technology. There are some places where one can study for a first degree in nanotechnology, but in my opinion it’s better to obtain a good first degree in one of the basic disciplines – whether a pure science, like physics or chemistry, or an engineering specialism, like electronic engineering or materials science. Then one can broaden one’s education at the postgraduate level, to get the essential interdisciplinary skills that are vital to make progress in nanotechnology. Finally, of course, one usually needs the hands-on experience of research that most people obtain through the apprenticeship of a PhD.

In the UK, the first comprehensive, Masters-level course in Nanoscale Science and Technology was developed jointly by the Universities of Leeds and Sheffield (I was one of the founders of the course). As the subject has developed and the course has flourished, it has been expanded to offer a range of different options – the Nanotechnology Education Portfolio – nanofolio. Currently, we offer MSc courses in Nanoscale Science and Technology (the original, covering the whole gamut of nanotechnology from the soft to the hard), Nanoelectronics and nanomechanics, Nanomaterials for nanoengineering and Bionanotechnology.

The course website also has a general section of resources that we hope will be useful to anybody interested in nanotechnology, beginning with the all-important question “What is nanotechnology?” Many more resources, including images and videos, will be added to the site over the coming months.

Keeping on keeping on

There are some interesting reflections on the recent Ideas Factory Software control of matter from the German journalist Niels Boeing, in a piece called Nano-Elvis vs Nano-Beatles. He draws attention to the irony that a research program with such a Drexlerian feel had as its midwife someone like me, who has been such a vocal critic of Drexlerian ideas. The title comes from an analogy which I find very flattering, if not entirely convincing – roughly translated from the German, he says: “It’s intriguingly reminiscent of the history of pop music, which developed by a transatlantic exchange. The American Elvis began things, but it was the British Beatles who really got the epochal phenomenon rolling. The solo artist Drexler launched his vision on the world, but in practice the crucial developments could be made by a British big band of researchers. We have just one wish for the Brits – keep on rocking!” Would that it were so.

In other media, there’s an article by me in the launch issue of the new nanotechnology magazine from the UK’s Institute of Nanotechnology – NanoNow! (PDF, freely downloadable). My article has the strap-line “Only Skin Deep – Cosmetics companies are using nano-products to tart up their face creams and sun lotions. But are they safe? Richard A.L. Jones unmasks the truth.” I certainly wouldn’t claim to unmask the truth about controversial issues like the use of C60 in face-creams, but I hope I managed to shed a little light on a very murky and much discussed subject.

My column in Nature Nanotechnology this month is called “Can nanotechnology ever prove that it is green?” This is only available to subscribers. As Samuel Johnson wrote, “No man but a blockhead ever wrote, except for money.” I don’t think he would have approved of blogs.

New projects for the Software Control of Matter

The Ideas Factory on Software Control of Matter that has been dominating my life for the last couple of weeks has produced its outcome, and brief descriptions of the three projects that are likely to go forward for funding have been published on the Ideas Factory blog. There are two experimental projects: Software-controlled assembly of oligomers aims to build a machine to synthesise a controlled sequence of molecular building blocks from a sequence coded by a stretch of DNA, while Directed Reconfigurable Nanomachines aims to use the positional assembly of molecules and nanoscale building blocks to make prototype functional nanoscale devices. The third project, The Matter Compiler, brings together computer science, computational chemistry and materials science to prototype the implementation of the engineering control and computer science aspects of directed molecular assembly. Between them, these projects will initially be funded to the tune of the £1.5 million set aside for the Ideas Factory. But there’s no doubt in my mind that the ideas generated during the week are worthy of a lot more support than this in the future.