Three good reasons to do nanotechnology: 1. For a sustainable energy economy

When I was in Norway a few months ago, I was talking to an official from their research council about the Norwegian national nanotechnology strategy. He explained how they were going to focus on a few application areas for nanotechnology, starting with nanotechnology for energy, nanotechnology for medicine, and nanotechnology for information technology. Thus far his list was very similar to lists being compiled by just about everybody else in the world. Then he went on to explain that the fourth area would be nanotechnology for fish, and I had to admit to myself that this last focus probably would be nationally distinctive. Fish apart, there does seem to be a widespread consensus that the other three areas are the ones in which nanotechnology is likely to make the biggest global impact, at least in the short to medium term. It’s worth summarising some of the arguments for this order of priority.

1. Nanotechnology for a sustainable energy economy. This comes first because our current way of life is utterly dependent on cheap and abundant energy, and there are no easy ways of significantly lessening this dependence. Yet the cheap energy that we’ve come to rely on is threatened in multiple ways. The need to reduce CO2 emissions to combat climate change is growing in urgency, the geopolitical implications of such a vital commodity being in the control of people and nations whose interests may not be the same as ours are becoming more and more obvious, and the prospect of the exhaustion of the most convenient forms of fossil fuel – gas and oil – is appearing on the horizon. It’s no surprise, then, that both private sector investment and government funded research in nanotechnology are increasingly being directed towards applications in energy.

So, how could nanotechnology make an impact on our evolving energy economy? Let’s look at this in three categories:

1. Primary energy sources. At the moment, the ultimate sources of most of our energy are oil and gas, either used directly or converted into electricity, and electricity made by burning coal or by harnessing nuclear fission. Renewables – primarily hydroelectric at the moment, though wind power is growing – make a small contribution. Nanotechnology’s most significant potential contribution is in the area of solar energy, where alternative photovoltaics capable of being produced cheaply in the very large areas needed to supply significant amounts of power are on the horizon.

2. Energy for transportation. Our societies are dependent on large scale mobility, both personal and for the movement of goods across the world. Liquid hydrocarbons – in the form of petrol, diesel and aviation kerosene – are convenient, high energy density fuels, and a massive infrastructure exists to distribute them. The “hydrogen economy” offers an alternative, in which the transport fuel would be hydrogen, made using primary energy sources like solar energy, nuclear energy, or a combination of fossil fuel use with CO2 sequestration. Nanotechnology could help overcome some of the formidable technical barriers to this scheme, by making possible safe, high density hydrogen storage and by improving the performance and price of fuel cells. On the other hand, as recognition of the economic and technical barriers to a hydrogen economy grows, the alternative of a “methanol economy” grows more attractive in some people’s eyes. Using methanol as a transportation fuel has the great advantage that one can use the existing infrastructure for distributing liquid fuels, and continue to use internal combustion engines. An ideal would be to make methanol directly using solar energy to combine water and carbon dioxide – the photocatalytic reduction of carbon dioxide. This is something we know ought to be possible in principle, but we don’t know how to do it yet.

3. Lowering the energy intensity of the economy. There are a host of possible incremental improvements in materials and processes to reduce the amount of primary energy needed to produce a given amount of economic output. Individually these may not look spectacular, but together the effect may be very significant. They range from more efficient light sources, such as light emitting diodes, and better materials for building insulation, to improved materials and coatings that allow turbine blades to be run hotter, leading to higher energy conversion efficiencies in power stations.

Next – nanotechnology for medicine and health

Nanotechnology and visions of the future (part 2)

This is the second part of an article I was asked to write to explain nanotechnology and the debates surrounding it to a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. The first installment can be read here.

Ideologies

There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem to be very technical in nature. So a question about whether a Drexler style assembler is technically feasible can rapidly descend into details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the argument, which are actually based on clashes of ideology. What are the ideological divisions that underlie debates about nanotechnology?

Transhumanism
Underlying the most radical visions of nanotechnology is an equally radical ideology – transhumanism. The basis of this movement is a teleological view of human progress which views technology as the vehicle, not just for the improvement of the lot of humanity, but for the transcendence of those limitations that non-transhumanists would consider to be an inevitable part of the human condition. The most pressing of these limitations is, of course, death, so transhumanists look forward to nanotechnology providing a permanent solution to this problem. In the first instance, this will be effected by nanomedicine, which they anticipate will make it possible to repair any damage cell by cell. Beyond this, some transhumanists believe that computers of such power will become available that they will constitute true artificial intelligence. At this point, they imagine a merging of human and machine intelligence, in a way that would effectively constitute the evolution of a new and improved version of humankind.

The notion that the pace of technological change is continually accelerating is an article of faith amongst transhumanists. From this follows the idea that accelerating change will culminate in a point beyond which the future is literally inconceivable. This point they refer to as “the singularity”, and discussions of this hypothetical event take on a highly eschatological tone. This is captured in science fiction writer Cory Doctorow’s dismissive but apt phrase for the singularity: “the rapture of the nerds”.

This worldview carries with it the implication that an accelerating pace of innovation is not just a historical fact, but also a moral imperative. This is because it is through technology that humanity will achieve its destiny, which is nothing less than to transcend its own current physical and mental limitations. The achievement of radical nanotechnology is central to this project, and for this reason transhumanists tend to share a strong conviction not only that radical nanotechnology along Drexlerian lines is possible, but also that its development is morally necessary.

Transhumanism can be considered to be the extreme limit of views that combine strong technological determinism with a highly progressive view of the development of humanity. It is a worldwide movement, but it’s probably fair to say that its natural home is California, its main constituency is amongst those involved in information technology, and it is associated predominantly, if not exclusively, with a strongly libertarian streak of politics, though paradoxically not dissimilar views seem to be attractive to a certain class of former Marxists.

Given that transhumanism as an ideology does not seem to have a great deal of mass appeal, it’s tempting to underplay its importance. This may be a mistake; amongst its adherents are a number of figures with very high media profiles, particularly in the United States, and transhumanist ideas have entered mass culture through science fiction, films and video games. Certainly some conservative and religious figures have felt threatened enough to express some alarm, notably Francis Fukuyama, who has described transhumanism as “the world’s most dangerous idea”.

Global capitalism and the changing innovation landscape
If it is the radical futurism of the transhumanists that has put nanotechnology into popular culture, it is the prospect of money that has excited business and government. Nanotechnology is seen by many worldwide as the major driver of economic growth over the next twenty years, filling the role that information technology has filled over the last twenty years. Breathless projections of huge new markets are commonplace, with the prediction by the US National Nanotechnology Initiative of a trillion dollar market for nanotechnology products by 2015 being the most notorious of these. It is this kind of market projection that underlies a worldwide spending boom on nanotechnology research, which encompasses not only the established science and technology powerhouses like the USA, Germany and Japan, but also fast developing countries like China and India.

The emergence of nanotechnology has corresponded with some other interesting changes in the commercial landscape in technologically intensive sectors of the economy. The types of incremental nanotechnology that have been successfully commercialised so far have involved nanoparticles, such as the ones used in sunscreens, or coatings, of the kind used in stain-resistant fabrics. This sort of innovation is the province of the speciality chemicals sector, and one cynical view of the prominence of the nanotechnology label amongst new and old companies is that it has allowed companies in this rather unfashionable sector of the market to rebrand themselves as being part of the newest new thing, with correspondingly higher stock market valuations and easier access to capital. On the other hand, this does perhaps signal a more general change in the way science-driven innovations reach the market.

Many of the large industrial conglomerates that were such prominent parts of the industrial landscape in Western countries up to the 1980s have been broken up or drastically shrunk. Arguably, the monopoly rents that sustained these combines were what made possible the very large and productive corporate laboratories that were the source of much innovation at that time. This has been replaced by a much more fluid scene in which many functions of companies, including research and innovation, have been outsourced. In this landscape, one finds nanotechnology companies like Oxonica, which are essentially holding companies for intellectual property, with functions that in the past would have been regarded as of core importance, such as manufacturing and marketing, outsourced to contractors, often located in different countries.

Even the remaining large companies have embraced the concept of “open innovation”, in which research and development is regarded as a commodity to be purchased on the open market (and, indeed, outsourced to low cost countries) rather than a core function of the corporation. It is in this light that one should understand the new prominence of intellectual property as something fungible and readily monetised. Universities and other public research institutes, strongly encouraged to seek new sources of funding other than direct government support, have made increasing efforts to spin out new companies based on intellectual property developed by academic researchers.

In the light of all this, it’s easy to see nanotechnology as one aspect of a more general shift to what the social scientist Michael Gibbons has called Mode 2 knowledge production[4]. In this view, traditional academic values are being eclipsed by a move to more explicitly goal-oriented and highly interdisciplinary research, in which research priorities are set not by the values of the traditional disciplines, but by perceived market needs and opportunities. This transition has clearly been underway for some time in the life sciences, and the emergence of nanotechnology can be seen as the spread of these values to the physical sciences.

Environmentalist opposition
In the UK at least, the opposition to nanotechnology has been spearheaded by two unlikely bedfellows. The issue was first propelled into the news by the intervention of Prince Charles, who raised the subject in newspaper articles in 2003 and 2004. These articles directly echoed concerns raised by the small campaigning group ETC[5]. ETC cast nanotechnology as a direct successor to genetic modification; to summarise this framing, whereas in GM scientists had intervened directly in the code of life, in nanotechnology they would meddle with the very atomic structure of matter itself. ETC’s background included a strong record of campaigning on behalf of third world farmers against agricultural biotechnology, so in their view nanotechnology, with its spectre of the possible patenting of new arrangements of atoms and the potential replacement of commodities such as copper and cotton by nanoengineered substitutes controlled by multinationals, was to be opposed as an intrinsic part of the agenda of globalisation. Complementing this rather abstract critique was a much more concrete concern that nanoscale materials might be more toxic than their conventional counterparts, and that current regulatory regimes for the control of environmental exposure to chemicals might not adequately recognise these new dangers.

The latter concern has gained a considerable degree of traction, largely because there has been a very widespread consensus that the issue has some substance. At the time of the Prince’s intervention in the debate (and quite possibly because of it) the UK government commissioned a high-level independent report on the issue from the Royal Society and the Royal Academy of Engineering. This report recommended a programme of research and regulatory action on the subject of possible nanoparticle toxicity[6]. Public debate about the risks of nanotechnology has largely focused on this issue, fuelled by a government response to the Royal Society report that has been widely considered quite inadequate. However, one might regret that the debate has become so focused on this rather technical issue of risk, to the exclusion of wider issues about the potential impacts of nanotechnology on society.

To return to the more fundamental worldviews underlying this critique of nanotechnology, whether they be the rather romantic, ruralist conservatism of the Prince of Wales, or the anti-globalism of ETC, the common feature is a general scepticism about the benefits of scientific and technological “progress”. An extremely eloquent exposition of one version of this point of view is to be found in a book by US journalist Bill McKibben[7]. The title of McKibben’s book – “Enough” – is a succinct summary of its argument: surely we now have enough technology for our needs, and new technology is likely only to lead to further spiritual malaise through excessive consumerism or, in the case of new and very powerful technologies like genetic modification and nanotechnology, to new and terrifying existential dangers.

Bright greens
Despite the worries about the toxicology of nanoscale particles, and the involvement of groups like ETC, it is notable that all-out opposition to nanotechnology has not yet fully crystallised. In particular, groups such as Greenpeace have not yet articulated a position of unequivocal opposition. This reflects the fact that nanotechnology really does seem to have the potential to provide answers to some pressing environmental problems. For example, there are real hopes that it will lead to new types of solar cells that can be produced cheaply in very large areas. Applications of nanotechnology to problems of water purification and desalination have obvious potential impacts in the developing world. Of course, these kinds of problems have major political and social dimensions, and technical fixes by themselves will not be sufficient. However, the prospects that nanotechnology may be able to make a significant contribution to sustainable development have proved convincing enough to keep mainstream environmental movements at least neutral on the issue.

While some mainstream environmentalists may still remain equivocal in their view of nanotechnology, another group seems to be embracing new technologies with some enthusiasm as providing new ways of maintaining high standards of living in a fully sustainable way. Such “bright greens” dismiss the rejection of industrialised economies and the yearning to return to a rural lifestyle implicit in the “deep green” worldview, and look to the use of new technology, together with imaginative design and planning, to create sustainable urban societies[8]. In this point of view, nanotechnology may help, not just by enabling large scale solar power, but by facilitating an intrinsically less wasteful industrial ecology.

Conclusion

If there is (or indeed, ever was) a time in which there was an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”. This immediately raises the question “whose goals?” It is this question that underlies recent calls for a greater degree of democratic involvement in setting scientific priorities[9]. It is important that these debates don’t simply concentrate on technical issues. Nanotechnology provides a fascinating and evolving example of the complexity of the interaction between science, technology and wider currents in society. Nanotechnology, with other new and emerging technologies, will have a huge impact on the way society develops over the next twenty to fifty years. Recognising the importance of this impact does not by any means imply that one must take a technologically deterministic view of the future, though. Technology co-evolves with society, and the direction it takes is not necessarily pre-determined. Underlying the directions in which it is steered are a set of competing visions about the directions society should take. These ideologies, which are often left implicit and unexamined, need to be made explicit if a meaningful discussion of the implications of the technology is to take place.

[4] Gibbons, M, et al. (1994) The New Production of Knowledge. London: Sage.
[5] David Berube (in his book Nano-hype, Prometheus, NY 2006) explicitly links the two interventions, and identifies Zac Goldsmith, millionaire organic farmer and editor of “The Ecologist” magazine, as the man who introduced Prince Charles to nanotechnology and the ETC critique. This could be significant, in view of Goldsmith’s current prominence in Conservative Party politics.
[6] Nanoscience and nanotechnologies: opportunities and uncertainties, Royal Society and Royal Academy of Engineering, available from http://www.nanotec.org.uk/finalReport.htm
[7] Enough: staying human in an engineered age, Bill McKibben, Henry Holt, NY (2003)
[8] For a recent manifesto, see Worldchanging: a user’s guide for the 21st century, Alex Steffen (ed.), Harry N. Abrams, NY (2006)
[9] See for example See-through Science: why public engagement needs to move upstream, Rebecca Willis and James Wilsdon, Demos (2004)

Nanotechnology and visions of the future (part 1)

Earlier this year I was asked to write an article explaining nanotechnology and the debates surrounding it for a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. Here is the unedited version, in installments. Regular readers of the blog will be familiar with most of the arguments already, but I hope they will find it interesting to see it all in one place.

Introduction

Few new technologies have been accompanied by such expansive promises of their potential to change the world as nanotechnology. For some, it will lead to a utopia, in which material want has been abolished and disease is a thing of the past, while others see apocalypse and even the extinction of the human race. Governments and multinationals round the world see nanotechnology as an engine of economic growth, while campaigning groups foresee environmental degradation and a widening of the gap between the rich and poor. But at the heart of these arguments lies a striking lack of consensus about what the technology is or will be, what it will make possible and what its dangers might be. Technologies don’t exist or develop in a vacuum, and nanotechnology is no exception; arguments about the likely, or indeed desirable, trajectory of the technology are as much about their protagonists’ broader aspirations for society as about nanotechnology itself.

Possibilities

Nanotechnology is not a single technology in the way that nuclear technology, agricultural biotechnology, or semiconductor technology are. There is, as yet, no distinctive class of artefacts that can be unambiguously labelled as the product of nanotechnology. It is still, by and large, an activity carried out in laboratories rather than factories, yet the distinctive output of nanotechnology is the production and characterisation of some kind of device, rather than the kind of furthering of fundamental understanding that we would expect from a classical discipline such as physics or chemistry.

What unites the rather disparate group of applied sciences that are referred to as nanotechnologies is simply the length-scale on which they operate. Nanotechnology concerns the creation and manipulation of objects whose size lies somewhere between a nanometer and a few hundred nanometers. To put these numbers in context, it’s worth remembering that as unaided humans, we operate over a range of length-scales that spans a factor of a thousand or so, which we could call the macroscale. Thus the largest objects we can manipulate unaided are about a meter or so in size, while the smallest objects we can manipulate comfortably are about one millimetre. With the aid of light microscopes and tools for micromanipulation, we can also operate on another set of smaller length-scales, which also spans a factor of a thousand. The upper end of the microscale is thus defined by a millimetre, while the lower end is defined by objects about a micron in size. This is roughly the size of a red blood cell or a typical bacterium, and is about the smallest object that can be easily discerned in a light microscope.

The nanoscale is smaller yet. A micron is one thousand nanometers, and one nanometer is about the size of a medium-sized molecule. So we can think of the lower limit of the nanoscale as being defined by the size of individual atoms and molecules, while the upper limit is defined by the resolution limits of light microscopes (this limit is somewhat more vague, and one sometimes sees apparently more exact definitions, such as 100 nm, but these in my view are entirely arbitrary).

A number of special features make operating at the nanoscale distinctive. Firstly, there is the question of the tools one needs to see nanoscale structures and to characterise them. Conventional light microscopes cannot resolve structures this small. Electron microscopes can achieve atomic resolution, but they are expensive, difficult to use and prone to artefacts. A new class of techniques – scanning probe microscopies such as scanning tunnelling microscopy and atomic force microscopy – has recently become available which can probe the nanoscale, and the uptake of these relatively cheap and accessible methods has been a big factor in creating the field of nanotechnology.

More fundamentally, the properties of materials often change in interesting and unexpected ways when their dimensions are shrunk to the nanoscale. As a particle becomes smaller, it becomes proportionally more influenced by its surface, which often leads to increases in chemical reactivity. These changes may be highly desirable, yielding, for example, better catalysts for effecting chemical transformations more efficiently, or undesirable, in that they can lead to increased toxicity. Quantum mechanical effects can become important, particularly in the way electrons and light interact, and this can lead to striking and useful effects such as size dependent colour changes. (It’s worth stressing here that while quantum mechanics is counter-intuitive and somewhat mysterious to the uninitiated, it is very well understood and produces definite and quantitative predictions. One sometimes reads that “the laws of physics don’t apply at the nanoscale”. This of course is quite wrong; the laws apply just as they do on any other scale, but sometimes they have different consequences). The continuous restless activity of Brownian motion, the manifestation of heat energy at the nanoscale, becomes dominant. These differences in the way physics works at the nanoscale offer opportunities to achieve new effects, but also mean that our intuitions may not always be reliable.
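To make the surface argument concrete, here is a back-of-envelope sketch of my own (not from any cited source), estimating what fraction of a spherical particle’s atoms sit in its outermost atomic layer; the 0.3 nm atomic diameter is an assumed round number:

```python
# Back-of-envelope estimate of the fraction of atoms in the outermost
# atomic layer of a spherical particle, as a function of particle size.
# Assumes an atomic diameter of ~0.3 nm; purely illustrative numbers.

ATOM_D = 0.3  # nm, rough diameter of an atom

def surface_fraction(particle_d_nm: float) -> float:
    """Fraction of atoms lying within one atomic layer of the surface."""
    core = max(particle_d_nm - 2 * ATOM_D, 0.0)
    return 1.0 - (core / particle_d_nm) ** 3

for d in (3, 10, 30, 100, 1000):
    print(f"{d:5d} nm particle: ~{surface_fraction(d):.1%} of atoms at the surface")
```

On these rough numbers a 3 nm particle has about half its atoms at the surface, while by a micron the fraction is a small fraction of a percent, which is why surface-dominated chemistry is so distinctively a nanoscale phenomenon.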

One further feature of the nanoscale is that it is the length scale on which the basic machinery of biology operates. Modern molecular biology and biophysics have revealed a great deal about the sub-cellular apparatus of life, uncovering the structure and mode of operation of the astonishingly sophisticated molecular-scale machines that are the basis of all organisms. This is significant in a number of ways. Cell biology provides an existence proof that it is possible to make sophisticated machines on the nanoscale, and it provides a model for making such machines. It even provides a toolkit of components that can be isolated from living cells and reassembled in synthetic contexts – this is the enterprise of bionanotechnology. The correspondence of length scales also brings hope that nanotechnology will make it possible to make very specific and targeted interventions into biological systems, leading, it is hoped, to new and powerful methods for medical diagnostics and therapeutics.

Nanotechnology, then, is an eclectic mix of disciplines, including elements of chemistry, physics, materials science, electrical engineering, biology and biotechnology. The way this new discipline has emerged from many existing disciplines is itself very interesting, as it illustrates an evolution of the way science is organised and practised that has occurred largely in response to external events.

The founding myth of nanotechnology places its origin in a lecture given by the American physicist Richard Feynman in 1959, published in 1960 under the title “There’s plenty of room at the bottom”. This didn’t explicitly use the word nanotechnology, but it expressed in visionary and exciting terms the many technical possibilities that would open up if one was able to manipulate matter and make engineering devices on the nanoscale. This lecture is widely invoked by enthusiasts for nanotechnology of all types as laying down the fundamental challenges of the subject, its importance endorsed by the iconic status of Feynman as perhaps the greatest native-born American physicist. However, it seems that the identification of this lecture as a foundational document is retrospective, as there is not much evidence that it made a great deal of impact at the time. Feynman himself did not devote very much further work to these ideas, and the paper was rarely cited until the 1990s.

The word nanotechnology itself was coined by the Japanese scientist Norio Taniguchi in 1974 in the context of ultra-high precision machining. However, the writer who unquestionably propelled the word and the idea into the mainstream was K. Eric Drexler. Drexler wrote a popular and bestselling book “Engines of Creation”, published in 1986, which launched a futuristic and radical vision of a nanotechnology that transformed all aspects of society. In Drexler’s vision, which explicitly invoked Feynman’s lecture, tiny assemblers would be able to take apart and put together any type of matter atom by atom. It would be possible to make any kind of product or artefact from its component atoms at virtually no cost, leading to the end of scarcity, and possibly the end of the money economy. Medicine would be revolutionised; tiny robots would be able to repair the damage caused by illness or injury at the level of individual molecules and individual cells. This could lead to the effective abolition of ageing and death, while a seamless integration of physical and cognitive prostheses would lead to new kinds of enhanced humans. On the downside, free-living, self-replicating assemblers could escape into the wild, outcompete natural life-forms by virtue of their superior materials and design, and transform the earth’s ecosphere into “grey goo”. Thus, in the vision of Drexler, nanotechnology was introduced as a technology of such potential power that it could lead either to the transfiguration of humanity or to its extinction.

There are some interesting and significant themes underlying this radical, “Drexlerite” conception of nanotechnology. One of them is the idea of matter as software. Implicit in Drexler’s worldview is the idea that the nature of all matter can be reduced to a set of coordinates of its constituent atoms. Just as music can be coded in digital form on a CD or MP3 file, and moving images can be reduced to a string of bits, it’s possible to imagine any object, whether an everyday tool, a priceless artwork, or even a natural product, being coded as a string of atomic coordinates. Nanotechnology, in this view, provides an interface between the software world and the physical world; an “assembler” or “nanofactory” generates an object just as a digital printer reproduces an image from its digital, software representation. It is this analogy that seems to make the Drexlerian notion of nanotechnology so attractive to the information technology community.

Predictions of what these “nanofactories” might look like have a very mechanistic feel to them. “Engines of Creation” had little in the way of technical detail supporting it, and included some imagery that felt quite organic and biological. However, following the popular success of “Engines”, Drexler developed his ideas at a more detailed level, publishing another, much more technical book in 1992, called “Nanosystems”. This develops a conception of nanotechnology as mechanical engineering shrunk to atomic dimensions, and it is in this form that the idea of nanotechnology has entered the popular consciousness through science fiction, films and video games. Perhaps the best of all these cultural representations is the science fiction novel “The Diamond Age” by Neal Stephenson, whose conscious evocation of a future shaped by a return to Victorian values rather appropriately mirrors the highly mechanical feel of Drexler’s conception of nanotechnology.

The next major development in nanotechnology was arguably political rather than visionary or scientific. In 2000, President Clinton announced a National Nanotechnology Initiative, with funding of $497 million a year. This initiative survived, and even thrived on, the change of administration in the USA, receiving further support, and funding increases from President Bush. Following this very public initiative from the USA, other governments around the world, and the EU, have similarly announced major funding programs. Perhaps the most interesting aspect of this international enthusiasm for nanotechnology at government level is the degree to which it is shared by countries outside those parts of North America, Europe and the Pacific Rim that are traditionally associated with a high intensity of research and development. India, China, Brazil, Iran and South Africa have all designated nanotechnology as a priority area, and in the case of China at least there is some evidence that their performance and output in nanotechnology is beginning to approach or surpass that of some Western countries, including the UK.

Some of the rhetoric associated with the US National Nanotechnology Initiative in its early days was reminiscent of the vision of Drexler – notably, an early document was entitled “Nanotechnology: shaping the world atom by atom”. Perhaps it was useful that such a radical vision for the world changing potential of nanotechnology was present in the background; even if it was not often explicitly invoked, neither did scientists go out of their way to refute it.

This changed in September 2001, when a special issue of the American popular science magazine “Scientific American” contained a number of contributions that were stingingly critical of the Drexler vision of nanotechnology. The most significant of these were by the Harvard nano-chemist George Whitesides, and the Rice University chemist Richard Smalley. Both argued that the Drexler vision of nanoscale machines was simply impossible on technical grounds. Smalley’s contribution was perhaps the most resonant; Smalley had won a Nobel prize for his discovery of a new form of nanoscale carbon, buckminsterfullerene[1], and so his contribution carried significant weight.

The dispute between Smalley and Drexler ran for a while longer, with a published exchange of letters, but its tone became increasingly vituperative. Nonetheless, the result has been that Drexler’s ideas have been largely discredited in both scientific and business circles. The attitude of many scientists is summed up by IBM’s Don Eigler, the first person to demonstrate the controlled manipulation of individual atoms: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worse. There may be scientists who feel otherwise, I just haven’t run into them.”[2]

Drexler has thus become a very polarising figure. My own view is that this is unfortunate. I believe that Drexler and his followers have greatly underestimated the technical obstacles in the way of his vision of shrunken mechanical engineering. Drexler does deserve credit, though, for pointing out that the remarkable nanoscale machinery of cell biology does provide an existence proof that a sophisticated nanotechnology is possible. However, I think he went on to draw the wrong conclusion from this. Drexler’s position is essentially that we will be able greatly to surpass the capabilities of biological nanotechnology by using rational engineering principles, rather than the vagaries of evolution, to design these machines, and by using stiff, strong materials such as diamond rather than the soft and floppy proteins and membranes of biology. I believe that this fails to recognise the fact that physics does look very different at the nanoscale, and that the design principles used in biology are optimised by evolution for this different environment[3]. From this, it follows that a radical nanotechnology might well be possible, but that it will look much more like biology than engineering.

Whether or in what form radical nanotechnology does turn out to be possible, much of what is currently marketed as nanotechnology is very much more incremental in character. Products such as nano-enabled sunscreens, anti-stain fabric coatings, or “anti-ageing” creams certainly do not have anything to do with sophisticated nanoscale machines; instead they feature materials, coatings and structures which have some dimensions controlled on the nanoscale. These are useful and even potentially lucrative products, but they certainly do not represent any discontinuity with previous technology.

Between the mundane current applications of incremental nanotechnology, and the implausible speculations of the futurists, there are areas in which it is realistic to hope for substantial impacts from nanotechnology. Perhaps the biggest impacts will be seen in the three areas of energy, healthcare and information technology. It’s clear that there will be a huge emphasis in the coming years on finding new, more sustainable ways to obtain and transmit energy. Nanotechnology could make many contributions in areas like better batteries and fuel cells, but arguably its biggest impact could be in making solar energy economically viable on a large scale. The problem with conventional solar cells is not efficiency, but cost and manufacturing scalability. Plenty of solar energy lands on the earth, but the total area of conventional solar cells produced each year is orders of magnitude too small to make a significant dent in the world’s total energy budget. New types of solar cell using nanotechnology, and drawing inspiration from the natural process of photosynthesis, are in principle compatible with large area, low cost processing techniques like printing, and it’s not unrealistic to imagine this kind of solar cell being produced in huge plastic sheets at very low cost. In medicine, while the vision of cell-by-cell surgery using nanosubmarines isn’t going to happen, the prospect of the effectiveness of drugs being increased and their side-effects greatly reduced through the use of nanoscale delivery devices is much more realistic. Much faster and more accurate diagnosis of disease is also in prospect.

One area in which nanotechnology can already be said to be present in our lives is information technology. The continuous miniaturisation of computing devices has already reached the nanoscale, and this is reflected in the growing impact of information technology on all aspects of the life of most people in the West. It’s interesting that the economic driving force for the continued development of information technologies is no longer computing in its traditional sense, but largely entertainment, through digital music players and digital imaging and video. The continual shrinking of current technologies will probably continue through the dynamic of Moore’s law for ten or fifteen years, allowing at least another hundred-fold increase in computing power. But at this point a number of limits, both physical and economic, are likely to provide serious impediments to further miniaturisation. New nanotechnologies may alter this picture in two ways. It is possible, but by no means certain, that entirely new computing concepts such as quantum computing or molecular electronics may lead to new types of computer of unprecedented power, permitting the further continuation or even acceleration of Moore’s law. On the other hand, developments in plastic electronics may make it possible to make computers that are not especially powerful, but which are very cheap or even disposable. It is this kind of development that is likely to facilitate the idea of “ubiquitous computing” or “the internet of things”, in which it is envisaged that every artefact and product incorporates a computer able to sense its surroundings and to communicate wirelessly with its neighbours. One can see this as a natural, even inevitable, development of technologies like the radio frequency identification devices (RFID) already used as “smart barcodes” by shops like Walmart, but it is clear also that some of the scenarios envisaged could lead to serious concerns about loss of privacy and, potentially, civil liberties.
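As a rough sanity check on that hundred-fold figure, here is a minimal sketch (my own arithmetic, assuming for illustration the usual Moore’s law doubling time of 18 to 24 months):

```python
# Rough check on "at least another hundred-fold increase in computing
# power" over ten to fifteen years, assuming an illustrative doubling
# time of 1.5 to 2 years.

def growth_factor(years: float, doubling_time: float) -> float:
    """Multiplicative growth after `years`, doubling every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

for years in (10, 15):
    for doubling in (1.5, 2.0):
        print(f"{years} years at one doubling per {doubling} years: "
              f"~{growth_factor(years, doubling):,.0f}x")
```

Even the most pessimistic corner of that grid gives a thirty-fold increase, and the others give one hundred-fold or more, so the claim sits comfortably within the usual range of doubling times.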

[1] Nobel Prize for chemistry, 1996, shared with his Rice colleague Robert Curl and the British chemist Sir Harold Kroto, from Sussex University.
[2] Quoted by Chris Toumey in “Reading Feynman Into Nanotech: Does Nanotechnology Descend From Richard Feynman’s 1959 Talk?” (to be published).
[3] This is essentially the argument of my own book “Soft Machines: Nanotechnology and life”, R.A.L. Jones, OUP (2004).

To be continued…

New routes to solar energy: the UK announces more research cash

The agency primarily responsible for distributing government research money for nanotechnology in the UK, the Engineering and Physical Sciences Research Council, announced a pair of linked programmes today which substantially increase the funding available for research into new, nano-enabled routes for harnessing solar energy. The first of the Nanotechnology Grand Challenges, which form part of the EPSRC’s new nanotechnology strategy, is looking for large-scale, integrated projects exploiting nanotechnology to enable cheap, efficient and scalable ways to harvest solar energy, with an emphasis on new solar cell technology. The other call, Chemical and Biochemical Solar Energy Conversion, is focussed on biological fuel production, photochemical fuel production and the underpinning fundamental science that enables these processes. Between the two calls, around £8 million (~ US $16 million) is on offer in the first stage, with more promised for continuations of the most successful projects.

I wrote a month ago about the various ways in which nanotechnology might make solar energy, which has the potential to supply all the energy needs of the modern industrial world, more economically and practically viable. The oldest of these technologies – the dye sensitised nano-titania cell invented by EPFL’s Michael Grätzel – is now moving towards full production, with the company G24 Innovations having opened a factory in Wales, in partnership with Konarka. Other technologies such as polymer and hybrid solar cells need more work to become commercial.

Using solar energy to create, not electricity, but fuel, for example for transportation, is a related area of great promise. Some work is already going on to develop analogues of photosynthetic systems for using light to split water into hydrogen. A truly grand challenge here would be to devise a system for photochemically reducing carbon dioxide. Think of a system in which one took carbon dioxide (perhaps from the atmosphere) and combined it with water, with the aid of a couple of photons of light, to make, say, methanol, which could be used directly in your internal combustion engine powered car. It’s possible in principle; one just has to find the right catalyst….
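For the record, the overall reaction such a photochemical process would have to drive is the following (a textbook balance; the catalyst and the number of photons needed are left unspecified):

$$
\mathrm{CO_2} + 2\,\mathrm{H_2O} \;\xrightarrow{\;h\nu,\ \text{catalyst}\;}\; \mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2}
$$

Each carbon atom must gain six electrons on the way from carbon dioxide to methanol, and it is brokering this multi-electron transfer efficiently that makes finding the right catalyst so hard.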

The limits of public engagement

Over on Nanodot, Christine Peterson picks up on some comments I made about public engagement in the Foreword to the final report of the Nanotechnology Engagement Group – Democratic technologies?. Having enumerated some of the problems and difficulties of seeking public engagement about nanotechnology, I finished with the positive words “I believe that the activities outlined in this report are just the start of a very positive movement that seeks to answer a compelling question: how can we ensure that the scientific enterprise is directed in pursuit of societal goals that command broad democratic support?”

“That last question is a tough one,” Christine writes. She raises two interesting questions on the back of this. “Public research funds should go toward goals supported by the public, and our representative governmental systems are supposed to ensure that. Do they?” The record is mixed, of course, but I’m not convinced that science and conventional politics interact terribly well. The paradox of science is that its long term impacts may be very large, but in the short term there are always more urgent matters to deal with, and it is these issues, healthcare or economics, for example, that will decide elections. The elected politicians nominally in charge of public science budgets typically have many other responsibilities too, and their attention is often diverted by more immediate problems.

She goes on to ask “How about private research funds: can they pursue goals not supported by the majority? We don’t want a system where the public votes on how private science dollars are spent, do we?” In a way she then goes on to start to answer her own question – “Unless they are violating a specific law, presumably”. There are some goals of science that in most countries are outlawed, regardless of who is funding the work, most notably human reproductive cloning. But there are some interesting discussions to be had about less extreme cases. One of the major sources of private science dollars is the charitable foundations, such as the UK’s Wellcome Trust, which has £13 billion to play with, and the $33 billion of the Bill and Melinda Gates Foundation. One could certainly imagine in principle a situation in which a foundation pursued a goal with only minority support, but in practice the big foundations seem to be commendably sensitive to public concerns, more so in many ways than government agencies.

Much applied science is done by public companies, and there it is the shareholders who have an obvious interest and responsibility. It’s interesting, for example, that in the UK one of the major driving forces behind the development of a “Responsible NanoCode” for business is a major asset manager, which manages investments in public companies by institutions such as pension funds and insurance companies. There is considerably less clarity, of course, in the case of companies owned by venture capital and private equity, and these could be involved in research that may well turn out to be very controversial (one thinks, for example, of Synthetic Genomics, the company associated with Craig Venter which aims to commercialise synthetic biology). Irrespective of their ownership structure, the mechanisms of the market mean that companies can’t afford to ignore public opinion. There’s a tension, of course, between the idea that the market provides a sensitive mechanism by which the wants and needs of the public are met by private enterprise, and the view that companies have become adept at creating new consumer wants and desires, sometimes against the better interests both of the consumers themselves and of wider society. The Nanodialogues project reports a very interesting public engagement exercise with a multinational consumer products company that explores this tension.

What isn’t in doubt is that global science and technology can seem a complex, unpredictable and perhaps uncontrollable force. The science fiction writer William Gibson puts this well in a recent interview: “I think what scares people most about new technologies — it’s actually what scares me most — is that they’re never legislated into being. Congress doesn’t vote on the cellular telephony initiative and create a cellphone system across the United States and the world. It just happens and capital flows around and it changes things at the most intimate levels of our lives, but we never decided to do it. Somewhere now there’s a team of people working on something that’s going to profoundly impact your life in the next 10 years and change everything. You don’t know what it is and they don’t know how it’s going to change your life because usually these things don’t go as predicted.”

Nanomechanical computers

A report on the BBC News website yesterday – Antique engines inspire nano chip – discussed a new computer design based on the use of nanoscale mechanical elements, which it described as being inspired by the Victorian grandeur of Babbage’s difference engine. The work referred to comes from the laboratory of Robert Blick of the University of Wisconsin, and is published in the New Journal of Physics as A nanomechanical computer—exploring new avenues of computing (free access).

Talk of nanoscale mechanical computers and Babbage’s machine inevitably makes one think of Eric Drexler’s proposals for nanocomputers based on rod logic. However, the operating principles underlying Blick’s proposals are rather different. The basic element is a nanoelectromechanical single electron transistor (a NEMSET, see illustration below). This consists of a silicon nano-post, which oscillates between two electrodes, shuttling electrons between the source and the drain (see also Silicon nanopillars for mechanical single-electron transport (PDF)). The current is a strong function of the applied frequency, because when the post is in mechanical resonance it carries many more electrons across the gap, and the paper demonstrates how coupled NEMSETS can be used to implement logical operations.

Blick stresses that the speed of operation of these mechanical logic gates is not competitive with conventional electronics; the selling points are instead the ability to run at higher temperature (particularly if they were to be fabricated from diamond) and their lower power consumption.
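To see why a resonant mechanical shuttle can act as a frequency-addressed switch, here is a toy model of my own (not taken from the paper): the average current is the charge ferried per cycle times the drive frequency, with a resonance envelope supplying the on/off contrast. All parameter values below are illustrative assumptions.

```python
import math

# Toy model of a NEMSET as a frequency-addressed switch: the nano-post's
# oscillation amplitude, and hence the charge it shuttles per cycle,
# peaks sharply at its mechanical resonance. Illustrative numbers only.

E = 1.602e-19  # electron charge in coulombs

def shuttle_current(f_drive: float, f0: float, q: float, n_res: float) -> float:
    """Average current (A) at drive frequency f_drive, for resonance f0.

    Electrons carried per cycle follow a Lorentzian-like envelope,
    normalised so that n_res electrons are carried exactly on resonance.
    """
    x = f_drive / f0
    envelope = 1.0 / math.sqrt((1.0 - x**2) ** 2 + (x / q) ** 2)
    electrons_per_cycle = n_res * envelope / q  # equals n_res when x == 1
    return electrons_per_cycle * E * f_drive

f0 = 400e6  # assumed mechanical resonance, 400 MHz
i_on = shuttle_current(f0, f0, q=100, n_res=10)         # driven on resonance
i_off = shuttle_current(0.9 * f0, f0, q=100, n_res=10)  # detuned by 10%
print(f"on resonance: {i_on * 1e9:.2f} nA; detuned: {i_off * 1e9:.3f} nA")
```

In this sketch a 10% detuning cuts the current by a factor of twenty or so, which is the kind of contrast a logic level needs; the logic operations themselves come from coupling several such elements together, as the paper describes.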

Readers may be interested in Blick’s web-site nanomachines.com, which demonstrates a number of other interesting potential applications for nanostructures fabricated by top-down methods.

The nanoelectromechanical single electron transistor
A nano-electromechanical single electron transistor (NEMSET). From Blick et al., New J. Phys. 9 (2007) 241.

Save the planet by insulating your house

A surprisingly large fraction of the energy used in developed countries goes into heating and lighting buildings – in the European Union, 40% of all energy is used in buildings. This is an obvious place to look for savings if one is trying to reduce energy consumption without compromising economic activity. A few weeks ago, I reported a talk by Colin Humphreys explaining how much energy could be saved by replacing conventional lighting with light emitting diodes. A recent report commissioned by the UK Government’s Department for Environment, Food and Rural Affairs, Environmentally beneficial nanotechnology – Barriers and Opportunities (PDF file), ranks building insulation as one of the areas in which nanotechnology could make a substantial and immediate contribution to saving energy.

The problem doesn’t arise so much from new buildings; current building regulations in the UK and the EU are quite strict, and the technologies for making very heat efficient buildings are fairly well understood, even if they aren’t always used to the full. It is the existing building stock that is the problem. My own house illustrates this very well; its 3 foot thick solid limestone walls look as handsome and sturdy as when they were built 150 years ago, but the absence of a cavity makes them very poor insulators. To bring them up to modern insulating standards I’d need to dry-line the walls with plasterboard with a foam-filled cavity, at a thickness that would lose a significant amount of the interior volume of the rooms. Is there some magic nanotechnology enabled solution that would allow us to retrofit proper insulation to the existing housing stock in an acceptable way?

The claims made by manufacturers of various products in this area are not always crystal clear, so it’s worth reminding ourselves of the basic physics. Heat is transferred by convection, conduction and radiation. Stopping convection is essentially a matter of controlling the draughts. The amount of heat transmitted by conduction is proportional to the temperature difference and to a material constant called the thermal conductivity, and inversely proportional to the thickness of the material. For solids like brick, concrete and glass, thermal conductivities are around 0.6 – 0.8 W/m.K. As everyone knows, still air is a very good thermal insulator, with a thermal conductivity of 0.024 W/m.K, and the goal of traditional insulation materials, from sheeps’ wool to plastic foam, is to trap air to exploit its insulating properties. Standard building insulation materials, like polyurethane foam, are actually pretty good. A typical commercial product has a thermal conductivity of 0.021 W/m.K; it manages to do a bit better than pure air because the holes in the foam are actually filled with a gas that is heavier than air.
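To put numbers on this, here is a minimal sketch of the conduction law, q = k·ΔT/d, using the conductivities quoted above; the wall thicknesses and the temperature difference are my own illustrative assumptions:

```python
# Steady-state conductive heat flux through a single slab: q = k * dT / d.
# Conductivities are those quoted in the text; the thicknesses and the
# temperature difference are illustrative assumptions.

def heat_flux(k: float, thickness_m: float, delta_t: float) -> float:
    """Heat flux in W/m^2 through a slab of conductivity k (W/m.K)."""
    return k * delta_t / thickness_m

dT = 15.0  # indoor-outdoor temperature difference in kelvin (assumed)
print(f"0.9 m solid stone wall (k=0.7):  {heat_flux(0.7, 0.9, dT):5.1f} W/m^2")
print(f"70 mm PU foam board  (k=0.021):  {heat_flux(0.021, 0.07, dT):5.1f} W/m^2")
```

On these rough numbers, even a three-foot stone wall loses more than twice as much heat per square metre as a few centimetres of foam, which is the nub of the retrofit problem.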

The best known thermal insulators are the fascinating materials known as aerogels. These are incredibly diffuse foams – their densities can be as low as 2 mg/cm3, not much more than air – that resemble nothing so much as solidified smoke. One makes an aerogel by preparing a cross-linked gel (typically from water soluble polymers of silica) and then drying it above the critical point of the solvent, preserving the structure of the gel, in which the strands are essentially single molecules. An aerogel can have a thermal conductivity around 0.008 W/m.K. This is substantially less than the conductivity of the air it traps, essentially because the nanoscale strands of material disrupt the transport of the gas molecules.

Aerogels have been known for a long time, mostly as a laboratory curiosity, with some applications in space where their outstanding properties have justified their very high expense. But it seems that there have been some significant process improvements that have brought the price down to a point where one could envisage using them in the building trade. One of the companies active in this area is the US-based Aspen Aerogels, which markets sheets of aerogel made, for strength, in a fabric matrix. These have a thermal conductivity in the range 0.012 – 0.015 W/m.K. This represents a worthwhile improvement on the standard PU foams, but one shouldn’t overstate its impact; it means that to achieve a given level of thermal insulation one needs an insulating sheet a bit more than half the thickness of a standard material.

Another product, from a company called Industrial Nanotech Inc, looks more radical in its impact. This is essentially an insulating paint; the makers claim that three coats of this material – Nansulate – will provide significant insulation. If true, this would be very important, as it would easily and cheaply solve the problem of retrofitting insulation to the existing housing stock. So, is the claim plausible?

The company’s website gives little in the way of detail, either about the composition of the product or, in quantitative terms, about its effectiveness as an insulator. The active ingredient is referred to as “hydro-NM-Oxide”, a term not well known in science. However, a recent patent filed by the inventor gives us some clues. US patent 7,144,522 discloses an insulating coating consisting of aerogel particles in a paint matrix. This has a thermal conductivity of 0.104 W/m.K. This is probably pretty good for a paint, but it is quite a lot worse than typical insulating foams. What makes matters much worse, of course, is that as a paint it will be applied as a very thin film (the recommended procedure is to use three coats, giving a dry thickness of 7.5 mils, a little less than 0.2 millimeters). Since one needs a thickness of at least 70 millimeters of polyurethane foam to achieve an acceptable value of thermal insulation (a U value of 0.35 W/m2.K), it’s difficult to see how a layer that is both 350 times thinner than this and has a significantly higher thermal conductivity could make a significant contribution to the thermal insulation of a building.
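A minimal sketch of that comparison, treating each layer in isolation and ignoring surface resistances and the rest of the wall build-up:

```python
# U-value of a single homogeneous layer, U = k / d (W/m^2.K): the lower,
# the better the insulation. Figures are those quoted in the text.

def u_value(k: float, thickness_m: float) -> float:
    """U-value of one layer, ignoring surface resistances and other layers."""
    return k / thickness_m

print(f"70 mm polyurethane foam: U = {u_value(0.021, 0.070):6.2f} W/m^2.K")
print(f"0.2 mm aerogel paint:    U = {u_value(0.104, 0.0002):6.0f} W/m^2.K")
```

On these numbers the paint layer, taken alone, passes heat around 1,700 times faster than the foam, which is the quantitative form of the scepticism above.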

More on synthetic biology and nanotechnology

There’s a lot of recent commentary about synthetic biology on Homunculus, the consistently interesting blog of the science writer Philip Ball. There’s much more detail there about the story of the first bacterial genome transplant that I referred to in my last post; his commentary on the story was published last week as a Nature News and Views article (subscription required).

Philip Ball was a participant in a recent symposium organised by the Kavli Foundation “The merging of bio and nano: towards cyborg cells”. The participants in this produced an interesting statement: A vision for the convergence of synthetic biology and nanotechnology. The signatories to this statement include some very eminent figures both from synthetic biology and from bionanotechnology, including Cees Dekker, Angela Belcher, Stephen Chu and John Glass. Although the statement is bullish on the potential of synthetic biology for addressing problems such as renewable energy and medicine, it is considerably more nuanced than the sorts of statements reported by the recent New York Times article.

The case for a linkage between synthetic biology and bionanotechnology is well made at the outset: “Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.” The writers divide the enabling technologies for synthetic biology into hardware and software. For this perspective on synthetic biology, which concentrates on the idea of reprogramming existing cells with synthetic genomes, the crucial hardware is the capability for cheap, accurate DNA synthesis, about which they write: “The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible.” This, of course, also has implications for the use of DNA as a building block for designed nanostructures and devices (see here for an example).

The authors are much more cautious on the software side. “Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, the ability to write new stories is currently beyond our ability – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades.”

The new new thing

It’s fairly clear that nanotechnology is no longer the new new thing. A recent story in Business Week – Nanotech Disappoints in Europe – is not atypical. It takes its lead from the recent difficulties of the UK nanotech company Oxonica, which it describes as emblematic of the nanotechnology sector as a whole: “a story of early promise, huge hype, and dashed hopes.” Meanwhile, in the slightly neophilic world of the think-tanks, one detects the onset of a certain boredom with the subject. For example, Jack Stilgoe writes on the Demos blog “We have had huge fun running around in the nanoworld for the last three years. But there is a sense that, as the term ‘nanotechnology’ becomes less and less useful for describing the diversity of science that is being done, interesting challenges lie elsewhere… But where?”

Where indeed? A strong candidate for the next new new thing is surely synthetic biology. (This will not, of course, be new to regular Soft Machines readers, who will have read about it here two years ago). An article in the New York Times at the weekend gives a good summary of some of the claims. The trigger for the recent prominence of synthetic biology in the news is probably the recent announcement from the Craig Venter Institute of the first bacterial genome transplant. This refers to an advance paper in Science (abstract, subscription required for full article) by John Glass and coworkers. There are some interesting observations on this in a commentary (subscription required) in Science. It’s clear that much remains to be clarified about this experiment: “But the advance remains somewhat mysterious. Glass says he doesn’t fully understand why the genome transplant succeeded, and it’s not clear how applicable their technique will be to other microbes.” The commentary from other scientists is interesting: “Microbial geneticist Antoine Danchin of the Pasteur Institute in Paris calls the experiment ‘an exceptional technical feat.’ Yet, he laments, ‘many controls are missing.’ And that has prevented Glass’s team, as well as independent scientists, from truly understanding how the introduced DNA takes over the host cell.”

The technical challenges of this new field haven’t prevented activists from drawing attention to its potential downsides. Those veterans of anti-nanotechnology campaigning, the ETC group, have issued a report on synthetic biology, Extreme Genetic Engineering, noting that “Today, scientists aren’t just mapping genomes and manipulating genes, they’re building life from scratch – and they’re doing it in the absence of societal debate and regulatory oversight”. Meanwhile, the Royal Society has issued a call for views on the subject.

Looking again at the NY Times article, one can perhaps detect some interesting parallels with the way the earlier nanotechnology debate unfolded. We see, for example, some fairly unrealistic expectations being raised: ““Grow a house” is on the to-do list of the M.I.T. Synthetic Biology Working Group, presumably meaning that an acorn might be reprogrammed to generate walls, oak floors and a roof instead of the usual trunk and branches. “Take over Mars. And then Venus. And then Earth” —the last items on this modest agenda.” And just as the radical predictions of nanotechnology were underpinned by what were in my view inappropriate analogies with mechanical engineering, much of the talk in synthetic biology is underpinned by explicit, but as yet unproven, parallels between cell biology and computer science: “Most people in synthetic biology are engineers who have invaded genetics. They have brought with them a vocabulary derived from circuit design and software development that they seek to impose on the softer substance of biology. They talk of modules — meaning networks of genes assembled to perform some standard function — and of “booting up” a cell with new DNA-based instructions, much the way someone gets a computer going.”

It will be interesting to see how the field of synthetic biology develops, and whether it does a better job than nanotechnology arguably did of steering between overpromised benefits and overdramatised fears. Meanwhile, nanotechnology won’t be going away. Even the sceptical Business Week article concluded that better times lay ahead as the focus in commercialising nanotechnology moved from simple applications of nanoparticles to more sophisticated applications of nanoscale devices: “Potentially even more important is the upcoming shift from nanotech materials to applications—especially in health care and pharmaceuticals. These are fields where Europe is historically strong and already has sophisticated business networks.”