Nanoparticles down the drain

With significant amounts of nanomaterials now entering markets, it’s clearly worth worrying about what’s going to happen to these materials after disposal – is there any danger of them entering the environment and causing damage to ecosystems? These are the concerns of the discipline of nano-ecotoxicology; on the evidence of the conference I was at yesterday in Birmingham, on the environmental effects of nanoparticles, this is an expanding field.

From the range of talks and posters, there seems to be a heavy focus (at least in Europe) on those few nanomaterials which really are entering the marketplace in quantity – titanium dioxide, of sunscreen fame, and nano-silver, with some work on fullerenes. One talk, by Andrew Johnson, of the UK’s Centre for Ecology and Hydrology at Wallingford, showed nicely what the outline of a comprehensive analysis of the environmental fate of nanoparticles might look like. His estimate is that 130 tonnes of nano-titanium dioxide a year is used in sunscreens in the UK – where does this stuff ultimately go? Down the drain and into the sewers, of course, so it’s worth worrying about what happens to it then.

At the sewage plant, solids are separated from the treated water, and the first thing to ask is where the titanium dioxide nanoparticles go. The evidence seems to be that a large majority end up in the sludge. Some 57% of this treated sludge is spread on farmland as fertilizer, while 21% is incinerated and 17% goes to landfill. There’s work to be done, then, in determining what happens to the nanoparticles – do they retain their nanoparticulate identity, or do they aggregate into larger clusters? One needs then to ask whether those that survive are likely to cause damage to soil microorganisms or earthworms. Johnson presented some reassuring evidence about earthworms, but there’s clearly more work to be done here.

Making a series of heroic assumptions, Johnson made some estimates of how many nanoparticles might end up in the river. Taking a worst case scenario, with a drought and heatwave in the southeast of England (they do happen – I’m old enough to remember), he came up with an estimate of 8 micrograms/litre in the Thames, which is still more than an order of magnitude less than the concentrations that have been shown to start to affect, for example, rainbow trout. This is reassuring, but, as one questioner pointed out, one still might worry about the nanoparticles accumulating in sediments to the detriment of filter feeders.
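The arithmetic behind estimates like these is simple enough to sketch. In the snippet below, the annual usage figure and the sludge fractions are those quoted in the talk, but the load reaching the river and the drought flow are assumed placeholder values, chosen only to illustrate how a figure of around 8 micrograms/litre might arise:

```python
# Back-of-envelope sketch of a Johnson-style environmental fate estimate.
# The 130 tonnes/year figure and sludge fractions are from the talk; the
# river load and drought flow below are assumptions for illustration only.

ANNUAL_USE_TONNES = 130  # nano-titanium dioxide used in UK sunscreens per year

# Fate of the treated sewage sludge (fractions quoted in the talk)
sludge_fate = {"farmland": 0.57, "incinerated": 0.21, "landfill": 0.17}
for route, frac in sludge_fate.items():
    print(f"{route}: {ANNUAL_USE_TONNES * frac:.0f} tonnes/year")

# Worst-case river concentration: mass reaching the river divided by flow.
# Both numbers below are assumed, picked to reproduce the ~8 ug/litre
# order of magnitude quoted for a drought-stricken Thames.
escaping_kg_per_day = 10           # assumed daily load reaching the river
drought_flow_m3_per_day = 1.25e6   # assumed low summer flow
conc_ug_per_litre = (escaping_kg_per_day * 1e9) / (drought_flow_m3_per_day * 1000)
print(f"worst-case concentration: {conc_ug_per_litre:.1f} micrograms/litre")
```

The point of a sketch like this isn’t the precise numbers – it’s that each input (usage, sludge partitioning, escape fraction, river flow) is a separately measurable quantity, which is what makes a comprehensive fate analysis tractable.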

The mis-measure of uncertainty

A couple of pieces in the Financial Times today and yesterday offer some food for thought about the problems of commercialising scientific research. Yesterday’s piece – Drug research needs serendipity (free registration may be required) – concentrates on the pharmaceutical sector, but its observations are more widely applicable. Musing on the current problems of big pharma, with their dwindling pipelines of new drugs, David Shaywitz and Nassim Taleb (author of The Black Swan) identify the problem as a failure to deal with uncertainty: “academic researchers underestimated the fragility of their scientific knowledge while pharmaceuticals executives overestimated their ability to domesticate scientific research.”

They identify two types of uncertainty. First, there’s the negative uncertainty of all the things that can go wrong as one tries to move from medical research to treatments. Underlying this is the simple fact that we know much less about human biology, in all its complexity, than one might think from all the positive headlines and press releases. It’s in response to this negative uncertainty that managers have attempted to impose more structure and focus to make the outcome of research more predictable. But why is this generally in vain? “Answer: spreadsheets are easy; science is hard.” According to Shaywitz and Taleb, this approach isn’t just doomed to fail on its own terms, it’s positively counterproductive. This is because it doesn’t leave any room for another type of uncertainty: the positive uncertainty of unexpected discoveries and happy accidents.

Their solution is to embrace the trend we’re already seeing, for big Pharma to outsource more and more of its functions, lowering the barriers to entry and leaving room for “a lean, agile organisation able to capture, consider and rapidly develop the best scientific ideas in a wide range of disease areas and aggressively guide these towards the clinic.”

But how are things for the small and agile companies that are going to be driving innovation in this new environment? Not great, says Jonathan Guthrie in today’s FT, but nonetheless “There is hope yet for science park toilers”. The article considers, from a UK perspective, the problems small technology companies are having raising money from venture capitalists. It starts from the position that the problem isn’t a shortage of money but a shortage of good ideas; perhaps not the end of the age of innovation, but a temporary lull after the excitement of personal computers, the internet and mobile phones. And, for the part of the problem that lies with venture capitalists, misreading this cycle has contributed to their difficulties. In the wake of the technology bubble, venture capital returns aren’t a good advertisement for would-be investors at the moment – “funds set up after 1996 have typically lost 1.4 per cent a year over five years and 1.8 per cent over 10 years, says the British Private Equity and Venture Capital Association.” All is not lost, Guthrie thinks – as the memory of the dotbomb debacles fades, the spectacular returns enjoyed by the most successful technology start-ups will attract money back into the sector. Where will the advances take place? Not in nanotechnology, at least in the form of the nanomaterials sector as it has been understood up to now: “materials scientists have engineered a UK nanotechnology sector so tiny it is virtually invisible.” Instead Guthrie points to renewable energy and power-saving systems.

USA lagging Europe in nanotechnology risk research

How much resource is being devoted to assessing the potential risks of the nanotechnologies that are currently at or close to market? Not nearly enough, say campaigning groups, while governments, on the other hand, release impressive sounding figures for their research spend. Most recently, the USA’s National Nanotechnology Initiative has estimated its 2006 spend on nano-safety research as $68 million, which sounds very impressive. However, according to Andrew Maynard, a leading nano-risk researcher based at the Woodrow Wilson Center in Washington DC, we shouldn’t take this figure at face value.

Maynard comments on the figure on the SafeNano blog, referring to an analysis he recently carried out, described in a news release from the Woodrow Wilson Center’s Project on Emerging Nanotechnologies. It seems that this figure is obtained by adding up all sorts of basic nanotechnology research, some of which might have only tangential relevance to problems of risk. If one applies a tighter definition of research that is either highly relevant to nanotechnology risk – such as a direct toxicology study – or substantially relevant – such as a study of the fate in the body of medical nanoparticles – the numbers fall substantially. Only $13 million of the $68 million was highly relevant to nanotechnology risk, with this number increasing to $29 million if the substantially relevant category is included too. This compares unfavourably with European spending, which amounts to $24 million in the highly relevant category alone.

Of course, it isn’t the headline figure that matters; what’s important is whether the research is relevant to the actual and potential risks that are out there. The Project on Emerging Nanotechnologies has done a great service by compiling an international inventory of nanotechnology risk research which allows one to see clearly just what sort of risk research is being funded across the world. It’s clear from this that suggestions that nanotechnology is being commercialised with no risk research at all being done are wide of the mark; what requires further analysis is whether all the right research is being done.

The Tata Nano

The Tata Nano – the newly announced one lakh (100,000 rupees) car from India’s Tata group – hasn’t got a lot to do with nanotechnology (see this somewhat bemused and bemusing piece from the BBC), but since it raises some interesting issues I’ll use the name as an excuse to discuss it here.

The extensive media coverage in the Western media has been characterised by some fairly outrageous hypocrisy – for example, the UK’s Independent newspaper wonders “Can the world afford the Tata Nano?” (The answer, of course, is that what the world can’t afford are the much bigger cars parked outside all those prosperous Independent readers’ houses). With a visit to India fresh in my mind, it’s completely obvious to me why all those families one sees precariously perched on motor-bikes would want a small, cheap, economical car, and not at all obvious that those of us in the West, who are used to enjoying on average 11 times (for the UK) or 23 times (for the USA) more energy per head than the Indians, have any right to complain about the extra carbon dioxide emissions that will result. It’s almost certainly true that the world couldn’t sustain a situation in which all its 6.6 billion population used as much energy as the Americans and Europeans; the way that equation will be squared, though, ultimately must be by the rich countries getting by with less energy rather than by poorer countries being denied the opportunity to use more. It is to be hoped that this transformation takes place in a way that uses better technology to achieve the same or better living standards for everybody from a lot less energy; the probable alternative is the economic disruption and widespread involuntary cuts in living standards that will follow from a prolonged imbalance of energy supply and demand.

A more interesting question to ask about the Tata Nano is to wonder why it was not possible to leapfrog current technology to achieve something even more economical and sustainable – using, one hesitates to suggest, actual nanotechnology? Why is the Nano made from old-fashioned steel, with an internal combustion engine in the back, rather than, say, being made from advanced lightweight composites and propelled by an electric motor and a hydrogen fuel cell? The answers are actually fairly clear – because of cost, the technological capacity of this (or any other) company, and the requirement for maintainability. Aside from these questions, there’s the problem of infrastructure. The problems of creating an infrastructure for hydrogen as a fuel are huge for any country; liquid hydrocarbons are a very convenient store of energy, and, old though it is, the internal combustion engine is a pretty effective and robust device for converting energy. Of course, we can hope that new technologies will lead to new versions of the Tata Nano and similar cars of far greater efficiency, though realism demands that we understand the need for new technology to fit into existing techno-social systems to be viable.

Nanotechnology in Korea

One of my engagements in a slightly frantic period last week was to go to a UK-Korea meeting on collaboration in nanotechnology. This had some talks which gave a valuable insight into how the future of nanotechnology is seen in Korea. It’s clearly seen as central to their program of science and technology; according to some slightly out-of-date figures I have to hand about government spending on nanotechnology, Korea ranks 5th, after the USA, Japan, Germany and France, and somewhat ahead of the UK. Dr Hanjo Lim, of the Korea Science and Engineering Foundation, gave a particularly useful overview.

He started out by identifying the different ways in which going small helps. Nanotechnology exploits a confluence of three types of benefits. Nanomaterials exploit surface matter, in which benefits arise from their high surface to volume ratio, with the most obvious applications in catalysis. They exploit quantum matter – the size-dependent quantum effects that are so important for band gap engineering and for making quantum dots. And they can exploit soft matter, which is so important for the bio-nano interface. As far as Korea is concerned, as a small country with a well-developed industrial base, he sees four important areas. Applications in information and communication technology will directly impact the strong position Korea has in the semiconductor and display industries, as well as having an impact on automobiles. Robots and ubiquitous devices play to Korea’s general comparative advantage in manufacturing, but applications in nano-foods and medical science are relatively weak in Korea at the moment. Finally, the environmentally important applications in fuel cells, solar cells, and air and water treatment will be of growing importance in Korea, as everywhere else.

Korea ranks 4th or 5th in the world in terms of nano-patents; the plan is, up to 2010, to expand existing strength in nanotechnology and industrialise it by developing technology specific to applications. Beyond that the emphasis will be on systems-level integration and commercialisation of those developments. Clearly, in electronics we are already in the nano-era. Korea has a dominant position in flash memory, where Hwang’s law – that memory density doubles every year – represents a more aggressive scaling than Moore’s law. To maintain this will require perhaps carbon nanotubes or silicon nanowires. Lim finds nanotubes very attractive, but given the need for control of chirality and position, his prediction is that commercialisation is still more than 10 years away. An area that he thinks will grow in importance is the integration of optical interconnects in electronics. This, in his view, will be driven by the speed and heat issues in CPUs that arise from metal interconnects – he reminds us that a typical CPU has 10 km of electrical wire, so it’s no wonder that heat generation is a big problem, and Google’s data centres come equipped with 5-storey cooling towers. Nanophotonics will enable the integration of photonic components within silicon multi-chip CPUs – but the problem that silicon is not good for lasers will have to be overcome: either lasers off the chip will have to be used, or silicon laser diodes developed. His prognosis, recognising that we have box-to-box optical interconnects now and that board-to-board interconnects are coming, is that we will have chip-to-chip interconnects on the 1–10 cm scale by 2010, with intrachip interconnects by 2010–2015.
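The gap between the two scaling laws compounds quickly. A quick sketch (taking Moore’s law, for illustration, as density doubling every two years – a common reading of it) makes the point:

```python
# Illustrative comparison of Hwang's law (memory density doubling every
# year) with Moore's law (taken here as doubling every two years).

def density_growth(years, doubling_time_years):
    """Relative density after `years`, doubling every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

for years in (2, 5, 10):
    hwang = density_growth(years, 1.0)
    moore = density_growth(years, 2.0)
    print(f"after {years:2d} years: Hwang x{hwang:.0f}, Moore x{moore:.1f}")
```

After a decade, the more aggressive law implies a thousand-fold density increase against Moore’s thirty-two-fold – which is why sustaining it forces the search for genuinely new device technologies like nanotubes or nanowires.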

Anyone interested in more general questions of the way the Korean innovation system is developing will find much to interest them in a recent Demos pamphlet: Korea: Mass innovation comes of age. Meanwhile, I’ll soon be reporting on nanotechnology in another part of Asia; I’m writing this from Bangalore/Bengalooru in India, where I will be talking tomorrow at Bangalore Nano 2007.

Giant magnetoresistance – from the iPod to the Nobel Prize

This year’s Nobel Prize for Physics, it was announced today, has been awarded to Albert Fert, from Orsay, Paris, and Peter Grünberg, from the Jülich research centre in Germany, for their discovery of giant magnetoresistance, an effect whereby a structure of layers of alternating magnetic and non-magnetic materials, each only a few atoms thick, has an electrical resistance that is very strongly changed by the presence of a magnetic field.

The discovery was made in 1988, and at first seemed an interesting but obscure piece of solid state physics. But very soon it was realised that this effect would make it possible to make very sensitive magnetic read heads for hard disks. On a hard disk drive, information is stored as tiny patterns of magnetisation. The higher the density of information one is trying to store on a hard drive, the weaker the resulting magnetic field, and so the more sensitive the read head needs to be. The new technology was launched onto the market in 1997, and it is this technology that has made possible the ultra-high density disk drives that are used in MP3 players and digital video recorders, as well as in laptops.

The rapidity with which this discovery was commercialised is remarkable. One probably can’t rely on this happening very often, but this is a salutary reminder that sometimes discoveries can move from the laboratory to a truly industry-disrupting product very quickly indeed, if the right application can be found, and if the underlying technology (in this case the nanotechnology required for making highly uniform films only a few atoms thick) is in place.

Towards the $1000 human genome

It currently costs about a million dollars to sequence an individual human genome. One can expect incremental improvements in current technology to drop this price to around $100,000, but the need that current methods have to amplify the DNA will make it difficult for this price to drop further. So, to meet the widely publicised target of a $1000 genome, a fundamentally different technology is needed. One very promising approach uses the idea of threading a single DNA molecule through a nanopore in a membrane, and identifying each base by changes in the ion current flowing through the pore. I wrote about this a couple of years ago, and a talk I heard yesterday from one of the leaders in the field prompts me to give an update.

The original idea for this came from David Deamer and Dan Branton, who filed a patent for the general scheme in 1998. Hagan Bayley, from Oxford, whose talk I heard yesterday, has been collaborating with Reza Ghadiri from Scripps to implement this scheme using a naturally occurring pore-forming protein, alpha-hemolysin, as the reader.

The key issues are the need to get resolution at a single base level, and the correct identification of the bases. They get extra selectivity by a combination of modification of the pore by genetic engineering, and insertion into the pore of small ring molecules – cyclodextrins. At the moment speed of reading is a problem – when the molecules are pulled through by an electric field they tend to go a little too fast. But, in an alternative scheme in which bases are chopped off the chain one by one and dropped into the pore sequentially, they are able to identify individual bases reliably.

Given that the human genome has about 6 billion bases, they estimate that at 1 millisecond reading time per base they’ll need to use 1000 pores in parallel to sequence a genome in under a day (taking into account the need for a certain amount of redundancy for error correction). To prepare the way for commercialisation of this technology, they have a start-up company – Oxford NanoLabs – which is working on making a miniaturised and rugged device, about the size of a palm-top computer, to do this kind of analysis.
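It’s worth checking the arithmetic behind that claim. Taking the genome as roughly 6 billion bases (the diploid human count) and assuming, purely for illustration, a ten-fold read redundancy for error correction:

```python
# Rough throughput arithmetic for nanopore sequencing. Genome size, read
# time and pore count are the figures quoted; the redundancy factor is an
# assumption for illustration.

GENOME_BASES = 6e9    # ~6 billion bases (diploid human genome)
READ_TIME_S = 1e-3    # 1 millisecond per base
N_PORES = 1000        # pores reading in parallel
REDUNDANCY = 10       # assumed coverage for error correction

total_base_reads = GENOME_BASES * REDUNDANCY
seconds = total_base_reads * READ_TIME_S / N_PORES
print(f"time for one genome: {seconds / 3600:.0f} hours")
```

Even with ten-fold coverage this comes out at around 17 hours – comfortably under a day, which is presumably the kind of calculation behind the 1000-pore figure.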

Stochastic sensor
Schematic of a DNA reader using the pore forming protein alpha-hemolysin. As the molecule is pulled through the pore, the ionic conduction through the pore varies, giving a readout of the sequence of bases. From the website of the Theoretical and Computational Biophysics group at the University of Illinois at Urbana-Champaign.

Three good reasons to do nanotechnology: 2. For healthcare and medical applications

Part 1 of this series of posts dealt with applications of nanotechnology for sustainable energy. Here I go on to describe why so many people are excited about the possibilities for applying nanotechnology in medicine and healthcare.

It should be no surprise that medical applications of nanotechnology are very prominent in many people’s research agenda. Despite near universal agreement about the desirability of more medical research, though, there are some tensions in the different visions people have of future nanomedicine. To the general public the driving force is often the very personal experience most people have of illness in themselves or people close to them, and there’s a lot of public support for more work aimed at the well known killers of the western world, such as cardiovascular disease, cancer, and degenerative diseases like Alzheimer’s and Parkinson’s. Economic factors, though, are important for those responsible for supplying healthcare, whether that’s the government or a private sector insurer. Maybe it’s a slight exaggeration to say that the policy makers’ ideal would be for people to live in perfect health until they were 85 and then tidily drop dead, but it’s certainly true that the prospect of an ageing population demanding more and more expensive nursing care is one that is exercising policy-makers in a number of prosperous countries. In the developing world, there are many essentially political and economic issues which stand in the way of people being able to enjoy the levels of health we take for granted in Europe and the USA, and matters like the universal provision of clean water are very important. Important though the politics of public health is, the diseases that blight the developing world, such as AIDS, tuberculosis and malaria, still present major science challenges. Finally, back in the richest countries of the world, there’s a climate of higher expectations of medicine, where people look to medicine to do more than fix obvious physical ailments, and to move into the realm of human enhancement and the prolonging of life beyond what might formerly be regarded as a “natural” lifespan.

So how can nanotechnology help? There are three broad areas.

1. Therapeutic applications of nanotechnology. An important area of focus for medical applications of nanotechnology has been in the area of drug delivery. This begins from the observation that when a patient takes a conventionally delivered drug, an overwhelmingly large proportion of the administered drug molecules don’t end up acting on the biological systems that they are designed to affect. This is a serious problem if the drug has side effects; the larger the dose that has to be administered to be sure that some of the molecule actually gets to the place where it is needed, the worse these side-effects will be. This is particularly obvious, and harrowing, for intrinsically toxic molecules such as the drugs used for cancer chemotherapy. Another important driving force for improving delivery mechanisms is the fact that, rather than the simple and relatively robust small molecules that have been the main active ingredients in drugs to date, we are turning increasingly to biological molecules like proteins (such as monoclonal antibodies) and nucleic acids (for example, DNA for gene therapy and small interfering RNAs). These allow very specific interventions into biological processes, but the molecules are delicate, and are easily recognised and destroyed in the body. To deliver a drug, current approaches include attaching it to a large water soluble polymer molecule which is essentially invisible to the body, or wrapping it up in a self-assembled nanoscale bag – a liposome – formed from soap-like molecules such as phospholipids or block copolymers. Attaching the drug to a dendrimer – a nanoscale treelike structure which may have a cavity in its centre – is conceptually midway between these two approaches.
The current examples of drug delivery devices that have made it into clinical use are fairly crude, but future generations of drug delivery vehicles can be expected to include “stealth” coatings to make them less visible to the body, mechanisms for targeting them to their destination tissue or organ and mechanisms for releasing their payload when they get there. They may also incorporate systems for reporting their progress back to the outside world, even if this is only the passive device of containing some agent that shows up strongly in a medical scanner.

Another area of therapeutics in which nanotechnology can make an impact is in tissue engineering and regenerative medicine. Here it’s not so much a question of making artificial substitutes for tissues or organs; ideally it would be in providing the environment in which a patient’s own cells would develop in such a way as to generate new tissue. This is a question of persuading those cells to differentiate to take up the specialised form of a particular organ. Our cells are social organisms, which respond to chemical and physical signals as they develop and differentiate to produce tissues and organs, and the role of nanotechnology here is to provide an environment (or scaffold) which gives the cells the right physical and chemical signals. Once again, self-assembly is one way forward here, providing soft gels which can be tagged with the right chemical signals to persuade the cells to do the right thing.

2. Diagnostics. Many disease states manifest themselves by the presence of specific molecules, so the ability to detect and identify these molecules quickly and reliably, even when they are present at very low concentrations, would be very helpful for the rapid diagnosis of many different conditions. The relevance of nanotechnology is that many of the most sensitive ways of detecting molecules rely on interactions between the molecule and a specially prepared surface; the much greater importance of the surface relative to the bulk for nanostructured materials makes it possible to build sensors of great sensitivity. Sensors for the levels of relatively simple chemicals, such as glucose or thyroxine, could be integrated with devices that release the chemicals needed to rectify any imbalances (these integrated devices go by the dreadful neologism “theranostics”); recognising pathogens by recognising stretches of DNA would give a powerful way of identifying infectious diseases without the need for time-consuming and expensive culturing steps. One obvious and much-pursued goal would be to find a way of reading a whole DNA sequence at the single molecule level, making it possible to obtain an individual’s whole genome cheaply.

3. Innovation and biomedical research. A contrarian point of view, which I’ve heard frequently and forcibly expressed by a senior figure from the UK’s pharmaceutical industry, is that the emphasis in nanomedicine on drug delivery is misguided, because fundamentally what it represents is an attempt to rescue bad drug candidates. In this view the place to apply nanotechnology is the drug discovery process itself. It’s a cause for concern for the industry that it seems to be getting harder and more expensive to find new drug candidates, and the hopes that were pinned a few years ago on the use of large scale combinatorial methods don’t seem to be working out. In this view, there should be a move away from these brute force approaches to more rational methods, but this time informed by the very detailed insights into cell biology offered by the single molecule methods of bionanotechnology.

Nanotechnology and visions of the future (part 2)

This is the second part of an article I was asked to write to explain nanotechnology and the debates surrounding it to a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. The first installment can be read here.


There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem to be very technical in nature. So a question about whether a Drexler-style assembler is technically feasible can rapidly descend into details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the argument, which are actually based on clashes of ideology. What are the ideological divisions that underlie debates about nanotechnology?

Underlying the most radical visions of nanotechnology is an equally radical ideology – transhumanism. The basis of this movement is a teleological view of human progress which views technology as the vehicle, not just for the improvement of the lot of humanity, but for the transcendence of those limitations that non-transhumanists would consider to be an inevitable part of the human condition. The most pressing of these limitations is, of course, death, so transhumanists look forward to nanotechnology providing a permanent solution to this problem. In the first instance, this will be effected by nanomedicine, which they anticipate as making possible cell-by-cell repairs to any damage. Beyond this, some transhumanists believe that computers of such power will become available that they will constitute true artificial intelligence. At this point, they imagine a merging of human and machine intelligence, in a way that would effectively constitute the evolution of a new and improved version of humankind.

The notion that the pace of technological change is continually accelerating is an article of faith amongst transhumanists. From it follows the idea that this accelerating rate of change will culminate in a point beyond which the future is literally inconceivable. This point they refer to as “the singularity”, and discussions of this hypothetical event take on a highly eschatological tone. This is captured in science fiction writer Cory Doctorow’s dismissive but apt phrase for the singularity: “the rapture of the nerds”.

This worldview carries with it the implication that an accelerating pace of innovation is not just a historical fact, but also a moral imperative. This is because it is through technology that humanity will achieve its destiny, which is nothing less than to transcend its own current physical and mental limitations. The achievement of radical nanotechnology is central to this project, and for this reason transhumanists tend to share a strong conviction not only that radical nanotechnology along Drexlerian lines is possible, but also that its development is morally necessary.

Transhumanism can be considered to be the extreme limit of views that combine strong technological determinism with a highly progressive view of the development of humanity. It is a worldwide movement, but it’s probably fair to say that its natural home is California, its main constituency is amongst those involved in information technology, and it is associated predominantly, if not exclusively, with a strongly libertarian streak of politics, though paradoxically not dissimilar views seem to be attractive to a certain class of former Marxists.

Given that transhumanism as an ideology does not seem to have a great deal of mass appeal, it’s tempting to underplay its importance. This may be a mistake; amongst its adherents are a number of figures with very high media profiles, particularly in the United States, and transhumanist ideas have entered mass culture through science fiction, films and video games. Certainly some conservative and religious figures have felt threatened enough to express some alarm, notably Francis Fukuyama, who has described transhumanism as “the world’s most dangerous idea”.

Global capitalism and the changing innovation landscape
If it is the radical futurism of the transhumanists that has put nanotechnology into popular culture, it is the prospect of money that has excited business and government. Nanotechnology is seen by many worldwide as the major driver of economic growth over the next twenty years, filling the role that information technology has filled over the last twenty years. Breathless projections of huge new markets are commonplace, with the prediction by the US National Nanotechnology Initiative of a trillion dollar market for nanotechnology products by 2015 being the most notorious of these. It is this kind of market projection that underlies a worldwide spending boom on nanotechnology research, which encompasses not only established science and technology powerhouses like the USA, Germany and Japan, but also fast-developing countries like China and India.

The emergence of nanotechnology has corresponded with some other interesting changes in the commercial landscape in technologically intensive sectors of the economy. The types of incremental nanotechnology that have been successfully commercialised so far have involved nanoparticles, such as the ones used in sunscreens, or coatings, of the kind used in stain-resistant fabrics. This sort of innovation is the province of the speciality chemicals sector, and one cynical view of the prominence of the nanotechnology label amongst new and old companies is that it has allowed companies in this rather unfashionable sector of the market to rebrand themselves as being part of the newest new thing, with correspondingly higher stock market valuations and easier access to capital. On the other hand, this does perhaps signal a more general change in the way science-driven innovations reach the market.

Many of the large industrial conglomerates that were such prominent parts of the industrial landscape in Western countries up to the 1980s have been broken up or drastically shrunk. Arguably, the monopoly rents that sustained these combines were what made possible the very large and productive corporate laboratories that were the source of much innovation at that time. This has been replaced by a much more fluid scene in which many functions of companies, including research and innovation, have been outsourced. In this landscape, one finds nanotechnology companies like Oxonica, which are essentially holding companies for intellectual property, with functions that in the past would have been regarded as of core importance, such as manufacturing and marketing, outsourced to contractors, often located in different countries.

Even the remaining large companies have embraced the concept of “open innovation”, in which research and development is regarded as a commodity to be purchased on the open market (and, indeed, outsourced to low cost countries) rather than a core function of the corporation. It is in this light that one should understand the new prominence of intellectual property as something fungible and readily monetised. Universities and other public research institutes, strongly encouraged to seek new sources of funding other than direct government support, have made increasing efforts to spin-out new companies based on intellectual property developed by academic researchers.

In the light of all this, it’s easy to see nanotechnology as one aspect of a more general shift to what the social scientist Michael Gibbons has called Mode 2 knowledge production[4]. In this view, traditional academic values are being eclipsed by a move to more explicitly goal-oriented and highly interdisciplinary research, in which research priorities are set not by the values of the traditional disciplines, but by perceived market needs and opportunities. It is clear that this transition has been underway for some time in the life sciences, and in this view the emergence of nanotechnology can be seen as a spread of these values to the physical sciences.

Environmentalist opposition
In the UK at least, the opposition to nanotechnology has been spearheaded by two unlikely bedfellows. The issue was first propelled into the news by the intervention of Prince Charles, who raised the subject in newspaper articles in 2003 and 2004. These articles directly echoed concerns raised by the small campaigning group ETC[5]. ETC cast nanotechnology as a direct successor to genetic modification; to summarise this framing, whereas in GM scientists had intervened directly in the code of life, in nanotechnology they would meddle with the very atomic structure of matter itself. ETC’s background included a strong record of campaigning on behalf of third world farmers against agricultural biotechnology, so in their view nanotechnology, with its spectre of the possible patenting of new arrangements of atoms and the potential replacement of commodities such as copper and cotton by nanoengineered substitutes controlled by multinationals, was to be opposed as an intrinsic part of the agenda of globalisation. Complementing this rather abstract critique was a much more concrete concern that nanoscale materials might be more toxic than their conventional counterparts, and that current regulatory regimes for the control of environmental exposure to chemicals might not adequately recognise these new dangers.

The latter concern has gained a considerable degree of traction, largely because there has been a very widespread consensus that the issue has some substance. At the time of the Prince’s intervention in the debate (and quite possibly because of it) the UK government commissioned a high-level independent report on the issue from the Royal Society and the Royal Academy of Engineering. This report recommended a program of research and regulatory action on the subject of possible nanoparticle toxicity[6]. Public debate about the risks of nanotechnology has largely focused on this issue, fuelled by a government response to the Royal Society that has been widely considered to be quite inadequate. However, one might regret that the debate has become so focused on this rather technical issue of risk, to the exclusion of wider issues about the potential impacts of nanotechnology on society.

To return to the more fundamental worldviews underlying this critique of nanotechnology, whether they be the rather romantic, ruralist conservatism of the Prince of Wales, or the anti-globalism of ETC, the common feature is a general scepticism about the benefits of scientific and technological “progress”. An extremely eloquent exposition of one version of this point of view is to be found in a book by US journalist Bill McKibben[7]. The title of McKibben’s book – “Enough” – is a succinct summary of its argument; surely we now have enough technology for our needs, and new technology is likely only to lead to further spiritual malaise, through excessive consumerism, or in the case of new and very powerful technologies like genetic modification and nanotechnology, to new and terrifying existential dangers.

Bright greens
Despite the worries about the toxicology of nanoscale particles, and the involvement of groups like ETC, it is notable that all-out opposition to nanotechnology has not yet fully crystallised. In particular, groups such as Greenpeace have not yet articulated a position of unequivocal opposition. This reflects the fact that nanotechnology really does seem to have the potential to provide answers to some pressing environmental problems. For example, there are real hopes that it will lead to new types of solar cells that can be produced cheaply in very large areas. Applications of nanotechnology to problems of water purification and desalination have obvious potential impacts in the developing world. Of course, these kinds of problems have major political and social dimensions, and technical fixes by themselves will not be sufficient. However, the prospects that nanotechnology may be able to make a significant contribution to sustainable development have proved convincing enough to keep mainstream environmental movements at least neutral on the issue.

While some mainstream environmentalists may still remain equivocal in their view of nanotechnology, another group seems to be embracing new technologies with some enthusiasm as providing new ways of maintaining high standards of living in a fully sustainable way. Such “bright greens” dismiss the rejection of industrialised economies and the yearning to return to a rural lifestyle implicit in the “deep green” worldview, and look to the use of new technology, together with imaginative design and planning, to create sustainable urban societies[8]. In this point of view, nanotechnology may help, not just by enabling large scale solar power, but by facilitating an intrinsically less wasteful industrial ecology.


If there is (or indeed, ever was) a time in which there was an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”. This immediately raises the question “whose goals?” It is this question that underlies recent calls for a greater degree of democratic involvement in setting scientific priorities[9]. It is important that these debates don’t simply concentrate on technical issues. Nanotechnology provides a fascinating and evolving example of the complexity of the interaction between science, technology and wider currents in society. Nanotechnology, with other new and emerging technologies, will have a huge impact on the way society develops over the next twenty to fifty years. Recognising the importance of this impact does not by any means imply that one must take a technologically deterministic view of the future, though. Technology co-evolves with society, and the direction it takes is not necessarily pre-determined. Underlying the directions in which it is steered are a set of competing visions about the directions society should take. These ideologies, which often are left implicit and unexamined, need to be made explicit if a meaningful discussion of the implications of the technology is to take place.

[4] Gibbons, M, et al. (1994) The New Production of Knowledge. London: Sage.
[5] David Berube (in his book Nano-hype, Prometheus, NY 2006) explicitly links the two interventions, and identifies Zac Goldsmith, millionaire organic farmer and editor of “The Ecologist” magazine, as the man who introduced Prince Charles to nanotechnology and the ETC critique. This could be significant, in view of Goldsmith’s current prominence in Conservative Party politics.
[6] Nanoscience and nanotechnologies: opportunities and uncertainties, Royal Society and Royal Academy of Engineering, available from
[7] Enough: Staying Human in an Engineered Age, Bill McKibben, Henry Holt, NY (2003)
[8] For a recent manifesto, see Worldchanging: a user’s guide for the 21st century, Alex Steffen (ed.), Harry N. Abrams, NY (2006)
[9] See for example See-through Science: why public engagement needs to move upstream, Rebecca Willis and James Wilsdon, Demos (2004)

Nanotechnology and visions of the future (part 1)

Earlier this year I was asked to write an article explaining nanotechnology and the debates surrounding it for a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. Here is the unedited version, in installments. Regular readers of the blog will be familiar with most of the arguments already, but I hope they will find it interesting to see it all in one place.


Few new technologies have been accompanied by such expansive promises of their potential to change the world as nanotechnology. For some, it will lead to a utopia, in which material want has been abolished and disease is a thing of the past, while others see apocalypse and even the extinction of the human race. Governments and multinationals round the world see nanotechnology as an engine of economic growth, while campaigning groups foresee environmental degradation and a widening of the gap between the rich and poor. But at the heart of these arguments lies a striking lack of consensus about what the technology is or will be, what it will make possible and what its dangers might be. Technologies don’t exist or develop in a vacuum, and nanotechnology is no exception; arguments about the likely, or indeed desirable, trajectory of the technology are as much about their protagonists’ broader aspirations for society as about nanotechnology itself.


Nanotechnology is not a single technology in the way that nuclear technology, agricultural biotechnology and semiconductor technology are. There is, as yet, no distinctive class of artefacts that can be unambiguously labelled as the product of nanotechnology. It is still, by and large, an activity carried out in laboratories rather than factories, yet the distinctive output of nanotechnology is the production and characterisation of some kind of device, rather than the kind of furthering of fundamental understanding that we would expect from a classical discipline such as physics or chemistry.

What unites the rather disparate group of applied sciences that are referred to as nanotechnologies is simply the length-scale on which they operate. Nanotechnology concerns the creation and manipulation of objects whose size lies somewhere between a nanometre and a few hundred nanometres. To put these numbers in context, it’s worth remembering that as unaided humans, we operate over a range of length-scales that spans a factor of a thousand or so, which we could call the macroscale. Thus the largest objects we can manipulate unaided are about a metre or so in size, while the smallest objects we can manipulate comfortably are about one millimetre. With the aid of light microscopes and tools for micromanipulation, we can also operate on another set of smaller length-scales, which also spans a factor of a thousand. The upper end of the microscale is thus defined by a millimetre, while the lower end is defined by objects about a micron in size. This is roughly the size of a red blood cell or a typical bacterium, and is about the smallest object that can be easily discerned in a light microscope.

The nanoscale is smaller yet. A micron is one thousand nanometres, and one nanometre is about the size of a medium-sized molecule. So we can think of the lower limit of the nanoscale as being defined by the size of individual atoms and molecules, while the upper limit is defined by the resolution limit of light microscopes (this limit is somewhat vaguer, and one sometimes sees apparently more exact definitions, such as 100 nm, but these in my view are entirely arbitrary).
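The three factor-of-a-thousand regimes described above can be sketched numerically. This is only an illustrative classifier using the approximate boundaries from the text; the function name and the exact cut-offs are my own, and the real scale limits are rough, not sharp:

```python
# The three regimes, each spanning a factor of ~1000.
# Boundaries (in metres) are the rough figures from the text, not exact limits.
MACRO = (1e-3, 1.0)    # macroscale: ~1 mm up to ~1 m (unaided human manipulation)
MICRO = (1e-6, 1e-3)   # microscale: ~1 micron up to ~1 mm (light microscopes)
NANO  = (1e-9, 1e-6)   # nanoscale: ~1 nm (a molecule) up to ~1 micron

def scale_of(size_m):
    """Classify an object by the length-scale regime its size falls into."""
    for name, (lo, hi) in [("macro", MACRO), ("micro", MICRO), ("nano", NANO)]:
        if lo <= size_m < hi:
            return name
    return "outside"

# A red blood cell (~1 micron) sits right at the micro/nano boundary:
print(scale_of(1e-6))
```

With these cut-offs, a red blood cell classifies as "micro" and a medium-sized molecule as "nano", matching the examples in the text.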

A number of special features make operating at the nanoscale distinctive. Firstly, there is the question of the tools one needs to see nanoscale structures and to characterise them. Conventional light microscopes cannot resolve structures this small. Electron microscopes can achieve atomic resolution, but they are expensive, difficult to use and prone to artefacts. A new class of techniques – scanning probe microscopies such as scanning tunnelling microscopy and atomic force microscopy – has recently become available which can probe the nanoscale, and the uptake of these relatively cheap and accessible methods has been a big factor in creating the field of nanotechnology.

More fundamentally, the properties of matter often change in interesting and unexpected ways when its dimensions are shrunk to the nanoscale. As a particle becomes smaller, it becomes proportionally more influenced by its surface, which often leads to increases in chemical reactivity. These changes may be highly desirable, yielding, for example, better catalysts for effecting chemical transformations more efficiently, or undesirable, in that they can lead to increased toxicity. Quantum mechanical effects can become important, particularly in the way electrons and light interact, and this can lead to striking and useful effects such as size-dependent colour changes. (It’s worth stressing here that while quantum mechanics is counter-intuitive and somewhat mysterious to the uninitiated, it is very well understood and produces definite and quantitative predictions. One sometimes reads that “the laws of physics don’t apply at the nanoscale”. This of course is quite wrong; the laws apply just as they do on any other scale, but sometimes they have different consequences.) The continuous restless activity of Brownian motion, the manifestation of heat energy at the nanoscale, dominates. These differences in the way physics works at the nanoscale offer opportunities to achieve new effects, but also mean that our intuitions may not always be reliable.
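The growing influence of the surface as a particle shrinks follows from simple geometry: for a sphere, the surface-area-to-volume ratio is 3/r, so shrinking the radius by some factor raises the ratio by exactly that factor. A minimal sketch (the function and the particular sizes chosen are illustrative, not from the text):

```python
def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    return 3.0 / radius_m

# Compare a 1 mm grain with a 10 nm nanoparticle of the same material:
bulk = surface_to_volume(1e-3)   # ~3,000 per metre
nano = surface_to_volume(10e-9)  # ~300,000,000 per metre
print(nano / bulk)               # the ratio grows 100,000-fold as the radius shrinks 100,000-fold
```

This is why a material that is chemically sluggish in bulk form can become highly reactive (or more toxic) as a nanoparticle: a far larger fraction of its atoms sit at or near the surface.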

One further feature of the nanoscale is that it is the length scale on which the basic machinery of biology operates. Modern molecular biology and biophysics have revealed a great deal about the sub-cellular apparatus of life, uncovering the structure and mode of operation of the astonishingly sophisticated molecular-scale machines that are the basis of all organisms. This is significant in a number of ways. Cell biology provides an existence proof that it is possible to make sophisticated machines on the nanoscale, and it provides a model for making such machines. It even provides a toolkit of components that can be isolated from living cells and reassembled in synthetic contexts – this is the enterprise of bionanotechnology. The correspondence of length scales also brings hope that nanotechnology will make it possible to make very specific and targeted interventions into biological systems, leading, it is hoped, to new and powerful methods for medical diagnostics and therapeutics.

Nanotechnology, then, is an eclectic mix of disciplines, including elements of chemistry, physics, materials science, electrical engineering, biology and biotechnology. The way this new discipline has emerged from many existing disciplines is itself very interesting, as it illustrates an evolution of the way science is organised and practised that has occurred largely in response to external events.

The founding myth of nanotechnology places its origin in a lecture given by the American physicist Richard Feynman in 1959, published in 1960 under the title “There’s plenty of room at the bottom”. This didn’t explicitly use the word nanotechnology, but it expressed in visionary and exciting terms the many technical possibilities that would open up if one were able to manipulate matter and make engineering devices on the nanoscale. This lecture is widely invoked by enthusiasts for nanotechnology of all types as laying down the fundamental challenges of the subject, its importance endorsed by the iconic status of Feynman as perhaps the greatest native-born American physicist. However, it seems that the identification of this lecture as a foundational document is retrospective, as there is not much evidence that it made a great deal of impact at the time. Feynman himself did not devote very much further work to these ideas, and the paper was rarely cited until the 1990s.

The word nanotechnology itself was coined by the Japanese scientist Norio Taniguchi in 1974 in the context of ultra-high precision machining. However, the writer who unquestionably propelled the word and the idea into the mainstream was K. Eric Drexler. Drexler wrote a popular and bestselling book “Engines of Creation”, published in 1986, which launched a futuristic and radical vision of a nanotechnology that transformed all aspects of society. In Drexler’s vision, which explicitly invoked Feynman’s lecture, tiny assemblers would be able to take apart and put together any type of matter atom by atom. It would be possible to make any kind of product or artefact from its component atoms at virtually no cost, leading to the end of scarcity, and possibly the end of the money economy. Medicine would be revolutionised; tiny robots would be able to repair the damage caused by illness or injury at the level of individual molecules and individual cells. This could lead to the effective abolition of ageing and death, while a seamless integration of physical and cognitive prostheses would lead to new kinds of enhanced humans. On the downside, free-living, self-replicating assemblers could escape into the wild, outcompete natural life-forms by virtue of their superior materials and design, and transform the earth’s ecosphere into “grey goo”. Thus, in the vision of Drexler, nanotechnology was introduced as a technology of such potential power that it could lead either to the transfiguration of humanity or to its extinction.

There are some interesting and significant themes underlying this radical, “Drexlerite” conception of nanotechnology. One of them is the idea of matter as software. Implicit in Drexler’s worldview is the idea that the nature of all matter can be reduced to a set of coordinates of its constituent atoms. Just as music can be coded in digital form on a CD or MP3 file, and moving images can be reduced to a string of bits, it’s possible to imagine any object, whether an everyday tool, a priceless artwork, or even a natural product, being coded as a string of atomic coordinates. Nanotechnology, in this view, provides an interface between the software world and the physical world; an “assembler” or “nanofactory” generates an object just as a digital printer reproduces an image from its digital, software representation. It is this analogy that seems to make the Drexlerian notion of nanotechnology so attractive to the information technology community.

Predictions of what these “nanofactories” might look like have a very mechanistic feel to them. “Engines of Creation” had little in the way of technical detail supporting it, and included some imagery that felt quite organic and biological. However, following the popular success of “Engines”, Drexler developed his ideas at a more detailed level, publishing another, much more technical book in 1992, called “Nanosystems”. This develops a conception of nanotechnology as mechanical engineering shrunk to atomic dimensions, and it is in this form that the idea of nanotechnology has entered the popular consciousness through science fiction, films and video games. Perhaps the best of all these cultural representations is the science fiction novel “The Diamond Age” by Neal Stephenson, whose conscious evocation of a future shaped by a return to Victorian values rather appropriately mirrors the highly mechanical feel of Drexler’s conception of nanotechnology.

The next major development in nanotechnology was arguably political rather than visionary or scientific. In 2000, President Clinton announced a National Nanotechnology Initiative, with funding of $497 million a year. This initiative survived, and even thrived on, the change of administration in the USA, receiving further support, and funding increases from President Bush. Following this very public initiative from the USA, other governments around the world, and the EU, have similarly announced major funding programs. Perhaps the most interesting aspect of this international enthusiasm for nanotechnology at government level is the degree to which it is shared by countries outside those parts of North America, Europe and the Pacific Rim that are traditionally associated with a high intensity of research and development. India, China, Brazil, Iran and South Africa have all designated nanotechnology as a priority area, and in the case of China at least there is some evidence that their performance and output in nanotechnology is beginning to approach or surpass that of some Western countries, including the UK.

Some of the rhetoric associated with the US National Nanotechnology Initiative in its early days was reminiscent of the vision of Drexler – notably, an early document was entitled “Nanotechnology: shaping the world atom by atom”. Perhaps it was useful that such a radical vision for the world changing potential of nanotechnology was present in the background; even if it was not often explicitly invoked, neither did scientists go out of their way to refute it.

This changed in September 2001, when a special issue of the American popular science magazine “Scientific American” contained a number of contributions that were stingingly critical of the Drexler vision of nanotechnology. The most significant of these were by the Harvard nano-chemist George Whitesides, and the Rice University chemist Richard Smalley. Both argued that the Drexler vision of nanoscale machines was simply impossible on technical grounds. Smalley’s contribution was perhaps the most resonant; Smalley had won a Nobel prize for his discovery of a new form of nanoscale carbon, buckminsterfullerene[1], and so his contribution carried significant weight.

The dispute between Smalley and Drexler ran for a while longer, with a published exchange of letters, but its tone became increasingly vituperative. Nonetheless, the result has been that Drexler’s ideas have been largely discredited in both scientific and business circles. The attitude of many scientists is summed up by IBM’s Don Eigler, the first person to demonstrate the controlled manipulation of individual atoms: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worst. There may be scientists who feel otherwise, I just haven’t run into them.”[2]

Drexler has thus become a very polarising figure. My own view is that this is unfortunate. I believe that Drexler and his followers have greatly underestimated the technical obstacles in the way of his vision of shrunken mechanical engineering. Drexler does deserve credit, though, for pointing out that the remarkable nanoscale machinery of cell biology does provide an existence proof that a sophisticated nanotechnology is possible. However, I think he went on to draw the wrong conclusion from this. Drexler’s position is essentially that we will be able greatly to surpass the capabilities of biological nanotechnology by using rational engineering principles, rather than the vagaries of evolution, to design these machines, and by using stiff and strong materials such as diamond rather than the soft and floppy proteins and membranes of biology. I believe that this fails to recognise the fact that physics does look very different at the nanoscale, and that the design principles used in biology are optimised by evolution for this different environment[3]. From this, it follows that a radical nanotechnology might well be possible, but that it will look much more like biology than engineering.

Whether or in what form radical nanotechnology turns out to be possible, much of what is currently marketed as nanotechnology is very much more incremental in character. Products such as nano-enabled sunscreens, anti-stain fabric coatings, or “anti-ageing” creams certainly do not have anything to do with sophisticated nanoscale machines; instead they feature materials, coatings and structures which have some dimensions controlled on the nanoscale. These are useful and even potentially lucrative products, but they certainly do not represent any discontinuity with previous technology.

Between the mundane current applications of incremental nanotechnology, and the implausible speculations of the futurists, there are areas in which it is realistic to hope for substantial impacts from nanotechnology. Perhaps the biggest impacts will be seen in the three areas of energy, healthcare and information technology. It’s clear that there will be a huge emphasis in the coming years on finding new, more sustainable ways to obtain and transmit energy. Nanotechnology could make many contributions in areas like better batteries and fuel cells, but arguably its biggest impact could be in making solar energy economically viable on a large scale. The problem with conventional solar cells is not efficiency, but cost and manufacturing scalability. Plenty of solar energy lands on the earth, but the total area of conventional solar cells produced a year is orders of magnitude too small to make a significant dent in the world’s total energy budget. New types of solar cell using nanotechnology, and drawing inspiration from the natural process of photosynthesis, are in principle compatible with large area, low cost processing techniques like printing, and it’s not unrealistic to imagine this kind of solar cell being produced in huge plastic sheets at very low cost. In medicine, if the vision of cell-by-cell surgery using nanosubmarines isn’t going to happen, the prospect of the effectiveness of drugs being increased and their side-effects greatly reduced through the use of nanoscale delivery devices is much more realistic. Much more accurate and fast diagnosis of diseases is also in prospect.

One area in which nanotechnology can already be said to be present in our lives is information technology. The continuous miniaturisation of computing devices has already reached the nanoscale, and this is reflected in the growing impact of information technology on all aspects of the life of most people in the West. It’s interesting that the economic driving force for the continued development of information technologies is no longer computing in its traditional sense, but largely entertainment, through digital music players and digital imaging and video. The continual shrinking of current technologies will probably continue through the dynamic of Moore’s law for ten or fifteen years, allowing at least another hundred-fold increase in computing power. But at this point a number of limits, both physical and economic, are likely to provide serious impediments to further miniaturisation. New nanotechnologies may alter this picture in two ways. It is possible, but by no means certain, that entirely new computing concepts such as quantum computing or molecular electronics may lead to new types of computer of unprecedented power, permitting the further continuation or even acceleration of Moore’s law. On the other hand, developments in plastic electronics may make it possible to make computers that are not especially powerful, but which are very cheap or even disposable. It is this kind of development that is likely to facilitate the idea of “ubiquitous computing” or “the internet of things”, in which it is envisaged that every artefact and product incorporates a computer able to sense its surroundings and to communicate wirelessly with its neighbours. 
One can see that as a natural, even inevitable, development of technologies like the radio frequency identification devices (RFID) already used as “smart barcodes” by shops like Walmart, but it is clear also that some of the scenarios envisaged could lead to serious concerns about loss of privacy and, potentially, civil liberties.
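The “at least another hundred-fold increase” figure above is easy to check with back-of-envelope arithmetic, assuming the eighteen-month to two-year doubling period conventionally quoted for Moore’s law (the function below is simply an illustration of that arithmetic, not a claim about actual semiconductor roadmaps):

```python
def moores_law_factor(years, doubling_period_years):
    """Projected multiplicative increase in computing power under a
    Moore's-law-style exponential with the given doubling period."""
    return 2 ** (years / doubling_period_years)

# With an eighteen-month (1.5 year) doubling, ten years gives roughly a
# hundred-fold gain; fifteen years at a two-year doubling gives ~180x.
print(moores_law_factor(10, 1.5))
print(moores_law_factor(15, 2.0))
```

So the text’s ten-to-fifteen-year window does correspond to at least a hundred-fold gain for any doubling period between eighteen months and two years.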

[1] Nobel Prize for chemistry, 1996, shared with his Rice colleague Robert Curl and the British chemist Sir Harold Kroto, from Sussex University.
[2] Quoted by Chris Toumey in “Reading Feynman Into Nanotech: Does Nanotechnology Descend From Richard Feynman’s 1959 Talk?” (to be published).
[3] This is essentially the argument of my own book “Soft Machines: Nanotechnology and life”, R.A.L. Jones, OUP (2004).

To be continued…