You can see a video seminar on soft nanotechnology jointly given by me and my colleague Tony Ryan on the web here. This isn’t exactly new; it was recorded a couple of years ago, but I’ve only just come across the web version, which was produced as an experiment in e-learning under the aegis of the Worldwide Universities Network, an alliance of universities in Europe, the USA and China. You’ll need a fast internet connection and the Shockwave plug-in to view it.
Over the next fifty years, mankind is going to have to find large-scale primary energy sources that aren’t based on fossil fuels. Even if stocks of oil and gas don’t start to run out, the effects of man-made global warming are likely to become so pressing that the most die-hard climate-change sceptics will begin to change their tune. Meanwhile, the inhabitants of the rapidly developing countries of Asia will demand western-style standards of living, which in turn will demand western levels of energy use. Can nanotechnology help deliver the energy needed for all the world to have a decent standard of living on a sustainable basis?
Although wind and hydroelectric energy can make significant dents in total energy requirements, it seems that only two non-fossil primary energy sources really have the potential to replace fossil fuels completely. These are nuclear fission and photovoltaics (solar cells). Nuclear power has well known problems, though there have been recent signs of a change of heart by some environmentalists, notably James Lovelock, about this. Solar power is viable, in the sense that enough sunlight falls on the earth to meet all our needs, but the capital expense of current solar cell technology is too great for it to be economically viable, except in areas remote from the electricity grid.
To make a dent in the world’s total power needs we’re talking about bringing in many gigawatts (GW) of capacity per year (total electricity generating capacity in the UK was around 70 GW in 2002, in the USA it was 905 GW). Roughly speaking 65 million square meters (i.e. 65 square kilometers) of a moderately efficient photovoltaic gives you a GW of power. Here we see the problem of conventional silicon solar cells: a silicon wafer production plant with a 30 cm wafer process produces only 88,000 square meters a year; the cost is high and so is the energy intensity of the process, to the extent that it takes about 4 years to pay back the energy used in manufacture. We need to be able to make solar cells on a continuous basis, using a roll-to-roll process, more like a high volume printing press. A typical printing press takes just a few hours to process the same area of material as a silicon plant does in a year; at this rate we’re approaching the possibility of being able to make a GW’s worth of solar cells (roughly comparable to the output of a nuclear power station) from a year’s output from one production line. Several new technologies based on incremental nanotechnology promise to give us solar cells made by just this sort of cheap, large scale, low energy manufacturing process.
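The back-of-envelope numbers above can be checked in a few lines. This is a purely illustrative sketch using the figures quoted in the text; the "few hours" for a printing press is an assumed value, not a measured one.

```python
# Back-of-envelope check of the solar cell manufacturing figures quoted
# above. All numbers come from the text and are illustrative only.

GW = 1e9  # watts

# From the text: ~65 million m^2 of moderately efficient PV yields ~1 GW.
area_per_gw_m2 = 65e6
avg_power_density = GW / area_per_gw_m2  # time-averaged W per m^2

# A silicon plant with a 30 cm wafer process makes ~88,000 m^2 a year,
# so a gigawatt's worth of area takes many plant-years:
silicon_plant_m2_per_year = 88_000
plant_years_per_gw = area_per_gw_m2 / silicon_plant_m2_per_year

# A printing press processes that annual wafer area in "a few hours"
# (assumed here to be 3), so a roll-to-roll line running year-round
# approaches a GW's worth of cells per year.
press_hours_per_plant_year = 3  # assumed value
hours_per_year = 365 * 24
plant_years_matched_per_press_year = hours_per_year / press_hours_per_plant_year

print(f"average power density: {avg_power_density:.1f} W/m^2")
print(f"silicon plant-years per GW: {plant_years_per_gw:.0f}")
print(f"plant-years one press matches per year: {plant_years_matched_per_press_year:.0f}")
```

The ~15 W/m² figure is the time-averaged output (peak sunlight, efficiency and capacity factor folded together), which is why the area per gigawatt looks so large compared with naive peak-power estimates.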
The most famous, and probably best-developed, technology is the Graetzel cell, invented by Michael Graetzel of the EPFL, Lausanne. This relies on nanostructured titanium dioxide whose surfaces are coated by a dye; the nanoparticles are then embedded in a polymer electrolyte to make a thin film which can be coated onto a plastic sheet. This process is being commercialised by a number of companies, including Konarka and Sustainable Technologies International. Other technologies use nanostructured forms of different kinds of semiconductors; companies involved include Nanosys, Nanosolar, and Solaris. A third class of non-conventional photovoltaics uses semiconducting polymers of the kind used in polymer light emitting diode displays, sometimes in conjunction with fullerenes. These technologies still need improvements in efficiency and lifetime to be fully viable, but progress is rapid, and all offer the crucial benefit of low energy, large scale manufacturability.
It’s not at all clear which of these technologies will be the first to deliver the promised benefits. We shouldn’t forget that more conventional technologies, like thin film amorphous silicon, are also advancing fast – Unisolar has a commercial reel-to-reel process for producing this type of solar cell in quantity, with a projected annual production of 30 MW (i.e. 3% of a nuclear power station) coming soon. But it does seem as though this is one area where incremental nanotechnology could have a transformational and positive effect on the economy and the environment.
This discussion draws on two recent articles: “Manufacturing and commercialization issues in organic electronics”, by J.R. Sheats, Journal of Materials Research 19, 1974 (2004), and “Organic photovoltaics: technology and market”, by C.J. Brabec, Solar Energy Materials and Solar Cells 83, 273 (2004).
When cells need to wrap a molecule for safe delivery elsewhere, they use a lipid vesicle or liposome. The building block for a liposome is a lipid bilayer which has folded back on itself to create a closed spherical shell. Liposomes are relatively easy and cheap to make synthetically, and they already find applications in drug delivery systems and expensive cosmetics. But liposomes are delicate – their walls are as thin and insubstantial as a soap bubble, and a much more robust product is obtained if the lipids are replaced by block copolymers – these tough molecular bags are known as polymersomes.
Polymersomes were first demonstrated in 1999 by Dennis Discher and Daniel Hammer at the University of Pennsylvania, together with Frank Bates at the University of Minnesota. Here at the University of Sheffield, Giuseppe Battaglia, a PhD student supervised by my collaborator Tony Ryan in the Sheffield Polymer Centre, has been working on polymersomes as part of our research program in soft nanotechnology; last night he took this spectacular image of a polymersome using transmission electron microscopy on a frozen and stained sample.
The polymersome is made from diblock copolymers – molecules consisting of two polymer chains joined covalently at their ends – of butylene oxide and ethylene oxide. The hydrophobic, butylene oxide segment forms the tough, rubbery wall of the bag, while the ethylene oxide segments extend out into the surrounding water like a fuzzy coating. This hydrophilic coating stabilises the bilayer, but it will also have the effect of protecting the polymersome from any sticky molecules that would otherwise adsorb on the surface. This is important for any potential medical applications; this kind of protein-repelling layer is just what you need to make the polymersome bio-compatible. What is remarkable about this micrograph, obtained using the facilities of the cryo-Electron Microscopy Group in the department of Molecular Biology and Biotechnology at the University of Sheffield, is that this diffuse, fuzzy layer is visible extending beyond the sharply defined hydrophobic shell of the polymersome.
Now we can make these molecular delivery vehicles, we need to work out how to propel them to their targets and induce them to release their loads. We have some ideas about how to do this and I hope I’ll be able to report further progress here.
The environmental group ETC today released a report strongly opposed to what they refer to as “the atomic modification of food”. This is, of course, what we used to call “cooking”. ETC are now focusing their campaign against nanotechnology onto the agriculture and food industries, perhaps in the hope of replaying the controversy about genetic modification of food. What the report reveals, though, is the slow evolution of ETC’s muddled thinking on the subject.
There is some progress – ETC is now much more explicit about the possible benefits nanotechnology can bring. I very much welcome this statement, for example: “ETC acknowledges that nanotech could bring useful advances that might benefit the poor (the fields of sustainable energy, clean water and clean production appear promising…)”. They also emphasise that the debate must go further than simply considering questions of safety. But still, when in doubt about what to criticise, it is the toxicological issues that they consistently return to. And here some of their biggest scientific misconceptions get trotted out again. “The nanoscale moves matter out of the realm of conventional chemistry and physics into “quantum mechanics” imparting unique characteristics to traditional materials – and unique health and safety risks”, the report states early on, and it later refers to “serious toxicity issues of quantum property changes”. But, ironically, it’s by thinking about food and the products of agriculture that we should see that this view that nanoparticles are especially toxic as a class due to quantum effects just can’t be tenable – many or even most food ingredients are naturally nanostructured or contain nanoparticles, but quantum mechanics plays no role in their properties and certainly doesn’t make them especially toxic. If you don’t want to ingest nanoparticles, you should stop drinking milk.
The results of this confusion are apparent in their discussion of nanotechnology in the agrochemical industry. Here there’s a lot of emphasis on the reformulation of agrochemicals in nanoscaled dispersions and in encapsulated and controlled release systems. I think this is an accurate reading of what the industry is concentrating on. But why are the properties of the reformulated products different? ETC admits to some uncertainty – “ETC is not in a position to evaluate whether or not pesticides formulated as nanosized droplets… exhibit property changes akin to the “quantum effects” exhibited by engineered nanoparticles.” But nonetheless, they add, “the impetus for formulating pesticides on the nanoscale is the changed behaviour of the reformulated product”. Here they are missing the point in a big way.
It’s not that any given pesticide molecule behaves differently when it’s in a nanoscale emulsion than when it’s in a bulk solution; it’s simply that a higher proportion of the active molecules reach the destination where they do their job, and many fewer are wasted. Is this a good thing? If you are using this technology to weaponise a biological or chemical agent, it’s certainly frightening, and ETC are quite right to point out that this technology, like so many in the agrochemical industry, is a dual-use one. But from the point of view of environmental protection and the health of agricultural workers it is entirely positive – pesticides are toxic and potentially dangerous chemicals, and if the desired effect can be achieved with a smaller total pesticide burden that’s got to be a good thing. A scientist working on agrochemical formulation once told me “Currently we operate like a hospital that, rather than giving its patients medicines, sprays the hospital car park with antibiotics and hopes the visitors carry enough in on their feet to have some effect”. Finding ways to use powerful chemicals in more frugal and targeted ways seems a positive step forward to me. To elaborate on one example that ETC mention, Syngenta has been working on a long-lasting insecticide treatment for mosquito netting. This seems to me to be an appropriate, low cost and environmentally low impact contribution to a major problem of the developing world – malaria – and I would struggle to find anything about this sort of development one could sensibly oppose.
I’ve already discussed my views on ETC’s thesis that the replacement of commodities like cotton by nano-treated artificial fibres will greatly disadvantage the developing world below, and I’ll not add anything to that. I’ll simply point to the deep inconsistency of claiming on the one hand that nanotechnology poses a threat to farmers by taking markets away, and on the other hand being worried by the idea of new uses for crops as industrial feedstocks.
The section on nanotechnology in food manages to lose even more conviction. In the face of the difficulty of finding very much to get hold of, once again the theme of nanoparticle toxicity recurs. Food additives are being prepared in new, nanoscaled forms, and these haven’t been separately tested. They give as an example lycopene, a naturally occurring nutrient that BASF is bringing to market in a synthetic, nanodispersed form. They quote a patient explanation from BASF that once this stuff reaches the gut it behaves in just the same way as natural lycopene, lamely agree that “the explanation that all food is nano-scale by the time it reaches the bloodstream makes sense a-priori”, and then add the complete non-sequitur that we should worry that it hasn’t been tested in its nanoscale form. “What nano-scale substances are in the pipeline that have already been approved as food additives at larger scales but may now be formulated at the nano-scale with altered properties?” they ask. Let’s take this very slowly – food additives aren’t generally things that are developed on large scales – they’re molecules, and the usual state in which they arrive at the food manufacturer, and in which the consumer eats them, isn’t large lumps, but solution – i.e. about as nanodispersed as it is possible to get.
As in the first ETC report on nanotechnology, The Big Down, it isn’t that real things to worry about aren’t identified. The issues that surround “smart dust” and universal distributed intelligence are serious ones that need some real discussion, and it’s quite right for ETC to highlight this. There are very many very worrying aspects about the way the agri-food industry operates both in the developed and the developing worlds, and left unchecked I’m sure that developments in nanotechnology and nanomedicine could well end up being used in very negative ways. But as before, if ETC showed a bit more discrimination in what they criticised and a bit more understanding of the underlying science their contribution would be a lot more worthwhile.
I rather suspect that this report has been rushed out to hit the Thanksgiving slow news patch in the USA. Maybe it would have been better if ETC had sat on it a little longer, long enough to sort out their misunderstandings and get their message straight.
A very mixed, but very engaged audience, including journalists, artists and business types, attended last night’s discussion of nanotechnology at the Institute of Contemporary Arts. If they enjoyed it as much as I did, they will have got their money’s worth. The mix of panelists – including a science journalist, a science fiction writer and two scientists – worked very well, I thought. Paul McAuley, the science fiction writer, made sure we didn’t concentrate too much on the here and now, while the journalist, Tom Feilden, brought some perspective and some telling comparisons with previous technology debates. Philip Moriarty kicked the evening off, with a trenchant broadside against the Drexlerian vision. His perspective on this is rather different to mine, in that he’s from the “hard” end of nanotechnology and is very familiar with the practical problems of moving atoms around in a scanning tunnelling microscope, so his critique is based on what he sees as huge practical gaps in Drexler’s implementation path. I should mention that (like me) Philip has read Nanosystems very closely and very carefully. Drexler remained an omnipresent theme through the evening (the ICA had thought about bringing him across in person, but couldn’t afford the fee).
Some questions and themes from the discussion:
Proponents of Drexlerian nanotechnology (MNT) often cite the disruption to the economy that they say will happen when MNT makes the cost of manufacturing everyday products negligibly small. But we’re not far off this situation already; only a fraction of the value in the goods we buy in the shops is added by the manufacturing process (as opposed to design, marketing, retailing and so on). Relentless incremental improvements in manufacturing technology, together with the economic pressures of globalisation, are already causing an unprecedented and sustained drop in the price of consumer goods. There’s a rather poignant commentary on this process in today’s Times. It seems that burglary rates in Britain have recently dropped precipitately. Much as politicians would like to attribute this to their far-sighted crime policies, the police instead blame the fact that the traditional things that get stolen in break-ins – televisions, video recorders, computers and so on – are now so cheap to buy new, and are so quickly rendered obsolescent, that the markets for the stolen goods have all but collapsed.
An event at London’s Institute of Contemporary Arts next Tuesday, 23 November, promises an inter-disciplinary panel discussion to imagine the possibilities of nanotechnology for life and art. Nano: the science of small things will be chaired by James Wilsdon, from the think-tank Demos. James is one of the authors of the pamphlet See-Through Science that I mentioned below. On the panel are the science fiction writer Paul McAuley, Tom Feilden, the science and environment correspondent for BBC Radio 4’s Today programme, Philip Moriarty, an outstanding young nanoscientist from the University of Nottingham, and myself.
In my August Physics World article, The future of nanotechnology, I argued that fears of the loss of control of self-replicating nanobots – resulting in a plague of grey goo – were unrealistic, because it was unlikely that we would be able to “out-engineer evolution”. This provoked this interesting response from a reader, reproduced here with his permission:
I am a graduate student at MIT writing an article about the work of Angela Belcher, a professor here who is coaxing viruses to assemble transistors. I read your article in Physics World, and thought the way you stated the issue as a question of whether we can “out-engineer evolution” clarified current debates about the dangers of nanotechnology. In fact, the article I am writing frames the debate in your terms.
I was wondering whether Belcher’s work might change the debate somewhat. She actually combines evolution and engineering. She directs the evolution of peptides, starting with a peptide library, until she obtains peptides that cling to semiconductor materials or gold. Then she genetically engineers the viruses to express these peptides so that, when exposed to semiconductor precursors, they coat themselves with semiconductor material, forming a single crystal around a long, cylindrical capsid. She also has peptides expressed at the ends that attach to gold electrodes. The combination of the semiconducting wire and electrodes forms a transistor.
Now her viruses are clearly not dangerous. They require a host to replicate, and they can’t replicate once they’ve been exposed to the semiconducting materials or electrodes. They cannot lead to “gray goo.”
Does her method, however, suggest the possibility that we can produce things we could never engineer? Might this lead to molecular machines that could actually compete in the environment?
Any help you could provide in my thinking through this will be appreciated.
Here’s my reply:
You raise an interesting point. I’m familiar with Angela Belcher’s work, which is extremely elegant and important. I touch a little bit on this approach, in which evolution is used in a synthetic setting as a design tool, in my book “Soft Machines”. At the molecular level the use of some kind of evolutionary approach, whether executed at a physical level, as in Belcher’s work, or in computer simulation, seems to me to be unavoidable if we’re going to be able to exploit phenomena like self-assembly to the full.
But I still don’t think it fundamentally changes the terms of the debate. I think there are two separate issues:
1. is cell biology close to optimally engineered for the environment of the (warm, wet) nanoworld?
2. how can we best use design principles learnt from biology to make useful synthetic nanostructures and devices?
In this context, evolution is an immensely powerful design method, and it’s in keeping with the second point that we need to learn to use it. But even though using it might help us approach biological levels of optimality, one can still argue that it won’t help us surpass it.
Another important point revolves around the question of what is being optimised, or in Darwinian terms, what constitutes “fitness”. In our own nano-engineering, we have the ability to specify what is being optimised, that is, what constitutes “fitness”. In Belcher’s work, for example, the “fittest” species might be the one that binds most strongly to a particular semiconductor surface. This is, as a measure of fitness, quite different from the ability to compete with bacteria in the environment, and what is optimal for our own engineering purposes is unlikely to be optimal for the task of competing in the environment.
To which Kevin responded:
It does seem likely that engineering fitness would not lead to environmental fitness. Belcher’s viruses, for example, would seem to have a hard time in the real world, especially once coated in a semiconductor crystal. What if, however, someone made environmental fitness a goal? This does not seem unimaginable. Here at MIT engineers have designed sensors for the military that provide real-time data about the environment. Perhaps someday the military will want devices that can survive and multiply. (The military is always good for a scare. Where would science fiction be without thoughtless generals?)

This leads to the question of whether cells have an optimal design, one that can’t be beat. It may be that such military sensors will not be able to compete. Belcher’s early work had to do with abalone, which evolved a way to transform chalk into a protective lining of nacre. Its access to chalk made an adaptation possible that, presumably, gave it a competitive advantage. Might exposure to novel environments give organisms new tools for competing? I think now also of invasive species overwhelming existing ones. These examples, I realize, do not approach gray goo. As far as I know we’ve nothing to fear from abalone. Might they suggest, however, that novel cellular mechanisms or materials could be more efficient?
To which I replied:
It’s an important step forward to say that this isn’t going to happen by accident, but as you say, this does leave the possibility of someone doing it on purpose (careless generals, mad scientists…). I don’t think one can rule this out, but I think our experience says that for every environment we’ve found on earth (from what we think of as benign, e.g. temperate climates on the earth’s surface, to ones that we think of as very hostile, e.g. hot springs and undersea volcanic vents) there’s some organism that seems very well suited for it (and which doesn’t work so well elsewhere). Does this mean that such lifeforms are always absolutely optimal? A difficult question. But moving back towards practicality, we are so far from understanding how life works at the mechanistic level that would be needed to build a substitute from scratch, that this is a remote question. It’s certainly much less frightening than the very real possibility of danger from modifying existing life-forms, for example by increasing the virulence of pathogens.
When I was a small boy I could tell when Christmas was imminent; sometime around mid November the annuals published by my favourite comics appeared in the newsagents. There then followed six weeks of agonised waiting until the Beano annual appeared under the Christmas tree. Things are different now. My favourite comic characters now seem to have become leading politicians. I don’t have to wait until Christmas anymore, because I can just buy the annual myself, but sadly the annual I seem to be buying isn’t from the Beano but from the Economist.
The World in 2005 is written with the Economist’s usual mix of self-confidence and breezy optimism (I thought this prediction – “the Middle East will end the year looking either much better or far worse” – was an absolute classic of the genre). Nanotechnology gets a little box, predicting that 2005 will be the first year in which corporations outspend governments on nanotechnology, and that this will be the year in which we will see the arrival of many more nanotechnology-enabled products. The usual suspects are paraded – nano-strengthened tennis racquets, stain-resistant fabrics and self-cleaning window glass. Perhaps more interestingly, the article points to NEC’s announcement of a fuel-cell powered notebook PC, using carbon nanotubes in the electrodes. Other reports, however, suggest that this technology won’t be commercialised until 2007. Nonetheless, this does support the idea that energy technologies will be an important and potentially transformative application of near to medium-term nanotechnology.
What are the possible impacts of nanotechnology? The answer you get depends on which of nanotechnology’s warring camps you ask. On the negative side, the supporters of Drexler paint a chilling picture of economies dislocated, overwhelming military hegemony for the technology’s developers, and at worst global catastrophe. The nanotechnology mainstream in science and business doesn’t accept that Drexler’s vision is feasible; given this there’s a tendency in these circles to downplay the seriousness of nanotechnology’s potential negative consequences. In this, quite widespread, view, there may be some worries about the toxicity of nanoparticles to be investigated, but by and large we can expect business as usual. I think both views are wrong.
As I’ve made clear in many places, I doubt that Drexler’s vision of nanotechnology will come to pass. But when we come to discuss the impacts that nanotechnology might have, this matters less than one might think. I disagree with the analysis of Drexlerian groups like the Centre for Responsible Nanotechnology on many economic grounds as well as scientific ones, but there are a surprising number of places where I think that what they predict as impacts of Drexlerian nanotechnology will happen anyway. In fact, quite a few of these impacts are underway right now.
The debate about the social consequences of nanotechnology is becoming polarised in exactly the same way as the technical debate. This is unhealthy and unnecessary; many of the impacts of technology are independent of the precise form that the technology takes. If computing power, in 30 years, is much cheaper and much more ubiquitous than it is now, then the social consequences that follow from that don’t depend on whether those computers are powered by molecular electronics, quantum computing or Drexler’s rod logic.
Nanobusiness and nanoscientists need to raise themselves above their next grant proposal and funding round and start to think through the ways in which nanotechnology will be changing the world on a 20-30 year timescale. Prediction is very difficult, especially about the future (to quote Niels Bohr). But we do need to be thinking about bigger issues than how to regulate the disposal of nanotube enabled tennis rackets, important though it is to get those things right. The development of ubiquitous and ambient computing, the blurring of the line between human and machine: these are big issues that do deserve attention. And on the positive side, it’s going to be increasingly difficult to justify the huge outlays of taxpayers’ money by referring to the benefits of better cosmetics, important markets though those are. It’s not as though humanity isn’t facing some big challenges, and nanotechnology, if directed appropriately, could make some big positive impacts. Moving to a sustainable energy economy is one of our biggest challenges, and this is an area in which Richard Smalley has been rightly emphasising the transformational contributions incremental and evolutionary nanotechnologies can make.
Meanwhile, followers of Drexler are in danger of finding themselves in denial about the potential impact of ordinary, evolutionary nanotechnology, because of their devotion to their brand of nanotechnology’s one true path. As they continue to insist that the development of true nanotechnology is being thwarted for short-sighted political reasons, they may overlook the far-reaching changes that evolutionary nanotechnology will bring. It would be ironic if, in thirty years, the Drexlerites find themselves still waiting for a revolution that’s already happened.