Nanotechnology – with nature or against it?

I’ve been covering two big debates about nanotechnology here. On the one hand, there’s the question of the relative merits of Drexler’s essentially mechanical vision of nanotechnology and the more biologically inspired soft and biomimetic approaches. On the other, we see the efforts of campaigning groups like ETC to paint nanotechnology as the next step after genetic modification in humanity’s efforts to degrade and control the natural world. Although these debates at first sight look very different, they both revolve around issues of control and our proper relationship with the natural world.

These issues are identified and situated in a deep historical context in a very perceptive article by Bernadette Bensaude-Vincent, of the Philosophy Department in the Université Paris X. The article, Two Cultures of Nanotechnology?, is in HYLE-the International Journal for Philosophy of Chemistry, Vol. 10, No.2 (2004).

The whole article is well worth reading, but this extract gets to the heart of the matter:

“There is nothing new in the current artificialization of nature. Already in antiquity, there were two different and occasionally conflicting views of technology. On the one hand, the arts or technai were considered as working against nature, as contrary to nature. This meaning of the term para-physin provided the ground for repeated condemnations of mechanics and alchemy. On the other hand, the arts – especially agriculture, cooking, and medicine – were considered as assisting or even improving on nature by employing the dynameis or powers of nature. In the former perspective, the artisan, like Plato’s demiurgos, builds up a world by imposing his own rules and rationality on a passive matter. Technology is a matter of control. In the latter perspective the artisan is more like the ship-pilot at sea. He conducts or guides forces and processes supplied by nature, thus revealing the powers inherent in matter. Undoubtedly the mechanicist [i.e. Drexlerian] model of nanotechnology belongs to the demiurgic tradition. It is a technology fascinated by the control and the overtaking of nature.”

Bensaude-Vincent argues that soft and biomimetic approaches to nanotechnology fall more naturally into that second culture, conducting or guiding forces and processes supplied by nature, thus revealing the powers inherent in matter.

Nanobiotechnology and the communications industry

One of the UK’s two flagship nanotechnology centres, the Interdisciplinary Research Collaboration in Bionanotechnology at Oxford University, was having its mid-term review yesterday; I was there in my role as a member of the external steering committee. One thing I learnt that had previously passed me by was that one of the largest industrial collaborations they have is not, as one might think, with a pharmaceutical or biomedical company, but with the Japanese telecoms company NTT.

The linkup was announced last October; the $2 million project concentrates on the study of membrane protein function. Why would a telecoms company be interested in this? Membrane proteins provide the mechanisms by which living cells sense their surroundings and communicate with the outside world. As the leader of the NTT side of the project, Dr Keiichi Torimitsu, is quoted as saying, “We are especially interested in this field because of the possibility of future applications in the area of human – electronic interfaces.”

Intelligent yoghurt by 2025

Yesterday’s edition of the Observer contained the bizarre claim that we’ll soon be able to enhance the intelligence of bacteria by using molecular electronics. This came in an interview with Ian Pearson, who is always described as the resident futurologist of the British telecoms company BT. The claim is so odd that I wondered whether it was a misunderstanding on the part of the journalist, but it seems clear enough in this direct quote from Pearson:

“Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

“We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yogurt before you eat it.”

This is the kind of thing that puts satirists out of business.

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer-term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought-out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary; subscription required for full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and a company founded by Shuler’s colleague Gregory Baxter, Hurel Corp.

Cancer and nanotechnology

There’s a good review, with free access, in Nature Reviews Cancer about the ways in which nanotechnology could help the fight against cancer – Cancer Nanotechnology: Opportunities and Challenges. The article, by Ohio State University’s Mauro Ferrari, concentrates on two themes – how nanotechnologies can help diagnose and monitor cancer, and how they could lead to more effective targeting and delivery of anti-cancer agents to tumours.

The extent to which we urgently need better ways of wrapping up therapeutic molecules and getting them safely to their targets is highlighted by a striking figure that the article quotes – if you inject monoclonal antibodies and monitor how many of these molecules get to a target within an organ, the fraction is less than 0.01%. The rest are wasted, which is bad news if these molecules are expensive and difficult to make, and even worse news if, like many anti-cancer drugs, they are highly toxic. How can we make sure that every one of these drug molecules gets to where it is needed? One answer is to pack them into a nanovector, a nanoscale particle that protects the enclosed drug molecules and delivers them to where they are needed. The simplest example of this approach uses a liposome – a bag made from a lipid bilayer. Liposome-encapsulated anti-cancer drugs are now in clinical use in the treatment of Kaposi’s sarcoma and breast and ovarian cancers. But lots of work remains to make nanovectors that are more robust, more resistant to non-specific protein adsorption, and, above all, specifically targeted to the cells they need to reach. Such specific targeting could be achieved by coating the nanovectors with antibodies that have specific molecular recognition properties for groups on the surface of the cancer cells. The article cites one cautionary tale that illustrates that this is all more complicated than it looks – a recent simulation suggests that targeting a drug precisely to a tumour can sometimes make the situation worse, by causing the tumour to break up. It may be necessary not just to target the drug carriers to a tumour, but to make sure that the spatial distribution of the drug through the tumour is right.

The future will probably see complex nanovectors engineered to perform multiple functions: protecting the drugs, getting them through all the barriers and pitfalls that lie between the point at which the drug is administered and the part of the body where it is needed, and releasing them at their target. The recently FDA-approved breast cancer drug Abraxane is a step in the right direction; one can think of it as a nanovector that combines two functions. The core of the nanovector is a nanoparticulate form of the drug itself; dispersing the drug so finely dispenses with the need for toxic solvents. And bound to the drug nanoparticle are protein molecules which help the nanoparticles get across the cells that line blood vessels. It’s clear that as more and more functions are designed into nanovectors, there’s huge scope for increases in drug effectiveness – increases that could amount to orders of magnitude.

New book on Nanoscale Science and Technology

Nanoscale Science and Technology is a new graduate-level interdisciplinary textbook which has just been published by Wiley. It’s based on the Masters course in Nanoscale Science and Technology that we run jointly between the Universities of Leeds and Sheffield.

Nanoscale Science and Technology Book Cover

The book covers most aspects of modern nanoscale science and technology. It ranges from “hard” nanotechnologies, like the semiconductor nanotechnologies that underlie applications such as quantum dot lasers, and applications of nanomagnetism like giant magnetoresistance read-heads, via semiconducting polymers and molecular electronics, through to “soft” nanotechnologies such as self-assembling systems and bio-nanotechnology. I co-wrote a couple of chapters, but the heaviest work was done by my colleagues Mark Geoghegan, at Sheffield, and Ian Hamley and Rob Kelsall at Leeds, who, as editors, have done a great job of knitting together the contributions of a number of authors with different backgrounds to make a coherent whole.

Directly reading DNA

As the success of the Human Genome Project has made clear, DNA stores information at very high density – 15 atoms per bit of stored information. But, while biology has evolved some very sophisticated and compact ways of reading that information, we’re stuck with some clunky and expensive methods of sequencing DNA. Of course, driven by the Human Genome Project, the techniques have improved hugely, but it still costs about ten million dollars to sequence a mammal-sized genome (according to this recent press release from the National Institutes of Health). This needs to get much cheaper, not only to unlock the potential of personalised genomic medicine, but also if we are going to use DNA or analogous molecules as stores of information for more general purposes. One thousand dollars a genome is a sum that is often mentioned as a target.
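A quick back-of-the-envelope sketch shows where figures like these come from. The assumptions here (two bits per base pair, roughly 30 atoms per base pair, a 3-billion-base-pair genome) are mine, not taken from the NIH release:

```python
# Rough arithmetic behind the numbers quoted above.
# Assumptions (mine): a base pair stores 2 bits (four possible bases),
# contains roughly 30 atoms, and a mammal-sized genome is about
# 3 billion base pairs.

BITS_PER_BASE_PAIR = 2        # log2 of 4 possible bases
ATOMS_PER_BASE_PAIR = 30      # rough atom count for a nucleotide pair
GENOME_BASE_PAIRS = 3.0e9     # human-scale genome
COST_PER_GENOME_USD = 10.0e6  # the ten-million-dollar figure quoted above

atoms_per_bit = ATOMS_PER_BASE_PAIR / BITS_PER_BASE_PAIR
cost_per_base = COST_PER_GENOME_USD / GENOME_BASE_PAIRS

print(f"storage density: {atoms_per_bit:.0f} atoms per bit")
print(f"sequencing cost: {cost_per_base * 100:.2f} cents per base")
```

At the thousand-dollar-a-genome target, the same arithmetic gives a cost four orders of magnitude lower per base.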

Clearly, it would be great if we could simply manipulate a single DNA molecule and directly read out its sequence. One of the most promising approaches to doing this envisages threading the molecule through a nanoscale hole and measuring some property which changes according to which base is blocking the pore. A recent experiment shows that it is possible, in principle, to do this. The experiment is reported by Ashkenasy, Sanchez-Quesada and Ghadiri, from Scripps, and Bayley, from Oxford, in a recent edition of Angewandte Chemie (Angew. Chem. Int. Ed. 44, p. 1401 (2005)) – the full paper can be downloaded as a PDF here. In this case the pore is formed by a natural pore-forming protein in a lipid membrane, and what is measured is the ion current across the membrane.
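To make the idea concrete, here is a toy sketch of nanopore readout. Everything in it – the current levels, the noise, the nearest-level base-calling – is invented for illustration; these are not the values or methods of the experiment itself:

```python
import random

# Toy illustration of the nanopore idea: each base blocks the pore to a
# different degree, giving a characteristic ionic current, and the sequence
# is recovered by matching each reading to the nearest known level.
# The levels and noise below are hypothetical, chosen only for this sketch.

LEVELS = {"A": 60.0, "C": 52.0, "G": 45.0, "T": 38.0}  # picoamps, invented

def measure(sequence, noise=1.0, seed=0):
    """Simulate a noisy current reading as each base transits the pore."""
    rng = random.Random(seed)
    return [LEVELS[base] + rng.gauss(0, noise) for base in sequence]

def call_bases(trace):
    """Assign each current reading to the base with the nearest level."""
    return "".join(
        min(LEVELS, key=lambda b: abs(LEVELS[b] - reading)) for reading in trace
    )

trace = measure("GATTACA")
print(call_bases(trace))
```

The real experimental difficulty, of course, is that the current differences between bases are tiny and the molecule moves through the pore very quickly; the sketch above assumes both problems away.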

This approach isn’t new; it originated with David Deamer at Santa Cruz and Dan Branton at Harvard (Branton’s website in particular is an excellent resource). A number of groups around the world are trying to do something similar, with a number of possible variations: using an artificially engineered nanopore instead of a membrane protein, for instance, or measuring something other than the ion current. It feels to me like this ought to work, and this latest demonstration is an important step along the path.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of separating the organism from its environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union funded project, PACE (programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try to emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.


Nature has some very elegant and efficient solutions to the problems of making nanoscale structures, exploiting the self-assembling properties of information-containing molecules like proteins to great effect. A very promising approach to nanotechnology is to use what biology gives us to make useful nanoscale products and devices. I spent Monday visiting a nanotechnology company that is doing just that. Nanomagnetics is a Bristol-based company (I should disclose an interest here, in that I’ve just been appointed to their Science Advisory Board) which exploits the remarkable self-assembled structure of the iron-storage protein ferritin to make nanoscale magnetic particles with uses in data storage, water purification and medicine.


The illustration shows the ferritin structure; 24 individual identical protein molecules come together to form a hollow spherical shell 12 nm in diameter. The purpose of the molecule is to store iron until it is needed; iron ions enter through the pores and are kept inside the shell – given the tendency of iron to form a highly insoluble oxide, if we didn’t have this mechanism for storing the stuff our insides would literally rust up. Nanomagnetics is able to use the hollow shell that ferritin provides as a nanoscale chemical reactor, producing nanoparticles of magnetic iron oxide or other metals of great uniformity in size, and with a protein coat that both stops them sticking together and makes them biocompatible.

One simple, but rather neat, application of these particles is in water purification, in a process called forward osmosis. If you filled a bag made of a nanoporous membrane with sugar syrup and immersed the bag in dirty water, water would be pulled through the membrane by the osmotic pressure exerted by the concentrated sugar solution. Microbes and contaminating molecules wouldn’t be able to get through the membrane, if its pores are small enough, and you would end up with clean sugar solution. A small company from Oregon, USA, HTI, has commercialised just such a product. Essentially, it produces something like a sports drink from dirty or brackish water, and as such it has started to prove its value for the military and in disaster relief situations. But what happens if you want to produce not sugar solution, but clean water? If you replace the sugar by magnetic nanoparticles, then you can sweep the particles away with a magnetic field and use them again to produce another batch, yielding clean water from simple equipment at only a small cost in energy.
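To get a feel for the driving force involved, the ideal van ’t Hoff relation (osmotic pressure = concentration × R × T) gives an order-of-magnitude estimate. The 1 M concentration below is illustrative, and real concentrated syrups deviate from this dilute-solution law, so this is a sketch rather than a design calculation:

```python
# Order-of-magnitude estimate of the osmotic driving force in forward
# osmosis, using the ideal van 't Hoff relation: pi = c * R * T.
# Real concentrated solutions are non-ideal; this is only a rough guide.

R = 8.314   # gas constant, J/(mol K)
T = 298.0   # room temperature, K

def osmotic_pressure_pa(molar_concentration):
    """Ideal osmotic pressure in pascals for a concentration in mol/L."""
    c = molar_concentration * 1000.0  # mol/L -> mol/m^3
    return c * R * T

pi = osmotic_pressure_pa(1.0)  # a 1 M sugar solution, chosen for illustration
print(f"{pi / 1e5:.0f} bar")   # roughly 25 bar of driving pressure
```

Twenty-odd bar is far more than the hydrostatic pressures available in simple field equipment, which is why osmosis does the pumping for free.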

The illustration of ferritin is taken from the Protein Database’s Molecule of the Month feature. The drawing is by David S. Goodsell, based on the structure determined by Lawson et al., Nature 349 pp. 541 (1991).

Molecular mail-bags

When cells need to wrap a molecule for safe delivery elsewhere, they use a lipid vesicle, or liposome. The building block for a liposome is a lipid bilayer which has folded back on itself to create a closed spherical shell. Liposomes are relatively easy and cheap to make synthetically, and they already find applications in drug delivery systems and expensive cosmetics. But liposomes are delicate – their walls are as thin and insubstantial as a soap bubble’s – and a much more robust product is obtained if the lipids are replaced by block copolymers. These tough molecular bags are known as polymersomes.

Polymersomes were first demonstrated in 1999 by Dennis Discher and Daniel Hammer, together with Frank Bates, at the University of Minnesota. Here at the University of Sheffield, Giuseppe Battaglia, a PhD student supervised by my collaborator Tony Ryan in the Sheffield Polymer Centre, has been working on polymersomes as part of our research program in soft nanotechnology; last night he took this spectacular image of a polymersome using transmission electron microscopy on a frozen and stained sample.

Cryo-TEM image of a polymersome

The polymersome is made from diblock copolymers – molecules consisting of two polymer chains joined covalently at their ends – of butylene oxide and ethylene oxide. The hydrophobic butylene oxide segment forms the tough, rubbery wall of the bag, while the ethylene oxide segments extend out into the surrounding water like a fuzzy coating. This hydrophilic coating stabilises the bilayer, but it will also have the effect of protecting the polymersome from any sticky molecules that would otherwise adsorb on the surface. This is important for any potential medical applications; this kind of protein-repelling layer is just what you need to make the polymersome biocompatible. What is remarkable about this micrograph, obtained using the facilities of the cryo-Electron Microscopy Group in the Department of Molecular Biology and Biotechnology at the University of Sheffield, is that this diffuse, fuzzy layer is visible extending beyond the sharply defined hydrophobic shell of the polymersome.

Now that we can make these molecular delivery vehicles, we need to work out how to propel them to their targets and induce them to release their loads. We have some ideas about how to do this, and I hope I’ll be able to report further progress here.