Making molecules work

The operation of most living organisms, from bacteria like E. coli to multi-cellular organisms like ourselves, depends on molecular motors. These are protein-based machines which convert chemical energy to mechanical energy; the work our muscles do depends on many billions of these nanoscale machines all operating together, while individual motors propel bacteria or move materials around inside our cells. Molecular motors work in a very different way to the motors we are familiar with on the macroscopic scale, as has been revealed by some stunning experiments combining structural biology with single molecule biophysics. A good place to start getting a feel for how they work is with these movies of biological motors from Ronald Vale at UCSF.

The motors we use at the macroscopic scale to convert chemical energy to mechanical energy are heat engines, like petrol engines and steam turbines. The fuel is first burnt to convert chemical energy to heat energy, and this heat energy is then converted to useful work. Heat engines rely on the fact that you can maintain part of the engine at a higher temperature than the general environment. For example, in a petrol engine you burn the fuel in a cylinder, and then you extract work by allowing the hot gases to expand against a piston. If you made a nanoscale petrol engine, it wouldn’t work, because the heat would diffuse out of the cylinder walls, cooling the gas down before it had a chance to expand. This is because the time taken for a hot body to cool down to ambient temperature depends on the square of its size. At the nanoscale, you can’t maintain significant temperature gradients for any useful length of time, so nanoscale motors have to work at constant temperature. The way biological molecular motors do this is by exploiting molecular shape change – the power stroke is provided by a molecule changing shape in response to the binding and unbinding of the fuel molecules and their products.
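To put rough numbers on this scaling argument, here is a back-of-envelope calculation. The thermal diffusivity used is just a representative value for a gas, and the sizes are illustrative; the point is only the L² scaling.

```python
# Order-of-magnitude estimate: conductive cooling time scales as L^2 / alpha,
# where alpha is the thermal diffusivity (the value here is illustrative,
# roughly that of air).
alpha = 2e-5  # m^2/s

def cooling_time(size_m):
    """Characteristic time for heat to diffuse out of a body of size L."""
    return size_m ** 2 / alpha

t_macro = cooling_time(0.1)     # a 10 cm engine cylinder
t_nano = cooling_time(100e-9)   # a hypothetical 100 nm cylinder

print(f"macro: {t_macro:.0f} s, nano: {t_nano:.1e} s")
print(f"ratio: {t_macro / t_nano:.1e}")
```

Shrinking the cylinder by a factor of a million shortens the cooling time by a factor of a trillion – the hot gas thermalises with its surroundings in well under a nanosecond, long before it can push on anything.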

In our research at Sheffield we’ve been trying to learn from nature to make crude synthetic molecular motors that operate in the same way, by using molecular shape changes. The molecule we use is a polymer with weak acidic or basic groups along the backbone. For a polyacid, for example, in acidic conditions the molecule is uncharged and hydrophobic; it takes up a collapsed, compact shape. But when the acid is neutralised, the molecule ionises and becomes much more hydrophilic, substantially expanding in size. So, in principle we could use the expansion of a single molecule to do work.
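The sharpness of this collapse–expansion switch can be sketched with the Henderson-Hasselbalch relation for a single weak acid group; the pKa of 5.5 is an illustrative value, and a real polyacid shows a broader, cooperative transition than this simple picture suggests.

```python
# Fraction of acid groups ionised at a given pH, for a single weak acid
# group (Henderson-Hasselbalch). pKa = 5.5 is an illustrative value only;
# real polyacids have a broadened, cooperative transition.
def fraction_ionized(pH, pKa=5.5):
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (3.0, 5.5, 8.0):
    print(f"pH {pH}: {fraction_ionized(pH):.2%} of groups charged")
```

At low pH almost no groups carry charge and the chain collapses; a few pH units higher, nearly all of them do, and the chain swells – which is why an oscillation in acidity is enough to cycle the molecule between the two states.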

How can we clock the motor, so that rather than just expanding a single time, our molecule will repeatedly cycle between the expanded and the compact shape? In biology, this happens because the reaction of the fuel molecule is actually catalysed by the motor molecule. Our chemistry isn’t good enough to do this yet, so we use a much cruder approach.

We use a class of chemical reactions in which the chemical conditions spontaneously oscillate, despite the fact that the reactants are added completely steadily. The most famous of these reactions is the Belousov-Zhabotinsky reaction (see here for an explanation and a video of the experiment). With the help of Steve Scott from the University of Leeds, we’ve developed an oscillating reaction in which the acidity spontaneously oscillates over a range that is sufficient to trigger a shape change in our polyacid molecules.
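The idea that steadily fed reactants can produce oscillating concentrations is easy to demonstrate numerically. The sketch below integrates the Brusselator, a standard textbook toy model of an oscillating reaction; it is not the chemistry of our pH oscillator or of the Belousov-Zhabotinsky reaction itself, just the simplest system that shows the phenomenon.

```python
# Minimal sketch of a chemical oscillator: the Brusselator toy model.
# dx/dt = a - (b+1)x + x^2 y,  dy/dt = b x - x^2 y
# For b > 1 + a^2 the steady state (x=a, y=b/a) is unstable and the
# concentrations settle onto a limit cycle - they oscillate indefinitely
# even though the "feed" parameters a and b are constant.
a, b = 1.0, 3.0
x, y = 1.0, 1.0          # initial concentrations
dt, steps = 0.001, 40000  # simple explicit Euler integration

xs = []
for _ in range(steps):
    dx = a - (b + 1) * x + x * x * y
    dy = b * x - x * x * y
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)

# At late times x should still be swinging well above and below its
# fixed-point value x* = a = 1, rather than settling down.
late = xs[len(xs) // 2:]
print(f"late-time range of x: {min(late):.2f} .. {max(late):.2f}")
```

In our real system the oscillating variable is the pH, and the polyacid chains act as a mechanical readout of the oscillation.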

You can see a progress report on our efforts in a paper in Faraday Discussions 128; the abstract is here and you can download the full paper as a PDF here (this is available under the author rights policy of the Royal Society of Chemistry, who own the copyright). We’ve been able to demonstrate the molecular shape change in response to the oscillating chemical reaction at both macroscopic and single chain level in a self-assembled structure. What we’ve not yet been able to do is directly measure the force generated by a single molecule; in principle we should be able to do this with an atomic force microscope whose tip is connected to a single molecule, the other end of which is grafted to a firm surface, but this has proved rather difficult to do in practice. This is high on our list of priorities for the future, together with some ideas about how we can use this motor to do interesting things, like propel a nanoscale object or pump chemicals across a membrane.

This work is a joint effort of my group in the physics department and Tony Ryan’s group in chemistry. In physics, Mark Geoghegan, Andy Parnell, Jon Howse, Simon Martin and Lorena Ruiz-Perez have all been involved in various aspects of the project, while the chemistry has been driven by Colin Crook and Paul Topham.

Nanotechnology – with nature or against it?

I’ve been covering two big debates about nanotechnology here. On the one hand, there’s the question of the relative merits of Drexler’s essentially mechanical vision of nanotechnology and the more biologically inspired soft and biomimetic approaches. On the other, we see the efforts of campaigning groups like ETC to paint nanotechnology as the next step after genetic modification in humanity’s efforts to degrade and control the natural world. Although these debates at first sight look very different, they both revolve around issues of control and our proper relationship with the natural world.

These issues are identified and situated in a deep historical context in a very perceptive article by Bernadette Bensaude-Vincent, of the Philosophy Department in the Université Paris X. The article, Two Cultures of Nanotechnology?, is in HYLE-the International Journal for Philosophy of Chemistry, Vol. 10, No.2 (2004).

The whole article is well worth reading, but this extract gets to the heart of the matter:

“There is nothing new in the current artificialization of nature. Already in antiquity, there were two different and occasionally conflicting views of technology. On the one hand, the arts or technai were considered as working against nature, as contrary to nature. This meaning of the term para-physin provided the ground for repeated condemnations of mechanics and alchemy. On the other hand, the arts – especially agriculture, cooking, and medicine – were considered as assisting or even improving on nature by employing the dynameis or powers of nature. In the former perspective, the artisan, like Plato’s demiurgos, builds up a world by imposing his own rules and rationality on a passive matter. Technology is a matter of control. In the latter perspective the artisan is more like the ship-pilot at sea. He conducts or guides forces and processes supplied by nature, thus revealing the powers inherent in matter. Undoubtedly the mechanicist [i.e. Drexlerian] model of nanotechnology belongs to the demiurgic tradition. It is a technology fascinated by the control and the overtaking of nature.”

Bensaude-Vincent argues that soft and biomimetic approaches to nanotechnology fall more naturally into that second culture, conducting or guiding forces and processes supplied by nature, thus revealing the powers inherent in matter.

Nanobiotechnology and the communications industry

One of the UK’s two flagship nanotechnology centres, the Interdisciplinary Research Collaboration in Bionanotechnology at Oxford University, was having its mid-term review yesterday; I was there in my role as a member of the external steering committee. One thing I learnt that had previously passed me by was that one of the largest industrial collaborations they have is not, as one might think, with a pharmaceutical or biomedical company, but with the Japanese telecoms company NTT.

The linkup was announced last October; the $2 million project is concentrated in the area of the study of the function of membrane proteins. Why would they be interested in this? Membrane proteins provide the mechanisms by which living cells sense their surroundings and communicate with the outside world. As the leader of the NTT side of the project, Dr Keiichi Torimitsu, is quoted as saying, “We are especially interested in this field because of the possibility of future applications in the area of human-electronic interfaces.”

Intelligent yoghurt by 2025

Yesterday’s edition of the Observer contained the bizarre claim that we’ll soon be able to enhance the intelligence of bacteria by using molecular electronics. This came in an interview with Ian Pearson, who is always described as the resident futurologist of the British telecoms company BT. The claim is so odd that I wondered whether it was a misunderstanding on the part of the journalist, but it seems clear enough in this direct quote from Pearson:

“Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

“We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yoghurt before you eat it.”

This is the kind of thing that puts satirists out of business.

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary, subscription required for full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and a company founded by Shuler’s colleague Gregory Baxter, Hurel Corp.

Cancer and nanotechnology

There’s a good review in Nature Reviews: Cancer (with free access) about the ways in which nanotechnology could help the fight against cancer – Cancer Nanotechnology: Opportunities and Challenges. The article, by Ohio State University’s Mauro Ferrari, concentrates on two themes – how nanotechnologies can help diagnose and monitor cancer, and how it could lead to more effective targeting and delivery of anti-cancer agents to tumours.

The extent to which we urgently need better ways of wrapping up therapeutic molecules and getting them safely to their targets is highlighted by a striking figure that the article quotes – if you inject monoclonal antibodies and monitor how many of these molecules get to a target within an organ, the fraction is less than 0.01%. The rest are wasted, which is bad news if these molecules are expensive and difficult to make, and even worse news if, like many anti-cancer drugs, they are highly toxic. How can we make sure that every one of these drug molecules gets to where it is needed? One answer is to stuff them into a nanovector, a nanoscale particle that protects the enclosed drug molecules and delivers them to where they are needed. The simplest example of this approach uses a liposome – a bag made from a lipid bilayer. Liposome encapsulated anti-cancer drugs are now clinically used in the treatment of Kaposi’s sarcoma and breast and ovarian cancers. But lots of work remains to make nanovectors that are more robust, more resistant to non-specific protein adsorption, and above all which are specifically targeted to the cells they need to reach. Such specific targeting could be achieved by coating the nanovectors with antibodies with specific molecular recognition properties for groups on the surface of the cancer cells. The article cites one cautionary tale that illustrates that this is all more complicated than it looks – a recent simulation suggests that it is possible to get a situation in which targeting a drug precisely to a tumour can make the situation worse, by causing the tumour to break up. It may be necessary not just to target the drug carriers to a tumour, but to make sure that the spatial distribution of the drug through the tumour is right.

The future will probably see complex nanovectors engineered to perform multiple functions, protecting the drugs, getting them through all the barriers and pitfalls that lie between the point at which the drug is administered and the part of the body where it is needed, and releasing them at their target. The recently FDA-approved breast cancer drug, Abraxane, is an advance in the right direction; one can think of it as a nanovector that combines two functions. The core of the nanovector consists of a nanoparticulate form of the drug itself; dispersing it so finely dispenses with the need for toxic solvents. And bound to the drug nanoparticle are protein molecules which help the nanoparticles get across the cells that line blood vessels. It’s clear that as more and more functions are designed into nanovectors, there’s a huge amount of scope for increases in drug effectiveness, increases that could amount to orders of magnitude.

New book on Nanoscale Science and Technology

Nanoscale Science and Technology is a new, graduate level interdisciplinary textbook which has just been published by Wiley. It’s based on the Masters Course in Nanoscale Science and Technology that we run jointly between the Universities of Leeds and Sheffield.

Nanoscale Science and Technology Book Cover

The book covers most aspects of modern nanoscale science and technology. It ranges from “hard” nanotechnologies, like the semiconductor nanotechnologies that underlie applications like quantum dot lasers, and applications of nanomagnetism like giant magnetoresistance read-heads, via semiconducting polymers and molecular electronics, through to “soft” nanotechnologies such as self-assembling systems and bio-nanotechnology. I co-wrote a couple of chapters, but the heaviest work was done by my colleagues Mark Geoghegan, at Sheffield, and Ian Hamley and Rob Kelsall at Leeds, who, as editors, have done a great job of knitting together the contributions of a number of authors with different backgrounds to make a coherent whole.

Directly reading DNA

As the success of the Human Genome Project has made clear, DNA stores information at very high density – 15 atoms per bit of stored information. But, while biology has evolved some very sophisticated and compact ways of reading that information, we’re stuck with some clunky and expensive methods of sequencing DNA. Of course, driven by the Human Genome Project, the techniques have improved hugely, but it still costs about ten million dollars to sequence a mammal-sized genome (according to this recent press release from the National Institutes of Health). This needs to get much cheaper, not only to unlock the potential of personalised genomic medicine, but also if we are going to use DNA or analogous molecules as stores of information for more general purposes. One thousand dollars a genome is a sum that is often mentioned as a target.
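The arithmetic behind these figures is simple enough to spell out. With four possible bases, each base carries two bits, and the costs per base follow directly; the genome size below is the usual round number for a mammalian genome.

```python
# Back-of-envelope numbers behind the figures quoted above.
genome_bases = 3e9           # roughly the size of a mammalian genome
bits = 2 * genome_bases      # 4 possible bases = 2 bits per base
megabytes = bits / 8 / 1e6
print(f"information content: {megabytes:.0f} megabytes")

# Today's quoted cost versus the oft-mentioned $1000 target.
for cost_dollars in (10_000_000, 1_000):
    cents_per_base = cost_dollars / genome_bases * 100
    print(f"${cost_dollars:,} per genome -> {cents_per_base:.2e} cents per base")
```

A whole mammalian genome is well under a gigabyte of information; the problem is purely that reading it out currently costs a third of a cent per base, four orders of magnitude too much for the $1000 target.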

Clearly, it would be great if we could simply manipulate a single DNA molecule and directly read out its sequence. One of the most promising approaches to doing this envisages threading the molecule through a nanoscale hole and measuring some property which changes according to which base is blocking the pore. A recent experiment shows that it is possible, in principle, to do this. The experiment is reported by Ashkenasy, Sanchez-Quesada, and M. Reza Ghadiri, from Scripps, and Bayley from Oxford, in a recent edition of Angewandte Chemie (Angew Chemie Int Ed 44 p1401 (2005)) – the full paper can be downloaded as a PDF here. In this case the pore is formed by a natural pore forming protein in a lipid membrane, and what is measured is the ion current across the membrane.
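The readout principle can be caricatured in a few lines of code: if each base blocks the pore’s ion current by a characteristic amount, the sequence can in principle be recovered from the current trace. The current levels below are invented for illustration, and real traces are noisy and much harder to decode; this is just the logic of the idea, not a model of the experiment.

```python
# Toy illustration of the nanopore readout idea. Each base is assumed to
# block the open-pore ion current by a different (invented) fraction.
levels = {"A": 0.85, "C": 0.70, "G": 0.55, "T": 0.40}

def trace(sequence):
    """Ideal, noise-free current trace as the strand threads the pore."""
    return [levels[base] for base in sequence]

def call_bases(currents):
    """Recover the sequence by matching each level to the nearest base."""
    return "".join(
        min(levels, key=lambda b: abs(levels[b] - i)) for i in currents
    )

print(call_bases(trace("GATTACA")))  # -> GATTACA
```

The hard experimental problems are exactly the things this toy ignores: distinguishing four noisy, overlapping current levels, and controlling the speed at which the strand moves through the pore.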

This approach isn’t new; it originated with David Deamer at Santa Cruz and Dan Branton at Harvard (Branton’s website in particular is an excellent resource). A number of groups around the world are trying to do something similar; there are various variations possible, such as using an artificially engineered nanopore instead of a membrane protein, and using a different probe than the ion current. It feels to me like this ought to work, and this latest demonstration is an important step along the path.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of enclosing the organism from the environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union funded project PACE (for programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try and emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.


Nature has some very elegant and efficient solutions to the problems of making nanoscale structures, exploiting the self-assembling properties of information-containing molecules like proteins to great effect. A very promising approach to nanotechnology is to use what biology gives us to make useful nanoscale products and devices. I spent Monday visiting a nanotechnology company that is doing just that. Nanomagnetics is a Bristol based company (I should disclose an interest here, in that I’ve just been appointed to their Science Advisory Board) which exploits the remarkable self-assembled structure of the iron-storage protein ferritin to make nanoscale magnetic particles with uses in data storage, water purification and medicine.


The illustration shows the ferritin structure; 24 individual identical protein molecules come together to form a hollow spherical shell 12 nm in diameter. The purpose of the molecule is to store iron until it is needed; iron ions enter through the pores and are kept inside the shell – given the tendency of iron to form a highly insoluble oxide, if we didn’t have this mechanism for storing the stuff our insides would literally rust up. Nanomagnetics is able to use the hollow shell that ferritin provides as a nanoscale chemical reactor, producing nanoparticles of magnetic iron oxide or other metals of great uniformity in size, and with a protein coat that both stops them sticking together and makes them biocompatible.

One simple, but rather neat, application of these particles is in water purification, in a process called forward osmosis. If you filled a bag made of a nanoporous membrane with sugar syrup, and immersed the bag in dirty water, water would be pulled through the membrane by the osmotic pressure exerted by the concentrated sugar solution. Microbes and contaminating molecules wouldn’t be able to get through the membrane, if its pores are small enough, and you would end up with clean sugar solution. There’s a small company from Oregon, USA, HTI, which has commercialised just such a product. Essentially, it produces something like a sports drink from dirty or brackish water, and as such it’s started to prove its value for the military and in disaster relief situations. But what happens if you want to produce not sugar solution, but clean water? If you replace the sugar by magnetic nanoparticles then you can sweep the particles away with a magnetic field and then use them again to produce another batch of water, producing clean water from simple equipment with only a small cost in energy.
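The driving force here is surprisingly large, as a rough estimate using the van ’t Hoff relation shows. This is the ideal, dilute-solution approximation (real concentrated syrups deviate from it), but it gives the right order of magnitude.

```python
# Rough estimate of the osmotic driving force in forward osmosis,
# using the van 't Hoff relation Pi = c * R * T (ideal-solution
# approximation; concentrated solutions deviate from this).
R = 8.314      # gas constant, J/(mol K)
T = 298.0      # room temperature, K
c = 1000.0     # solute concentration in mol/m^3, i.e. a 1 M solution

pressure_pa = c * R * T
print(f"osmotic pressure of a 1 M solution: {pressure_pa / 1e5:.0f} bar")
```

A 1 M sugar solution pulls water across the membrane with a pressure of around 25 bar – comparable to, or larger than, the pumping pressures used in reverse osmosis plants, but obtained with no moving parts at all.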

The illustration of ferritin is taken from the Protein Data Bank’s Molecule of the Month feature. The drawing is by David S. Goodsell, based on the structure determined by Lawson et al., Nature 349, 541 (1991).