Intelligent yoghurt by 2025

Yesterday’s edition of the Observer contained the bizarre claim that we’ll soon be able to enhance the intelligence of bacteria by using molecular electronics. This came in an interview with Ian Pearson, who is always described as the resident futurologist of the British telecoms company BT. The claim is so odd that I wondered whether it was a misunderstanding on the part of the journalist, but it seems clear enough in this direct quote from Pearson:

“Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

“We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yogurt before you eat it.”

This is the kind of thing that puts satirists out of business.

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer-term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought-out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary, subscription required for the full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and a company founded by Shuler’s colleague Gregory Baxter, Hurel Corp.

Cancer and nanotechnology

There’s a good review in Nature Reviews Cancer (with free access) about the ways in which nanotechnology could help the fight against cancer – Cancer Nanotechnology: Opportunities and Challenges. The article, by Ohio State University’s Mauro Ferrari, concentrates on two themes – how nanotechnologies can help diagnose and monitor cancer, and how they could lead to more effective targeting and delivery of anti-cancer agents to tumours.

The extent to which we urgently need better ways of wrapping up therapeutic molecules and getting them safely to their targets is highlighted by a striking figure that the article quotes – if you inject monoclonal antibodies and monitor how many of these molecules reach a target within an organ, the fraction is less than 0.01%. The rest are wasted, which is bad news if these molecules are expensive and difficult to make, and even worse news if, like many anti-cancer drugs, they are highly toxic. How can we make sure that every one of these drug molecules gets to where it is needed? One answer is to stuff them into a nanovector, a nanoscale particle that protects the enclosed drug molecules and delivers them to where they are needed. The simplest example of this approach uses a liposome – a bag made from a lipid bilayer. Liposome-encapsulated anti-cancer drugs are now used clinically in the treatment of Kaposi’s sarcoma and of breast and ovarian cancers. But lots of work remains to make nanovectors that are more robust, more resistant to non-specific protein adsorption, and, above all, specifically targeted to the cells they need to reach. Such specific targeting could be achieved by coating the nanovectors with antibodies that have specific molecular recognition properties for groups on the surface of the cancer cells. The article cites one cautionary tale illustrating that this is all more complicated than it looks – a recent simulation suggests that targeting a drug precisely to a tumour can sometimes make the situation worse, by causing the tumour to break up. It may be necessary not just to target the drug carriers to a tumour, but to make sure that the spatial distribution of the drug through the tumour is right.
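To get a feel for what that delivery figure implies, here’s a back-of-envelope sketch in Python. The delivered fraction is the upper bound quoted above; the target dose is an invented, purely illustrative number.

```python
# Rough arithmetic behind the "the rest are wasted" point: if less than
# 0.01% of injected antibody molecules reach the target, the injected dose
# must exceed the useful dose by a factor of at least 10,000.
# The target dose below is hypothetical, chosen only for illustration.

delivered_fraction = 1e-4           # upper bound quoted in the review (<0.01%)
molecules_needed_at_target = 1e12   # hypothetical therapeutic dose at the tumour

molecules_to_inject = molecules_needed_at_target / delivered_fraction
waste_factor = 1 / delivered_fraction

print(f"Molecules to inject: {molecules_to_inject:.1e}")
print(f"Dose multiplier (waste factor): {waste_factor:.0f}x")
```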

The future will probably see complex nanovectors engineered to perform multiple functions, protecting the drugs, getting them through all the barriers and pitfalls that lie between the point at which the drug is administered and the part of the body where it is needed, and releasing them at their target. The recently FDA-approved breast cancer drug Abraxane is a step in the right direction; one can think of it as a nanovector that combines two functions. The core of the nanovector consists of a nanoparticulate form of the drug itself; dispersing it so finely dispenses with the need for toxic solvents. And bound to the drug nanoparticle are protein molecules which help the nanoparticles get across the cells that line blood vessels. It’s clear that as more and more functions are designed into nanovectors, there’s a huge amount of scope for increases in drug effectiveness – increases that could amount to orders of magnitude.

New book on Nanoscale Science and Technology

Nanoscale Science and Technology is a new, graduate level interdisciplinary textbook which has just been published by Wiley. It’s based on the Masters Course in Nanoscale Science and Technology that we run jointly between the Universities of Leeds and Sheffield.

Nanoscale Science and Technology Book Cover

The book covers most aspects of modern nanoscale science and technology. It ranges from “hard” nanotechnologies, like the semiconductor nanotechnologies that underlie applications such as quantum dot lasers, and applications of nanomagnetism like giant magnetoresistance read-heads, via semiconducting polymers and molecular electronics, through to “soft” nanotechnologies such as self-assembling systems and bio-nanotechnology. I co-wrote a couple of chapters, but the heaviest work was done by my colleagues Mark Geoghegan, at Sheffield, and Ian Hamley and Rob Kelsall, at Leeds, who, as editors, have done a great job of knitting together the contributions of a number of authors with different backgrounds to make a coherent whole.

Directly reading DNA

As the success of the Human Genome Project has made clear, DNA stores information at very high density – 15 atoms per bit of stored information. But, while biology has evolved some very sophisticated and compact ways of reading that information, we’re stuck with some clunky and expensive methods of sequencing DNA. Of course, driven by the Human Genome Project, the techniques have improved hugely, but it still costs about ten million dollars to sequence a mammal-sized genome (according to this recent press release from the National Institutes of Health). This needs to get much cheaper, not only to unlock the potential of personalised genomic medicine, but also if we are going to use DNA or analogous molecules as stores of information for more general purposes. One thousand dollars a genome is a sum that is often mentioned as a target.
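To see what those numbers mean in practice, here’s a rough back-of-envelope calculation in Python. The genome size is the approximate human figure of three billion base pairs, and the costs are the ones quoted above; everything is order-of-magnitude only.

```python
# Back-of-envelope figures for the sequencing-cost argument. A mammal-sized
# genome of ~3 billion base pairs carries 2 bits per base (4 possible bases),
# and the costs are those quoted in the post ($10M today, $1000 as the target).

genome_size_bp = 3.0e9     # approximate mammal-sized genome, in base pairs
bits_per_base = 2          # four bases -> 2 bits of information each

info_content_bytes = genome_size_bp * bits_per_base / 8
cost_now = 10_000_000      # dollars per genome (NIH figure quoted above)
cost_target = 1_000        # the "$1000 genome" target

usd_per_Mb_now = cost_now / genome_size_bp * 1e6
usd_per_Mb_target = cost_target / genome_size_bp * 1e6

print(f"Information content: ~{info_content_bytes / 1e6:.0f} MB")
print(f"Sequencing cost today:  ~${usd_per_Mb_now:,.0f} per megabase")
print(f"Sequencing cost target: ~${usd_per_Mb_target:.2f} per megabase")
```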

Clearly, it would be great if we could simply manipulate a single DNA molecule and directly read out its sequence. One of the most promising approaches to doing this envisages threading the molecule through a nanoscale hole and measuring some property that changes according to which base is blocking the pore. A recent experiment shows that it is possible, in principle, to do this. The experiment is reported by Ashkenasy, Sanchez-Quesada and Ghadiri, from Scripps, and Bayley, from Oxford, in a recent issue of Angewandte Chemie (Angewandte Chemie International Edition 44, p. 1401 (2005)) – the full paper can be downloaded as a PDF here. In this case the pore is formed by a natural pore-forming protein in a lipid membrane, and what is measured is the ion current across the membrane.
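For readers who like to see the idea in concrete terms, here’s a deliberately cartoonish sketch of the read-out principle – not a model of the actual experiment. It assumes each base blocks the pore to a different extent, so the ion current takes a different noisy level for each base, and the sequence is recovered by classifying the levels. All the numbers are invented for illustration.

```python
# Toy cartoon of nanopore read-out: one (noisy) blockade-current reading per
# base, followed by nearest-level base calling. Current values and noise are
# invented for illustration, not experimental data.

import random

LEVELS = {"A": 0.80, "C": 0.60, "G": 0.45, "T": 0.30}  # hypothetical mean currents

def simulate_trace(sequence, noise=0.03):
    """One noisy current reading per base threaded through the pore."""
    return [LEVELS[base] + random.gauss(0, noise) for base in sequence]

def call_bases(trace):
    """Assign each reading to the base with the nearest mean current."""
    return "".join(min(LEVELS, key=lambda b: abs(LEVELS[b] - reading))
                   for reading in trace)

true_sequence = "GATTACAGGCT"
trace = simulate_trace(true_sequence)
print("true:  ", true_sequence)
print("called:", call_bases(trace))
```

In the real experiment the challenge is, of course, that the current signatures of the four bases overlap and the molecule moves through the pore very quickly; the toy classifier above sidesteps both problems by construction.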

This approach isn’t new; it originated with David Deamer at Santa Cruz and Dan Branton at Harvard (Branton’s website in particular is an excellent resource). A number of groups around the world are trying to do something similar, and several variations are possible, such as using an artificially engineered nanopore instead of a membrane protein, or using a probe other than the ion current. It feels to me like this ought to work, and this latest demonstration is an important step along the path.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, also run by Rasmussen, with links to related research). Minimal life probably needs some way of enclosing the organism and separating it from its environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union-funded project, PACE (programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try to emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first-round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Martyn Amos, a computer scientist based at Exeter University.

Nanomagnetics

Nature has some very elegant and efficient solutions to the problems of making nanoscale structures, exploiting the self-assembling properties of information-containing molecules like proteins to great effect. A very promising approach to nanotechnology is to use what biology gives us to make useful nanoscale products and devices. I spent Monday visiting a nanotechnology company that is doing just that. Nanomagnetics is a Bristol-based company (I should disclose an interest here, in that I’ve just been appointed to their Science Advisory Board) which exploits the remarkable self-assembled structure of the iron-storage protein ferritin to make nanoscale magnetic particles with uses in data storage, water purification and medicine.

Ferritin

The illustration shows the ferritin structure: 24 identical protein molecules come together to form a hollow spherical shell 12 nm in diameter. The purpose of the molecule is to store iron until it is needed; iron ions enter through the pores and are kept inside the shell – given the tendency of iron to form a highly insoluble oxide, if we didn’t have this mechanism for storing the stuff our insides would literally rust up. Nanomagnetics is able to use the hollow shell that ferritin provides as a nanoscale chemical reactor, producing nanoparticles of magnetic iron oxide or other metals that are highly uniform in size, with a protein coat that both stops them sticking together and makes them biocompatible.

One simple, but rather neat, application of these particles is in water purification, in a process called forward osmosis. If you filled a bag made of a nanoporous membrane with sugar syrup and immersed the bag in dirty water, water would be pulled through the membrane by the osmotic pressure exerted by the concentrated sugar solution. Microbes and contaminating molecules wouldn’t be able to get through the membrane, if its pores are small enough, and you would end up with clean sugar solution. There’s a small company from Oregon, USA, HTI, which has commercialised just such a product. Essentially, it produces something like a sports drink from dirty or brackish water, and as such it has started to prove its value for the military and in disaster relief situations. But what happens if you want to produce not sugar solution, but clean water? If you replace the sugar with magnetic nanoparticles, then you can sweep the particles away with a magnetic field and use them again to produce another batch, yielding clean water from simple equipment with only a small cost in energy.
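To get a sense of how strong this osmotic “pull” is, here’s a rough estimate using the ideal-solution van ’t Hoff relation. The one-molar concentration is an assumption chosen purely for illustration, and real concentrated syrups and nanoparticle suspensions are far from ideal, so treat the result as an order-of-magnitude figure only.

```python
# Order-of-magnitude estimate of the osmotic pressure driving forward osmosis,
# using the van 't Hoff relation pi = c * R * T for an ideal dilute solution.
# The concentration below is assumed for illustration.

R = 8.314        # gas constant, J / (mol K)
T = 298.0        # room temperature, K
c_solute = 1000  # mol / m^3, i.e. a 1 molar sugar solution (assumed)

pi_pascals = c_solute * R * T
print(f"Osmotic pressure: {pi_pascals / 1e5:.1f} bar "
      f"(~{pi_pascals / 101325:.1f} atm) pulling water across the membrane")
```

Even this idealised estimate gives a pressure of tens of atmospheres, which is why a simple bag of syrup (or of nanoparticles) can draw water through a membrane without any pumps.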

The illustration of ferritin is taken from the Protein Data Bank’s Molecule of the Month feature. The drawing is by David S. Goodsell, based on the structure determined by Lawson et al., Nature 349, p. 541 (1991).

Molecular mail-bags

When cells need to wrap up a molecule for safe delivery elsewhere, they use a lipid vesicle, or liposome. The building block for a liposome is a lipid bilayer which has folded back on itself to create a closed spherical shell. Liposomes are relatively easy and cheap to make synthetically, and they already find applications in drug delivery systems and expensive cosmetics. But liposomes are delicate – their walls are as thin and insubstantial as a soap bubble – and a much more robust product is obtained if the lipids are replaced by block copolymers; these tough molecular bags are known as polymersomes.

Polymersomes were first demonstrated in 1999 by Dennis Discher and Daniel Hammer at the University of Pennsylvania, together with Frank Bates at the University of Minnesota. Here at the University of Sheffield, Giuseppe Battaglia, a PhD student supervised by my collaborator Tony Ryan in the Sheffield Polymer Centre, has been working on polymersomes as part of our research programme in soft nanotechnology; last night he took this spectacular image of a polymersome using transmission electron microscopy on a frozen and stained sample.

Cryo-TEM image of a polymersome

The polymersome is made from diblock copolymers – molecules consisting of two polymer chains joined covalently at their ends – of butylene oxide and ethylene oxide. The hydrophobic butylene oxide segment forms the tough, rubbery wall of the bag, while the ethylene oxide segments extend out into the surrounding water like a fuzzy coating. This hydrophilic coating stabilises the bilayer, but it will also protect the polymersome from any sticky molecules that would otherwise adsorb on its surface. This is important for any potential medical applications; this kind of protein-repelling layer is just what you need to make the polymersome biocompatible. What is remarkable about this micrograph, obtained using the facilities of the cryo-Electron Microscopy Group in the Department of Molecular Biology and Biotechnology at the University of Sheffield, is that this diffuse, fuzzy layer is visible extending beyond the sharply defined hydrophobic shell of the polymersome.

Now that we can make these molecular delivery vehicles, we need to work out how to propel them to their targets and induce them to release their loads. We have some ideas about how to do this, and I hope I’ll be able to report further progress here.

Exploiting evolution for nanotechnology

In my August Physics World article, The future of nanotechnology, I argued that fears of the loss of control of self-replicating nanobots – resulting in a plague of grey goo – were unrealistic, because it was unlikely that we would be able to “out-engineer evolution”. This provoked this interesting response from a reader, reproduced here with his permission:

Dr. Jones,
I am a graduate student at MIT writing an article about the work of Angela Belcher, a professor here who is coaxing viruses to assemble transistors. I read your article in Physics World, and thought the way you stated the issue as a question of whether we can “out-engineer evolution” clarified current debates about the dangers of nanotechnology. In fact, the article I am writing frames the debate in your terms.

I was wondering whether Belcher’s work might change the debate somewhat. She actually combines evolution and engineering. She directs the evolution of peptides, starting with a peptide library, until she obtains peptides that cling to semiconductor materials or gold. Then she genetically engineers the viruses to express these peptides so that, when exposed to semiconductor precursors, they coat themselves with semiconductor material, forming a single crystal around a long, cylindrical capsid. She also has peptides expressed at the ends that attach to gold electrodes. The combination of the semiconducting wire and electrodes forms a transistor.

Now her viruses are clearly not dangerous. They require a host to replicate, and they can’t replicate once they’ve been exposed to the semiconducting materials or electrodes. They cannot lead to “gray goo.”

Does her method, however, suggest the possibility that we can produce things we could never engineer? Might this lead to molecular machines that could actually compete in the environment?

Any help you could provide in my thinking through this will be appreciated.

Thank you,

Kevin Bullis

Here’s my reply:
Dear Kevin,
You raise an interesting point. I’m familiar with Angela Belcher’s work, which is extremely elegant and important. I touch a little bit on this approach, in which evolution is used in a synthetic setting as a design tool, in my book “Soft Machines”. At the molecular level the use of some kind of evolutionary approach, whether executed at a physical level, as in Belcher’s work, or in computer simulation, seems to me to be unavoidable if we’re going to be able to exploit phenomena like self-assembly to the full.

But I still don’t think it fundamentally changes the terms of the debate. I think there are two separate issues:

1. is cell biology close to optimally engineered for the environment of the (warm, wet) nanoworld?

2. how can we best use design principles learnt from biology to make useful synthetic nanostructures and devices?

In this context, evolution is an immensely powerful design method, and it’s in keeping with the second point that we need to learn to use it. But even though using it might help us approach biological levels of optimality, one can still argue that it won’t help us surpass them.

Another important point revolves around the question of what is being optimised, or in Darwinian terms, what constitutes “fitness”. In our own nano-engineering, we have the ability to specify what is being optimised, that is, what constitutes “fitness”. In Belcher’s work, for example, the “fittest” species might be the one that binds most strongly to a particular semiconductor surface. This is quite different as a measure of fitness than the ability to compete with bacteria in the environment, and what is optimal for our own engineering purposes is unlikely to be optimal for the task of competing in the environment.

Best wishes,
Richard

To which Kevin responded:

Richard,
It does seem likely that engineering fitness would not lead to environmental fitness. Belcher’s viruses, for example, would seem to have a hard time in the real world, especially once coated in a semiconductor crystal. What if, however, someone made environmental fitness a goal? This does not seem unimaginable. Here at MIT engineers have designed sensors for the military that provide real-time data about the environment. Perhaps someday the military will want devices that can survive and multiply. (The military is always good for a scare. Where would science fiction be without thoughtless generals?)

This leads to the question of whether cells have an optimal design, one that can’t be beat. It may be that such military sensors will not be able to compete. Belcher’s early work had to do with abalone, which evolved a way to transform chalk into a protective lining of nacre. Its access to chalk made an adaptation possible that, presumably, gave it a competitive advantage. Might exposure to novel environments give organisms new tools for competing? I think now also of invasive species overwhelming existing ones. These examples, I realize, do not approach gray goo. As far as I know we’ve nothing to fear from abalone. Might they suggest, however, that novel cellular mechanisms or materials could be more efficient?

Kevin

To which I replied:
Kevin,
It’s an important step forward to say that this isn’t going to happen by accident, but as you say, this does leave the possibility of someone doing it on purpose (careless generals, mad scientists…). I don’t think one can rule this out, but I think our experience says that for every environment we’ve found on earth (from what we think of as benign, e.g. temperate climates on the earth’s surface, to ones that we think of as very hostile, e.g. hot springs and undersea volcanic vents) there’s some organism that seems very well suited for it (and which doesn’t work so well elsewhere). Does this mean that such lifeforms are always absolutely optimal? A difficult question. But moving back towards practicality, we are so far from understanding how life works at the mechanistic level that would be needed to build a substitute from scratch, that this is a remote question. It’s certainly much less frightening than the very real possibility of danger from modifying existing life-forms, for example by increasing the virulence of pathogens.

Best wishes,
Richard

Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in the shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shapes to fit each other better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low-frequency collective vibrations are lowered further in frequency – the molecule effectively becomes softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
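To make the entropy argument concrete, here’s a minimal numerical sketch for classical harmonic vibrational modes. The amount of softening and the number of modes affected are invented purely for illustration, and are not taken from Smith’s simulations.

```python
# Minimal sketch of the vibrational-entropy argument. For a classical harmonic
# mode of frequency omega, S = k_B * (1 + ln(k_B*T / (hbar*omega))), so if
# binding softens a mode from omega_free to omega_bound the entropy change per
# mode is delta_S = k_B * ln(omega_free / omega_bound), contributing -T*delta_S
# to the binding free energy. Softening factor and mode count are assumed.

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K

softening = 0.8      # omega_bound / omega_free: assumed 20% softening on binding
n_modes = 10         # number of low-frequency collective modes affected (assumed)

dS_per_mode = k_B * math.log(1.0 / softening)
dG = -T * n_modes * dS_per_mode          # entropic contribution to binding free energy

kJ_per_mol = dG * 6.022e23 / 1000.0
print(f"Entropic free-energy contribution: {kJ_per_mol:.1f} kJ/mol "
      "(negative means it favours binding)")
```

Even with these modest, made-up numbers the entropic contribution comes out at a few kJ/mol – the same order as the thermal energy scale – which is why a softening of the collective vibrations can tip the balance towards binding.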

A highly simplified theoretical model of allosteric binding, solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for the full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought about how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you cool a protein far enough there’s a temperature – a glass transition temperature – below which these low-frequency vibrations stop; this coincides with the temperature at which the protein stops functioning. More direct evidence comes from a rather difficult and expensive technique called quasi-elastic neutron scattering, which can probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines, such as the light-driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important conclusion emerging from all this is that protein-based machines don’t work despite their floppiness and their constant random flexing and vibration – they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.