A session at the British Association’s annual meeting in September, which this year is being held in Dublin, is devoted to a debate on the topic “Should we enhance ourselves: does nanotechnology have limits?”. The debate, which runs from 7 pm to 9 pm on Tuesday 6 September, has been put together by Donald Bruce, the Director of the Church of Scotland’s Science, Religion and Technology Project. The speakers are myself, Donald, and Paul Galvin, team leader for Nanobiotechnology at the Tyndall National Institute in Cork.
What’s going to be the quickest way of achieving some kind of radical nanotechnology, in which sophisticated nanoscale machines carry out complex chemical tasks? Since nature has evolved sophisticated and effective nanomachines that are optimised for the nanoscale environment, an obvious approach is to take components from living systems and reassemble them to do the tasks you want. This is the approach of bionanotechnology. But we could take this logic further. Rather than rebuilding systems from individual biological components, we could take a complete organism, strip out the functions we don’t want, and patch in the genetic code for the components we need. This top-down approach to bionanotechnology is exactly what is being proposed by a new company, Synthetic Genomics Inc, founded in June by Craig Venter. Venter is, of course, the scientist behind the private sector venture to sequence the human genome. The initial focus will be on the use of these partly synthetic organisms to make alternative fuels such as hydrogen and ethanol.
The vehicle for these strange hybrids is likely to be the parasitic bacterium Mycoplasma genitalium, an unwelcome inhabitant of some people’s urinary tracts, which currently has the distinction of having the smallest known genome. This is contained on a mere 580,000 base pairs of DNA, coding for about 480 proteins and 40 RNA molecules. Venter’s group systematically knocked out genes from this organism in an attempt to find a so-called minimal genome. One can think of this as the simplest possible fully functioning life-form (though such an organism would be very restricted in the environments it could live in). In Venter’s 1999 paper in Science, Global transposon mutagenesis and a minimal mycoplasma genome, around 100 protein-coding genes were eliminated without fatally compromising the organism’s existence. Having stripped the organism down to a minimal level of complexity, the idea would be to reinsert synthetic genes coding for whatever machinery you require.
There are two questions to ask about this: will it work, and should it be done? It’s certainly a very bold commitment to a very reductionist view of life: in the company’s words, “using the genome as a bio-factory, a custom designed, modular cassette system will be developed so that the organism executes specific molecular functions”. As for the ethics of the enterprise, I’m sure even the most enthusiastic technophile would at least pause to think about the implications of attempting to re-engineer life on this scale. Indeed, Venter’s group commissioned their own bioethicists to think about the issues, and this ethical commentary accompanied their original Science article. This is just the beginning of a very big story.
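To get a feel for the scale of the engineering involved, here is a back-of-envelope calculation in Python using the genome figures quoted above (the numbers are approximate, and the arithmetic is purely illustrative):

```python
# Back-of-envelope numbers for the Mycoplasma genitalium genome,
# using the figures quoted above (all values approximate).
genome_bp = 580_000   # base pairs in the full genome
protein_genes = 480   # protein-coding genes
rna_genes = 40        # RNA genes

total_genes = protein_genes + rna_genes
avg_gene_bp = genome_bp / total_genes
print(f"average gene size: {avg_gene_bp:.0f} bp")

# If roughly 100 of the protein-coding genes turn out to be dispensable,
# a 'minimal genome' would retain about:
minimal_protein_genes = protein_genes - 100
print(f"protein genes in a minimal genome: ~{minimal_protein_genes}")
```

Even a minimal organism, then, is a machine with several hundred interacting molecular parts, each around a thousand base pairs of code.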
The kind of DNA-based nanotechnology pioneered by New York University’s Ned Seeman is currently the closest thing we have to the radical aim of making nanoscale structures and machines with atomic precision, but the development of the technology is limited by cost. DNA is an expensive molecule – currently it costs about $5000 a gram to make short, synthetic DNA sequences.
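To put that figure in perspective, here is a rough Python calculation comparing DNA with a commodity polymer, and working out what a nanomole of a short oligonucleotide costs. The commodity polymer price and the 330 g/mol average mass per nucleotide are standard rough values, not figures from any particular supplier:

```python
# Rough cost comparison, assuming the ~$5000/gram figure quoted above.
dna_cost_per_g = 5000.0          # synthetic oligonucleotides, $/gram
polyethylene_cost_per_g = 0.001  # ~$1/kg for a commodity polymer, $/gram

ratio = dna_cost_per_g / polyethylene_cost_per_g
print(f"DNA is ~{ratio:.0e} times the price of a commodity polymer by mass")

# Per-molecule view: average molar mass of DNA is roughly 330 g/mol
# per nucleotide (a standard rough figure).
bases = 20
molar_mass = bases * 330.0                       # g/mol for a 20-mer
cost_per_nmol = dna_cost_per_g * molar_mass * 1e-9
print(f"cost of 1 nanomole of a {bases}-mer: ${cost_per_nmol:.3f}")
```

Nanomole quantities are cheap enough for laboratory demonstrations; it is scaling up to material quantities, grams and beyond, where the six-orders-of-magnitude price gap bites.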
The cost of synthetic DNA has been dropping, but a new company is promising order-of-magnitude reductions in the cost of much longer sequences of DNA. The company, Codon Devices, is commercialising methods developed in George Church’s group at Harvard Medical School – the method is described in this Nature paper (subscription required for full paper): Accurate multiplex gene synthesis from programmable DNA microchips.
It’s not DNA nanotechnology that the company cites as its major potential market, though. Their ambition is to make synthetic genes for synthetic organisms, in the emerging field of synthetic biology.
RNA interference is one of the most fascinating biological discoveries of the last few years, and there’s excitement that it could lead to a new class of powerful drugs which would be an absolutely specific treatment both for viral diseases and cancers. But these drugs, based on short lengths of RNA, need to be introduced into the target cell. A recent paper in Nature Biotechnology – Potent and persistent in vivo anti-HBV activity of chemically modified siRNAs by Morrissey et al (subscription required) – suggests that encapsulating the RNA in a liposome can do the job.
In the normal process of gene expression, the genetic code for a protein is transferred from the cell’s DNA, where the information is stored, to the ribosome, where the corresponding protein is made; the carrier of this information is a molecule of RNA – messenger RNA. It turns out that there’s a naturally occurring cellular process that destroys messenger RNA when it’s been marked with a short piece of RNA which binds to it. This RNA interference process was named Science Magazine’s breakthrough of the year in 2002 (needs free registration). These short interfering RNA molecules can thus be used to inactivate one individual gene. To quote from a January 2004 article by Richard Robinson in Public Library of Science: Biology – RNAi Therapeutics: How Likely, How Soon? – “The clinical applications appear endless: any gene whose expression contributes to disease is a potential target, from viral genes to oncogenes to genes responsible for heart disease, Alzheimer’s disease, diabetes, and more.”
But bits of free RNA floating around the body are soon identified and destroyed – after all, they are most likely to originate in viruses. And the highly charged RNA molecule can’t penetrate the lipid bilayer that separates a cell from its surroundings. To quote from the Robinson article again: “stability and delivery are also the major obstacles to successful RNAi therapy, obstacles that are intrinsic to the biochemical nature of RNA itself, as well as the body’s defenses against infection with foreign nucleotides.” The Nature Biotechnology article describes the work of scientists from a pharmaceutical company trying to bring this technology to the clinic – Sirna Therapeutics. They have shown that by using a lipid-based nanoparticle delivery system they can get good results treating hepatitis B virus in an animal model. The delivery system is essentially a liposome, a self-assembled hollow shell formed by a phospholipid sheet which has folded round on itself to form an enclosed surface, but I suspect there’s quite a lot of art to selecting the mixture of lipids to use. This includes charged lipids which probably bind to the RNA, lipids to promote uptake of the delivery device by the cell, and lipids bound to protective polyethylene glycol hairs to disguise the liposomes from the body’s defenses.
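The targeting logic of RNAi is, at heart, simple sequence complementarity, which is why it is so specific. The following Python toy illustrates the idea with invented sequences (real siRNA design involves much more, including the chemical modifications and off-target screening discussed in the papers above):

```python
# Toy illustration of the sequence logic behind RNAi: an siRNA guide strand
# marks a messenger RNA for destruction by base-pairing with a perfectly
# complementary stretch of it. All sequences below are invented.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence (Watson-Crick pairing)."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def find_target(mrna: str, guide: str) -> int:
    """Index in the mRNA where the guide strand would pair,
    or -1 if there is no perfectly complementary site."""
    return mrna.find(reverse_complement(guide))

mrna = "AUGGCUUACGGAUCCGUAACGGCUAA"           # made-up message
guide = reverse_complement("UACGGAUCCGUAA")   # guide against its middle

site = find_target(mrna, guide)
print(f"guide pairs with the mRNA starting at position {site}")
```

A single-base change in the target abolishes the perfect match, which is the root of the technique’s gene-by-gene specificity.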
The newly relaunched Foresight Institute – now officially the Foresight Nanotech Institute, with a mission of “Advancing Beneficial Nanotechnology” – holds its annual conference from October 22nd to 27th in San Francisco. I was very pleased to get an invitation to talk in the first part of the meeting – the Vision Weekend. I’ll be taking the opportunity to set out some of my more speculative thoughts about how we might learn lessons from nature to make a radical nanotechnology based on some of the design principles used by cell biology.
Electrical phenomena are important in biology, as Galvani discovered long ago when he learnt to make dead frogs twitch. But in biology electrical currents are generally carried by currents of ions rather than electrons. The transport of electrons is important in processes like photosynthesis, but the distances over which the electrons are transported are very small – the nanometer or two that defines the thickness of a lipid membrane. So the discovery of what look like electrically conducting nanowires in a soil bacterium is rather surprising. The discovery, from a group at UMASS Amherst (press release here), was reported in Nature (subscription required for full article) a few weeks ago.
The bacteria in question are soil bacteria that make their living by metabolising iron; to do this they seem to have evolved electrically conducting filaments called pili that allow them to do electrochemistry at a distance on a particle of iron oxide. Pili are common in many types of bacteria; they’re used by pathogenic bacteria to inject toxins into host cells, and for transfer of DNA between bacteria. They’re composed of protein molecules which self-assemble into long filaments, which are anchored into the bacterial cell wall by a large protein complex.
This report still leaves some unanswered questions in my mind. The conductivity of the pili was measured using atomic force microscope based conductance mapping of a graphite surface decorated with pili that had been broken off bacterial surfaces; it would be more convincing (though much more difficult) to quantify the conductivity along the length of the filament, rather than across its thickness. More importantly, perhaps, it doesn’t yet seem to be clear what structural feature of the pilus-making protein in this particular bacterium leads to its electrical conductivity (as opposed to pili from other types of bacteria, which are shown in the paper to be non-conductive). It’s still a remarkable and suggestive result, though.
Thanks to Jim Moore for a comment drawing my attention to this press release.
This is a draft of a piece I’ve been invited to write for the special edition of Journal of Polymer Science: Polymer Physics Edition that is associated with the March meeting of the American Physical Society. The editors invited views from a few people about where they saw the future of polymer science. Here’s my contribution, with themes that will be familiar to readers of Soft Machines. Since the intended audience consists of active researchers in polymer science, the piece has more unexplained technical language than I usually use here.
In the first half of the twentieth century, polymer science and biochemistry developed together. With synthetic polymer chemistry in its infancy, most laboratory examples of macromolecules were of natural origin, and the conceptual foundations of polymer science, such as Staudinger’s macromolecular hypothesis, were as important for biology as for chemistry. Techniques for the physical characterisation of macromolecules, like Svedberg’s ultracentrifuge, were applied as much to biological macromolecules as synthetic ones. But with the tremendous development of the field of structural biology that x-ray protein crystallography made possible, the preoccupations of polymer science increasingly diverged from those of what was now being termed molecular biology. The issues that are so central to protein structure – secondary and tertiary structural motifs, ligand-receptor interactions and allostery – had no real analogue in synthetic polymer science. Meanwhile, the issues that exercised polymer scientists – crystallisation, melt dynamics and rheology – had little relevance to biology. Of course there were exceptions, but conceptually and culturally the two disciplines had become worlds apart.
I believe that over the next fifty years we need to see much more interaction between polymer science and cell biology. In polymer science, we’ve seen the focus shift away from the properties of bulk materials to the search for new functionality by design at the molecular level. In cell biology, the new methods of single molecule biophysics permit us to study the behaviour of biological macromolecules in their natural habitat, rather than in a protein crystal, allowing us to see how these molecular machines actually work. Meanwhile synthetic polymer chemistry has started to give us access to control over molecular architecture. This is not yet at the precision that we obtain from biology, but we are already seeing the exploitation of non-trivial macromolecular architectures to achieve control over structure and function. The next stage is surely to take the insights from single molecule biophysics about how biological molecular machines work and design synthetic molecules to perform similar tasks.
We could call this field biomimetic nanotechnology. Biomimetics, of course, is a well-known field in material science; what we are talking about here is biomimetics at the level of single molecules, at the level of cell biology. Can we make synthetic analogues of molecular motors and other energy conversion devices? Can we learn from membrane biophysics to make selective pumps and valves, which would allow the easy and energy-efficient separation and sorting of molecules? Will it be possible to create any synthetic analogue of the systems of molecular sensing, communication and computation that systems biology is just starting to unravel? It’s surely only by achieving this degree of nanoscale control that the promise of molecular medicine could be fulfilled, to give just one example of a potential application.
What are the areas of polymer science that need to be advanced to enable these developments? Obviously, in polymer chemistry, synthesis with precise architectural control is key, and achieving this goal in water-soluble systems is going to be important if this technology is going to find wide use, particularly in medical applications. Polymer physicists are still much less comfortable dealing with systems involving water and charges than with polymer solutions in simple non-polar solvents, and we’ll need more work to ensure that we have a good understanding of the physical environment in which our devices will be operating.
The importance of self-assembly as a central theme will continue to grow. This way of creating intricate nanostructures by programmed interactions in macromolecules is well known to polymer science; witness the richness of the morphologies that can be obtained in block copolymer systems. But in comparison with the sophistication of biological self-assembly, synthetic self-assembly still operates at a very crude level. One new element that we should import from biology is the exploitation of secondary structure and its coupling to nanoscale morphology. Another important idea is to exploit the single chain folding of a sequenced copolymer in an analogue of protein folding. This, of course, would require considerable precision in synthesis, but theoretical developments are also necessary. We have learnt from the theory of protein folding that only a small fraction of possible sequences are foldable, so we will need to learn how to design foldable sequences.
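The point that foldable sequences are rare can be illustrated numerically with the standard HP lattice model, in which a chain of hydrophobic (H) and polar (P) monomers is folded on a square lattice and the energy simply counts H-H contacts. The brute-force Python sketch below counts how many sequences of a short chain have a unique lowest-energy conformation; it is a toy illustration, not a reproduction of any particular calculation in the literature:

```python
# Count 'foldable' sequences in the 2D HP lattice model: a sequence counts
# as foldable here if it has a unique lowest-energy self-avoiding
# conformation with at least one H-H contact. Toy-scale brute force.
from itertools import product

N = 8  # chain length in monomers; kept small so brute force is quick

def conformations(n):
    """All self-avoiding walks of n monomers, one per shape (symmetry
    removed by fixing the first step rightwards and the first vertical
    step, if any, upwards)."""
    walks = []
    def extend(path, turned):
        if len(path) == n:
            walks.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if dy == -1 and not turned:
                continue  # forbid 'down' before any 'up': kills mirror copies
            nxt = (x + dx, y + dy)
            if nxt not in path:
                extend(path + [nxt], turned or dy != 0)
    extend([(0, 0), (1, 0)], False)
    return walks

def contact_pairs(walk):
    """Monomer pairs adjacent on the lattice but not along the chain."""
    pairs = []
    for i in range(len(walk)):
        for j in range(i + 2, len(walk)):
            (xi, yi), (xj, yj) = walk[i], walk[j]
            if abs(xi - xj) + abs(yi - yj) == 1:
                pairs.append((i, j))
    return pairs

walks = conformations(N)
contacts = [contact_pairs(w) for w in walks]

foldable = 0
for seq in product("HP", repeat=N):
    # energy = -(number of H-H contacts); lower is better
    energies = [-sum(seq[i] == "H" == seq[j] for i, j in cp)
                for cp in contacts]
    ground = min(energies)
    if ground < 0 and energies.count(ground) == 1:
        foldable += 1

print(f"{foldable} of {2**N} sequences have a unique ground state")
```

Even at this toy scale, most sequences either make no favourable contacts or share their lowest energy among many conformations; designing for a unique fold is the hard part.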
Another important principle will be exploiting molecular shape change. In biology, this principle underlies the operation of most sophisticated nanoscale machines, including molecular motors, ion channel proteins and signalling molecules. In polymer physics the phenomenon of the coil-globule transition in response to changing solvent conditions is well known and has its macroscopic counterpart in thermoresponsive gels. To be widely useful, we need to engineer responsive systems with much more specific triggers and with a more highly amplified response. One promising way of doing this uses the coupling between transitions in secondary structure and global conformation; however we’re still a long way from the remarkable lever arms of biological motor proteins, in which rather subtle changes at a binding site produce a large overall mechanical response.
Some of the most powerful ideas from biology still remain essentially unexploited. An obvious one is, of course, evolution. At the molecular level, evolution offers a spectacularly powerful way of searching multidimensional parameter spaces to find efficient design solutions. It’s arguable that, given the combinatorial complexity that arises with even modest degrees of architectural control and our unfamiliarity with the design rules that are appropriate for the nanoscale environment, significant progress will positively require some kind of evolutionary approach, whether that is executed in computer simulation or with real molecules.
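As a minimal sketch of what such an evolutionary search might look like in simulation, here is a toy genetic algorithm in Python that evolves two-letter copolymer sequences against a deliberately artificial fitness function. Everything here is invented for illustration; real work would score candidates by simulated or measured folding behaviour rather than a one-line formula:

```python
# Toy evolutionary search over copolymer sequences: mutation plus
# truncation selection against an artificial fitness function.
import random

random.seed(1)
ALPHABET = "HP"
LENGTH = 20

def fitness(seq):
    # Toy design goal: strictly alternating H and P monomers
    # (maximum score is LENGTH - 1).
    return sum(a != b for a, b in zip(seq, seq[1:]))

def mutate(seq, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

# Start from a random population, then iterate: rank, keep the fittest,
# refill the population with mutated copies of the survivors.
pop = ["".join(random.choice(ALPHABET) for _ in range(LENGTH))
       for _ in range(50)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = [mutate(random.choice(parents)) for _ in range(50)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

The search never sees the design rules explicitly; it only needs a way to score candidates, which is exactly why evolution is attractive when the rules of the nanoscale environment are unknown.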
Perhaps the most fundamental difference between the operating environments of biology and polymer science is the question of thermodynamic equilibrium. Polymer scientists are used to systems at, or perturbed slightly away from, equilibrium, while biological systems are driven far from equilibrium by a continuous energy input. How can we incorporate this most basic feature of life into our synthetic devices? What will be our synthetic analogue of life’s universal energy currency, adenosine triphosphate?
Ultimately, what we are talking about here is the reverse engineering of biology. It’s obvious that the gulf between the crudities of synthetic polymer science and the intricacies of cell biology is currently immense (certainly quite big enough to mean that the undoubted ethical issues that would arise if we could make any kind of reasonable facsimile of life are still very distant). Nonetheless, even rudimentary devices inspired by cell biology would be of huge practical benefit. Potentially even more significant a benefit than this, though, would be the deep understanding of the workings of biology that would arise from trying to copy it.
The operation of most living organisms, from bacteria like E. coli to multi-cellular organisms like ourselves, depends on molecular motors. These are protein-based machines which convert chemical energy to mechanical energy; the work our muscles do depends on many billions of these nanoscale machines all operating together, while individual motors propel bacteria or move materials around inside our cells. Molecular motors work in a very different way to the motors we are familiar with on the macroscopic scale, as has been revealed by some stunning experiments combining structural biology with single molecule biophysics. A good place to start getting a feel for how they work is with these movies of biological motors from Ronald Vale at UCSF.
The motors we use at the macroscopic scale to convert chemical energy to mechanical energy are heat engines, like petrol engines and steam turbines. The fuel is first burnt to convert chemical energy to heat energy, and this heat energy is then converted to useful work. Heat engines rely on the fact that you can maintain part of the engine at a higher temperature than the general environment. For example, in a petrol engine you burn the fuel in a cylinder, and then you extract work by allowing the hot gases to expand against a piston. If you made a nanoscale petrol engine, it wouldn’t work, because the heat would diffuse out of the cylinder walls, cooling the gas down before it had a chance to expand. This is because the time taken for a hot body to cool down to ambient temperature depends on the square of its size. At the nanoscale, you can’t maintain significant temperature gradients for any useful length of time, so nanoscale motors have to work at constant temperature. The way biological molecular motors do this is by exploiting molecular shape change – the power stroke is provided by a molecule changing shape in response to the binding and unbinding of the fuel molecules and their products.
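The square-law scaling makes the numbers stark. Here is a rough Python estimate of the diffusive cooling time at different scales, using the thermal diffusivity of water as a stand-in for the wet cellular environment (order-of-magnitude only; a real engine loses its heat through metal walls, not water):

```python
# Diffusive cooling time scales as L^2 / D, where D is the thermal
# diffusivity. We use D for water as a stand-in for the wet nanoscale
# environment; all numbers are order-of-magnitude estimates only.
D = 1.4e-7  # thermal diffusivity of water, m^2/s

def cooling_time(L):
    """Characteristic time (s) for a hot region of size L (m) to cool."""
    return L ** 2 / D

for label, L in [("engine-cylinder scale (~10 cm)", 0.1),
                 ("bacterium (~1 micron)", 1e-6),
                 ("protein-sized machine (~10 nm)", 1e-8)]:
    print(f"{label}: ~{cooling_time(L):.1e} s")
```

A ten-nanometre hot spot is gone in under a nanosecond, far faster than any power stroke could exploit it, which is why nanoscale motors must run isothermally.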
In our research at Sheffield we’ve been trying to learn from nature to make crude synthetic molecular motors that operate in the same way, by using molecular shape changes. The molecule we use is a polymer with weak acidic or basic groups along the backbone. For a polyacid, for example, in acidic conditions the molecule is uncharged and hydrophobic; it takes up a collapsed, compact shape. But when the acid is neutralised, the molecule ionises and becomes much more hydrophilic, substantially expanding in size. So, in principle we could use the expansion of a single molecule to do work.
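To a first approximation, ignoring the electrostatic interactions between neighbouring charges on the chain (which in reality matter a great deal for polyelectrolytes), the fraction of ionised acid groups follows the familiar Henderson-Hasselbalch relation. A quick Python sketch, with a made-up but typical pKa:

```python
# Why pH changes make a weak polyacid swell: the ionised fraction of
# acid groups versus pH, via the Henderson-Hasselbalch relation.
# This ignores charge-charge interactions along the chain; the pKa is
# a made-up but typical value for an acrylic-acid-like monomer.
pKa = 4.7

def ionised_fraction(pH):
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (2, 4, 6, 8):
    f = ionised_fraction(pH)
    print(f"pH {pH}: {100 * f:.1f}% of groups charged")
```

Sweeping the pH across the pKa switches the chain from almost uncharged (collapsed and hydrophobic) to almost fully charged (expanded), which is the shape change we want to harness.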
How can we clock the motor, so that rather than just expanding a single time, our molecule will repeatedly cycle between the expanded and the compact shape? In biology, this happens because the reaction of the fuel molecule is actually catalysed by the motor molecule. Our chemistry isn’t good enough to do this yet, so we use a much cruder approach.
We use a class of chemical reactions in which the chemical conditions spontaneously oscillate, despite the fact that the reactants are added completely steadily. The most famous of these reactions is the Belousov-Zhabotinsky reaction (see here for an explanation and a video of the experiment). With the help of Steve Scott from the University of Leeds, we’ve developed an oscillating reaction in which the acidity spontaneously oscillates over a range that is sufficient to trigger a shape change in our polyacid molecules.
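For readers who want to see how a steadily fed reaction can oscillate at all, here is a toy Python integration of the two-variable Oregonator, the standard reduced model of Belousov-Zhabotinsky-type chemistry. Our own system is a pH oscillator rather than the BZ reaction, so this illustrates the principle, not our experiment; the parameter values are textbook ones, not fitted to any real system:

```python
# Toy integration of the two-variable Oregonator model: steady feed plus
# nonlinear kinetics gives spontaneous relaxation oscillations.
# Parameters are standard textbook values (not fitted to any real system).
eps, q, f = 0.04, 8e-4, 1.0

def step(x, z, dt):
    # x: fast activator-like species; z: slow oxidised-catalyst-like species
    dx = (x * (1 - x) + f * z * (q - x) / (q + x)) / eps
    dz = x - z
    return x + dt * dx, z + dt * dz

x, z, dt = 0.1, 0.1, 1e-4
history = []
for i in range(300_000):   # simple Euler integration out to t = 30
    x, z = step(x, z, dt)
    if i > 150_000:        # discard the initial transient
        history.append(x)

print(f"x oscillates between {min(history):.2g} and {max(history):.2g}")
```

The concentration x spikes and collapses over nearly three orders of magnitude, cycle after cycle, with no oscillating input; an oscillation of acidity driving our polyacid works on the same feedback principle.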
You can see a progress report on our efforts in a paper in Faraday Discussions 128; the abstract is here and you can download the full paper as a PDF here (this is available under the author rights policy of the Royal Society of Chemistry, who own the copyright). We’ve been able to demonstrate the molecular shape change in response to the oscillating chemical reaction at both macroscopic and single chain level in a self-assembled structure. What we’ve not yet been able to do is directly measure the force generated by a single molecule; in principle we should be able to do this with an atomic force microscope whose tip is connected to a single molecule, the other end of which is grafted to a firm surface, but this has proved rather difficult to do in practice. This is high on our list of priorities for the future, together with some ideas about how we can use this motor to do interesting things, like propel a nanoscale object or pump chemicals across a membrane.
This work is a joint effort of my group in the physics department and Tony Ryan’s group in chemistry. In physics, Mark Geoghegan, Andy Parnell, Jon Howse, Simon Martin and Lorena Ruiz-Perez have all been involved in various aspects of the project, while the chemistry has been driven by Colin Crook and Paul Topham.
I’ve been covering two big debates about nanotechnology here. On the one hand, there’s the question of the relative merits of Drexler’s essentially mechanical vision of nanotechnology and the more biologically inspired soft and biomimetic approaches. On the other, we see the efforts of campaigning groups like ETC to paint nanotechnology as the next step after genetic modification in humanity’s efforts to degrade and control the natural world. Although these debates at first sight look very different, they both revolve around issues of control and our proper relationship with the natural world.
These issues are identified and situated in a deep historical context in a very perceptive article by Bernadette Bensaude-Vincent, of the Philosophy Department in the Université Paris X. The article, Two Cultures of Nanotechnology?, is in HYLE-the International Journal for Philosophy of Chemistry, Vol. 10, No.2 (2004).
The whole article is well worth reading, but this extract gets to the heart of the matter:
“There is nothing new in the current artificialization of nature. Already in antiquity, there were two different and occasionally conflicting views of technology. On the one hand, the arts or technai were considered as working against nature, as contrary to nature. This meaning of the term para-physin provided the ground for repeated condemnations of mechanics and alchemy. On the other hand, the arts – especially agriculture, cooking, and medicine – were considered as assisting or even improving on nature by employing the dynameis or powers of nature. In the former perspective, the artisan, like Plato’s demiurgos, builds up a world by imposing his own rules and rationality on a passive matter. Technology is a matter of control. In the latter perspective the artisan is more like the ship-pilot at sea. He conducts or guides forces and processes supplied by nature, thus revealing the powers inherent in matter. Undoubtedly the mechanicist [i.e. Drexlerian] model of nanotechnology belongs to the demiurgic tradition. It is a technology fascinated by the control and the overtaking of nature.”
Bensaude-Vincent argues that soft and biomimetic approaches to nanotechnology fall more naturally into that second culture, “conducting or guiding forces and processes supplied by nature, thus revealing the powers inherent in matter”.
One of the UK’s two flagship nanotechnology centres, the Interdisciplinary Research Collaboration in Bionanotechnology at Oxford University, was having its mid-term review yesterday; I was there in my role as a member of the external steering committee. One thing I learnt that had previously passed me by was that one of the largest industrial collaborations they have is not, as one might think, with a pharmaceutical or biomedical company, but with the Japanese telecoms company NTT.
The linkup was announced last October; the $2 million project is concentrated in the area of the study of the function of membrane proteins. Why would they be interested in this? Membrane proteins provide the mechanisms by which living cells sense their surroundings and communicate with the outside world. As the leader of the NTT side of the project, Dr Keiichi Torimitsu, is quoted as saying, “We are especially interested in this field because of the possibility of future applications in the area of human – electronic interfaces.”