How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial scale production of materials like spider silk, and in medicine the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of these. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology with constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.

There is one metaphor that is used a lot in the report which seems to me to be potentially problematic – that’s the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems which have their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract, subscription required for full article) marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley, at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and detect the different bases one by one by changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole directly, he uses an enzyme to chop a single base off the end of the DNA; as each base goes through the pore the characteristic current change is sensitive enough to identify its chemical identity. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
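To make the read-out step concrete, here is a toy sketch of what base-calling from a nanopore current trace amounts to: each base blocks the pore to a characteristic degree, so identification reduces to matching each measured current level against per-base reference levels. The current values below are invented for illustration; they are not taken from the Bayley group’s paper.

```python
# Toy nanopore base-caller: match each measured residual current
# against per-base reference levels (values are illustrative only).
REFERENCE_LEVELS_PA = {"A": 50.2, "C": 47.8, "G": 45.1, "T": 41.5}

def call_base(measured_pa: float) -> str:
    """Return the base whose reference current is closest to the measurement."""
    return min(REFERENCE_LEVELS_PA,
               key=lambda b: abs(REFERENCE_LEVELS_PA[b] - measured_pa))

def call_sequence(trace: list[float]) -> str:
    """Call one base per current level in an ordered trace."""
    return "".join(call_base(level) for level in trace)

print(call_sequence([50.0, 41.7, 45.0, 47.9]))  # -> ATGC
```

In the real device the hard problems are, of course, elsewhere: getting well-separated current levels for the four bases (the point of the engineered pore) and guaranteeing that bases arrive one at a time and in order (the point of mounting the enzyme next to the pore).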

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – this itself is an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract, subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As often is the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realisation that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery, by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule, mean that proteins can form the components of logic gates, which themselves can be linked together to form biochemical circuits. These information processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely how it is that the information processing capacity of a single cell can be quite significant.

The paper – Emergent decision-making in biological signal transduction networks (abstract, subscription required for full article in PNAS), comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is construct a large scale, realistic model of a cell signalling network in a generic eukaryotic cell. To do this, they’ve mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification they reduce the complexities of the biochemistry to simple Boolean logic – the node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing the interactions of that node with other proteins. For some more complicated cases, a single protein may be represented by more than one node, expressing the fact that there may be a number of different modified states.

This model of the cell takes in information from the outside world; sensors at the cell membrane measure the external concentration of growth factors, extracellular matrix proteins, and calcium levels. This is the input to the cell’s information processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”
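The essential mechanics of such a model, Boolean nodes whose next state is a logical function of other nodes, can be sketched in a few lines. The three-node network below is a made-up miniature for illustration, not the authors’ 130-node model; its delayed negative-feedback loop shows the kind of dynamic behaviour even a tiny Boolean circuit can produce.

```python
# Minimal Boolean-network sketch: each node is on or off, and its next
# state is a Boolean function of the current state and an external
# signal. This three-node network is invented for illustration.
rules = {
    # A "receptor" node simply follows the external input signal.
    "receptor": lambda s, signal: signal,
    # A "kinase" node switches on when the receptor is active and the
    # phosphatase is not suppressing it.
    "kinase": lambda s, signal: s["receptor"] and not s["phosphatase"],
    # A "phosphatase" node provides delayed negative feedback.
    "phosphatase": lambda s, signal: s["kinase"],
}

def step(state, signal):
    """Synchronously update every node from the current state."""
    return {node: rule(state, signal) for node, rule in rules.items()}

state = {"receptor": False, "kinase": False, "phosphatase": False}
for t in range(6):
    state = step(state, signal=True)
    print(t, state)
```

Run with the signal held on, the kinase switches on, is shut down by the feedback, and then recovers, a crude oscillation. The real model works the same way in principle, but with 130 literature-derived truth tables instead of three hand-written lambdas.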

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics; a modern microprocessor will contain several hundred million transistors, every one of which needs to be manufactured to very high tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that are the key components of most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases are folded in a unique three dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in an article in Science magazine in 2003 called Molecular Prodigality (PDF version from Bray’s own website). Protein sequences can be chopped and changed, after the DNA code has been read, by processes of RNA editing and splicing and other types of post-translational modification, and these can lead to distinct changes in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing in up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities means that there are 4.5 × 10^15 subtly different possible types of potassium channels. This isn’t an isolated example; Bray estimates that up to half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
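Bray’s figure is easy to reproduce. Potassium channels are tetramers, so if each of the four subunits can independently carry any combination of the 13 two-state editing sites, and all combinations assemble freely, the count is (2^13)^4. The sketch below is my reading of the arithmetic, not a calculation taken from the article itself.

```python
# Reproducing Bray's combinatorial estimate: 13 independent two-state
# RNA-editing sites per subunit, four subunits per channel (potassium
# channels are tetramers), assuming every combination can occur.
variants_per_subunit = 2 ** 13              # 8192 edited sequence variants
channel_variants = variants_per_subunit ** 4  # = 2 ** 52
print(f"{channel_variants:.1e}")            # -> 4.5e+15
```

That a single gene plus a handful of editing sites yields quadrillions of channel variants is exactly the “prodigality” of the title.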

It isn’t at all clear what all this variation is for, if anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true for the case of the adaptive immune system. A human has the ability to make 10^12 different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens that we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.
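The order of magnitude of that antibody library can be illustrated with some rough V(D)J arithmetic: an antibody is assembled by picking one gene segment from each of several pools, and imprecise joining of the segments multiplies the diversity further. The segment counts and junctional factor below are round illustrative numbers, not exact human values.

```python
# Illustrative V(D)J combinatorics for antibody diversity.
# All numbers are rough, for order-of-magnitude purposes only.
heavy_chains = 40 * 25 * 6   # V x D x J segment choices for a heavy chain
light_chains = 40 * 5        # V x J segment choices for a light chain
combinatorial = heavy_chains * light_chains   # pairing the two chains

junctional_factor = 1e6      # rough extra diversity from imprecise joining
total = combinatorial * junctional_factor
print(f"{total:.0e}")        # -> 1e+12, i.e. on the order of 10^12
```

The point is structural rather than numerical: a modest toolkit of reusable parts, combined freely, vastly outruns what could be stored as one gene per antibody.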

Will nanotechnology lead to a truly synthetic biology?

This piece was written in response to an invitation from the management consultants McKinsey to contribute to a forthcoming publication discussing the potential impacts of biotechnology in the coming century. This is the unedited version, which is quite a lot longer than the version that will be published.

The discovery of an alien form of life would be the discovery of the century, with profound scientific and philosophical implications. Within the next fifty years, there’s a serious chance that we’ll make this discovery, not by finding life on a distant planet or indeed by such aliens visiting us on earth, but by creating this new form of life ourselves. This will be the logical conclusion of using the developing tools of nanotechnology to develop a “bottom-up” version of synthetic biology, which instead of rearranging and redesigning the existing components of “normal” biology, as currently popular visions of synthetic biology propose, uses the inspiration of biology to synthesise entirely novel systems.

Life on earth is characterised by a stupendous variety of external forms and ways of life. To us, it’s the differences between mammals like us and insects, trees and fungi that seem most obvious, while there’s a vast variety of other unfamiliar and invisible organisms that are outside our everyday experience. Yet, underneath all this variety there’s a common set of components that underlies all biology. There’s a common genetic code, based on the molecule DNA, and in the nanoscale machinery that underlies the operation of life, based on proteins, there are remarkable continuities between organisms that on the surface seem utterly different. That all life is based on the same type of molecular biology – with information stored in DNA, transcribed through RNA to be materialised in the form of machines and enzymes made out of proteins – reflects the fact that all the life we know about has evolved from a common ancestor. Alien life is a staple of science fiction, of course, and people have speculated for many years that if life evolved elsewhere it might well be based on an entirely different set of basic components. Do developments of nanotechnology and synthetic biology mean that we can go beyond speculation to experiment?

Certainly, the emerging discipline of synthetic biology is currently attracting excitement and foreboding in equal measure. It’s important to realise, though, that the most extensively promoted visions of synthetic biology don’t propose making entirely new kinds of life; rather than aiming at a wholly synthetic alien life, they propose radically re-engineering existing life forms. In one vision, it is proposed to identify in living systems independent parts or modules that could be reassembled to achieve new, radically modified organisms that can deliver some desired outcome, for example synthesising a particularly complicated molecule. In one important example of this approach, researchers at Lawrence Berkeley National Laboratory developed a strain of E. coli that synthesises a precursor to artemisinin, a potent (and expensive) anti-malarial drug. In a sense, this field is a reaction to the discovery that genetic modification of organisms is more difficult than previously thought; rather than being able to get what one wants from an organism by altering a single gene, one often needs to re-engineer entire regulatory and signalling pathways. In these complex processes, protein molecules – enzymes – essentially function as molecular switches, which respond to the presence of other molecules by initiating further chemical changes. It’s become commonplace to make analogies between these complex chemical networks and electronic circuits, and in this analogy this kind of synthetic biology can be thought of as the wholesale rewiring of the (biochemical) circuits which control the operation of an organism. The well-publicised proposals of Craig Venter are even more radical – their project is to create a single-celled organism that has been slimmed down to have only the minimal functions consistent with life, and then to replace its genetic material with a new, entirely artificial, genome created in the lab from synthetic DNA.
The analogy used here is that one is “rebooting” the cell with a new “operating system”. Dramatic as this proposal sounds, though, the artificial life-form that would be created would still be based on the same biochemical components as natural life. It might be synthetic life, but it’s not alien.

So what would it take to make a synthetic life-form that was truly alien? It seems difficult to argue that this wouldn’t be possible in principle – as we learn more about the details of the way cell biology works, we can see that it is intricate and marvellous, but in no sense miraculous – it’s based on machinery that operates on principles consistent with the way we know physical laws operate on the nano-scale. These principles, it should be said, are very different to the ones that underlie the sorts of engineering we are used to on the macro-scale; nanotechnologists have a huge amount to learn from biology. But we are already seeing very crude examples of synthetic nanostructures and devices that use some of the design principles of biology – designed molecules that self-assemble to make molecular bags that resemble cell membranes; pores that open and close to let molecules in and out of these enclosures, molecules that recognise other molecules and respond by changes in shape. It’s quite conceivable to imagine these components being improved and integrated into systems. One could imagine a proto-cell, with pores controlling traffic of molecules in and out of it, containing a network of molecules and machines that together added up to a metabolism, taking in energy and chemicals from the environment and using them to make the components needed for the system to maintain itself, grow and perhaps reproduce.

Would such a proto-cell truly constitute an artificial alien-life form? The answer to this question, of course, depends on how we define life. But experimental progress in this direction will itself help answer this thorny question, or at least allow us to pose it more precisely. The fundamental problem we have when trying to talk about the properties of life in general, is that we only know about a single example. Only when we have some examples of alien life will it be possible to talk about the general laws, not of biology, but of all possible biologies. The quest to make artificial alien life will teach us much about the origins of our kind of life. Experimental research into the origins of life consists of an attempt to rerun the origins of our kind of life in the early history of earth, and is in effect an attempt to create artificial alien life from those molecules that can plausibly be argued to have been present on the early earth. Using nanotechnology to make a functioning proto-cell should be an easier task than this, as we don’t have to restrict ourselves to the kinds of materials that were naturally occurring on the early earth.

Creating artificial alien life would be a breathtaking piece of science, but it’s natural to ask whether it would have any practical use. The selling point of the currently most popular visions of synthetic biology is that they will permit us to do difficult chemical transformations in much more effective ways – making hydrogen from sunlight and water, for example, or making complex molecules for pharmaceutical uses. Conventional life, including the modifications proposed by synthetic biology, operates only in a restricted range of environments, so it’s possible to imagine that one could make a type of alien life that operated in quite different environments – at high temperatures, in liquid metals, for example – opening up entirely different types of chemistry. These utilitarian considerations, though, pale in comparison to what would be implied more broadly if we made a technology that had a life of its own.

A shadow biosphere?

Where are we most likely to find truly alien life? The obvious (though difficult) place to look is on another planet or moon, whether that’s under the icy crust of Europa, near the poles of Mars, or, perhaps, on one of the planets we’re starting to discover orbiting distant stars. Alternatively, we might be able to make alien life for ourselves, through the emerging discipline of bottom-up synthetic biology. But what if alien life is to be found right under our noses, right here on earth, forming a kind of shadow biosphere? This provocative and fascinating hypothesis has been suggested by philosopher Carol Cleland and biologist Shelley Copley, both from the University of Colorado, Boulder, in their article “The possibility of alternative microbial life on Earth” (PDF, International Journal of Astrobiology 4, pp. 165-173, 2005).

The obvious objection to this suggestion is that if such alien life existed, we’d have noticed it by now. But, if it did exist, how would we know? We’d be hard pressed to find it simply by looking under a microscope – alien microbial life, if its basic units were structured on the micro- or nano- scale, would be impossible to distinguish just by appearance from the many forms of normal microbial life, or for that matter from all sorts of structures formed by inorganic processes. One of the surprises of modern biology is the huge number of new kinds of microbes that are discovered when, instead of relying on culturing microbes to identify them, one directly amplifies and sequences their nucleic acids. But suppose there exists a class of life-forms whose biochemistry fundamentally differs from the system based on nucleic acids and proteins that all “normal” life depends on – life-forms whose genetic information is coded in a fundamentally different way. There’s a strong assumption that early in the ancestry of our current form of biology, before the evolution of the current DNA based genetic code, a simpler form of life must have existed. So if descendants of this earlier form of life still exist on the earth, or if life on earth emerged more than once and some of the alternative versions still exist, detection methods that assume that life must involve nucleic acids will not help us at all. Just as, until the development of the polymerase chain reaction as a tool for detecting unculturable microbes, we have been able to detect only a tiny fraction of the microbes that surround us, it’s all too plausible that if alien life did exist around us we would not currently be able to detect it.

To find such alien life would be the scientific discovery of the century. We’d like to be able to make general statements about life in general – how it is to be defined, what are the general laws, not of biology but of all possible biologies, and, perhaps, how can one design and build new types of life. But we find it difficult to do this at the moment, as we only know about one type of life and it’s hard to generalise from a single example. Even if it didn’t succeed, the effort of seriously looking for alien life on earth would be hugely rewarding in forcing us to broaden our notions of the various, very different, manifestations that life might take.

From micro to nano for medical applications

I spent yesterday at a meeting at the Institute of Mechanical Engineers, Nanotechnology in Medicine and Biotechnology, which raised the question of what is the right size for new interventions in medicine. There’s an argument that, since the basic operations of cell biology take place on the nano-scale, that’s fundamentally the right scale for intervening in biology. On the other hand, given that many current medical interventions are very macroscopic, operating on the micro-scale may already offer compelling advantages.

A talk from Glasgow University’s Jon Cooper gave some nice examples illustrating this. His title was Integrating nanosensors with lab-on-a-chip for biological sensing in health technologies, and he began with some true nanotechnology. This involved a combination of fluid handling systems for very small volumes with nanostructured surfaces, with the aim of detecting single biomolecules. This depends on a remarkable effect known as surface enhanced Raman scattering. Raman scattering is a type of spectroscopy that can detect chemical groups with what is normally rather low sensitivity. But illuminating metals with very sharp asperities hugely magnifies the light field very close to the surface, increasing sensitivity by a factor of ten million or so. Systems based on this effect, using silver nanoparticles coated so that pathogens like anthrax will stick to them, are already in commercial use. But Cooper’s group uses, not free nano-particles, but very precisely structured nanosurfaces. Using electron beam lithography his group creates silver split-ring resonators – horseshoe shapes about 160 nm across. With a very small gap one can get field enhancements of a factor of one hundred billion, and it’s this that brings single molecule detection into prospect.
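A standard rule of thumb (my gloss, not a figure from Cooper’s talk) is that the Raman enhancement scales roughly as the fourth power of the local field enhancement, because both the incident light and the scattered light are amplified by the structure. Inverting that relation shows what local field enhancements the enhancement factors quoted above would imply.

```python
# Back-of-envelope SERS arithmetic: if the Raman enhancement goes as
# g**4, where g is the local field enhancement, then g is the fourth
# root of the overall enhancement. Assumes the quoted factors are
# total Raman enhancements, which is my reading, not a stated fact.
def field_enhancement_needed(raman_enhancement: float) -> float:
    """Local field enhancement g implied by a given Raman enhancement."""
    return raman_enhancement ** 0.25

print(round(field_enhancement_needed(1e7)))   # ~56x field, for the 10^7 case
print(round(field_enhancement_needed(1e11)))  # ~562x field, for the 10^11 case
```

So the jump from colloidal particles to engineered split-ring gaps only needs the local field to be concentrated about ten times harder; the fourth-power scaling does the rest.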

On a larger scale, Cooper described systems to probe the response of single cells – his example involved using a single heart cell (a cardiomyocyte) to screen responses to potential heart drugs. This involved a pico-litre scale microchamber adjacent to an array of micron-sized thermocouples, which allow one to monitor the metabolism of the cell as it responds to a drug candidate. His final example was on the millimetre scale, though its sensors incorporated nanotechnology at some level. This was a wireless device incorporating an electrochemical blood sensor – the idea was that one would swallow this to screen for early signs of bowel cancer. Here’s an example where, obviously, smaller would be better, but how small does one need to go?

What the public think about nanomedicine

A major new initiative on the use of nanotechnology in medicine and healthcare has recently been launched by the UK government’s research councils; around £30 million (US$60 million) is expected to be available for large scale “Grand Challenge” style projects. The closing date for the first call has just gone by, so we will see in a few months how the research community has responded to this opportunity. What’s worth commenting on now, though, is the extent to which public engagement has been integrated into the process by which the call has been defined.

As the number of potential applications of nanotechnology to healthcare is very large, and the funds available relatively limited, there was a need to focus the call on just one or two areas; in the end the call is for applications of nanotechnology in healthcare diagnostics and the targeted delivery of therapeutic agents. As part of the program of consultations with researchers, clinicians and industry people that informed the decision to focus the call in this way, a formal public engagement exercise was commissioned to get an understanding of the hopes and fears the public have about the potential use of nanotechnology in medicine and healthcare. The full report on this public dialogue has just been published by EPSRC, and this is well worth reading.

I’ll be writing in more detail later both about the specific findings of the dialogue, and about the way the results of this public dialogue were incorporated into the decision-making process. Here, I’ll just draw out three points from the report:

  • As has been found by other public engagement exercises, there is a great deal of public enthusiasm for the potential uses of nanotechnology in healthcare, and a sense that this is an application that needs to be prioritised over some others.
  • People value potential technologies that empower them to have more control over their own health and their own lives, while potential technologies that reduce their sense of control are viewed with more caution.
  • People have concerns about who benefits from new technologies – while people generally see nothing intrinsically wrong with business driving nanotechnology, there’s a concern to ensure that public investment in science delivers appropriate public value.

Discussion meeting on soft nanotechnology

A forthcoming conference in London will be discussing the “soft” approach to nanotechnology. The meeting – Faraday Discussion 143 – Soft Nanotechnology – is organised by the UK’s Royal Society of Chemistry, and follows a rather unusual format. Selected participants in the meeting submit a full research paper, which is peer reviewed and circulated, before the meeting, to all the attendees. The meeting itself concentrates on a detailed discussion of the papers, rather than a simple presentation of the results.

The organisers describe the scope of the meeting in these terms: “Soft nanotechnology aims to build on our knowledge of biological systems, which are the ultimate example of ‘soft machines’, by:

  • Understanding, predicting and utilising the rules of self-assembly from the molecular to the micron-scale
  • Learning how to deal with the supply of energy into dynamically self-assembling systems
  • Implementing self-assembly and ‘wet chemistry’ into electronic devices, actuators, fluidics, and other ‘soft machines’.

An impressive list of invited international speakers includes Takuzo Aida, from the University of Tokyo, Chris Dobson, from the University of Cambridge, Ben Feringa, from the University of Groningen, Olli Ikkala, from Helsinki University of Technology, Chengde Mao, from Purdue University, Stefan Matile, from the University of Geneva, and Klaus J Schulten, from the University of Illinois. The conference will be wrapped up by Harvard’s George Whitesides, and I’m hugely honoured to have been asked to give the opening talk.

The meeting is not until this time next year, in London, but if you want to present a paper you need to get an abstract in by 11 July. Faraday Discussions in the past have featured lively discussions, to say the least; it’s a format that’s tailor-made for allowing controversies to be aired and strong positions to be taken.

Right and wrong lessons from biology

The most compelling argument for the possibility of a radical nanotechnology, with functional devices and machines operating at the nano-level, is the existence of cell biology. But one can take different lessons from this. Drexler argued that we should expect to be able to do much better than cell biology if we applied the lessons of macroscale engineering, using mechanical engineering paradigms and hard materials. My argument, though, is that this fails to take into account the different physics of the nanoscale, and that evolution has optimised biology’s “soft machines” for this environment. This essay, first published in the journal Nature Nanotechnology (subscription required, vol 1, pp 85 – 86 (2006)), reflects on this issue.

Nanotechnology hasn’t yet acquired a strong disciplinary identity, and as a result it is claimed by many classical disciplines. “Nanotechnology is just chemistry”, one sometimes hears, while physicists like to think that only they have the tools to understand the strange and counterintuitive behaviour of matter at the nanoscale. But biologists have perhaps the most reason to be smug – in the words of MIT’s Tom Knight “biology is the nanotechnology that works”.

The sophisticated and intricate machinery of cell biology certainly gives us a compelling existence proof that complex machines on the nanoscale are possible. But, having accepted that biology proves that one form of nanotechnology is possible, what further lessons should be learned? There are two extreme positions, and presumably a truth that lies somewhere in between.

The engineers’ view, if I can put it that way, is that nature shows what can be achieved with random design methods and a palette of unsuitable materials allocated by the accidents of history. If you take this point of view, it seems obvious that it should be fairly straightforward to make nanoscale machines whose performance vastly exceeds that of biology, by making rational choices of materials, rather than making do with what the accidents of evolution have provided, and by using the design principles we’ve learnt in macroscopic engineering.

    The opposite view stresses that evolution is an extremely effective way of searching parameter space, and that in consequence we should assume that biological design solutions are likely to be close to optimal for the environment for which they’ve evolved. Where these design solutions seem odd from our point of view, their unfamiliarity is to be ascribed to the different ways in which physics works at the nanoscale. At its most extreme, this view regards biological nanotechnology, not just as the existence proof for nanotechnology, but as an upper limit on its capabilities.
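    The sense in which evolution “searches parameter space” can be made concrete with a toy mutation-and-selection loop. The following sketch is purely illustrative (none of the names or parameters come from the essay): a population of candidate designs is scored against a fitness function standing in for “the environment for which they’ve evolved”, the fittest half survive each generation, and random mutations on their offspring supply the variation that selection then filters.

    ```python
    import random

    def evolve(fitness, genome_len=10, pop_size=20, generations=100, mut_rate=0.1):
        """Toy evolutionary search: keep the fittest half, mutate copies to refill.

        Because surviving parents are carried over unchanged (elitism), the best
        fitness found so far is never lost from one generation to the next.
        """
        pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]      # selection: top half survives
            children = []
            for p in parents:
                # each gene mutates with probability mut_rate, by a small Gaussian step
                child = [g + random.gauss(0, mut_rate) if random.random() < mut_rate else g
                         for g in p]
                children.append(child)
            pop = parents + children            # next generation
        return max(pop, key=fitness)

    # A smooth stand-in fitness landscape: distance from an all-ones "target"
    # environment (higher, i.e. closer to zero, is fitter).
    def target_fitness(genome):
        return -sum((g - 1.0) ** 2 for g in genome)

    random.seed(0)  # fixed seed so the run is reproducible
    best = evolve(target_fitness)
    ```

    Even this crude hill-climbing version reliably drives the population close to the optimum of a smooth landscape; the essay’s point is that nature has run a vastly larger version of this loop, so one should not be surprised if its solutions are near-optimal for their own environment, however strange they look to a macroscale engineer.
    
    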

    So what, then, are the right lessons for nanotechnology to learn from biology? The design principles that biology uses most effectively are those that exploit the special features of physics at the nanoscale in an environment of liquid water. These include some highly effective uses of self-assembly, using the hydrophobic interaction, and the principle of macromolecular shape change that underlies allostery, used both for mechanical transduction and for sensing and computing. Self-assembly, of course, is well known both in the laboratory and in industrial processes like soap-making, but synthetic examples remain very crude compared to the intricacy of protein folding. For industrial applications, biological nanotechnology offers inspiration in the area of green chemistry – promising environmentally benign processing routes to make complex, nanostructured materials based on water as a solvent and using low operating temperatures. The use of templating strategies and precursor routes widens the scope of these approaches to include final products which are insoluble in water.

    But even the most enthusiastic proponents of the biological approach to nanotechnology must concede that there are branches of nanoscale engineering that biology does not seem to exploit very fully. There are few examples of the use of coherent electron transport over distances greater than a few nanometres. Some transmembrane processes, particularly those involved in photosynthesis, do exploit electron transfer down finely engineered cascades of molecules. But until the recent discovery of electron conduction in bacterial pili, longer-ranged electrical effects in biology seemed to be dominated by ionic rather than electronic transport. Speculations that coherent quantum states in microtubules underlie consciousness are not mainstream, to say the least, so a physicist who insists on the central role of quantum effects in nanotechnology finds biology somewhat barren.

    It’s clear that there is more than one way to apply the lessons of biology to nanotechnology. The most direct route is that of bionanotechnology, in which the components of living systems are removed from their biological context and put to work in hybrid environments. Many examples of this approach (which NYU’s Ned Seeman has memorably called biokleptic nanotechnology) are now in the literature, using biological nanodevices such as molecular motors or photosynthetic complexes. In truth, the newly emerging field of synthetic biology, in which functionality is added back in a modular way to a stripped down host organism, is applying this philosophy at the level of systems rather than devices.

    This kind of synthetic biology is informed by what’s essentially an engineering sensibility – it is sufficient to get the system to work in a predictable and controllable way. Some physicists, though, might want to go further, taking inspiration from Richard Feynman’s slogan “What I cannot create I do not understand”. Will it be possible to have a biomimetic nanotechnology, in which the design philosophy of cell biology is applied to the creation of entirely synthetic components? Such an approach will be formidably difficult, requiring substantial advances both in the synthetic chemistry needed to create macromolecules with precisely specified architectures, and in the theory that will allow one to design molecular architectures that will yield the structure and function one needs. But it may have advantages, particularly in broadening the range of environmental conditions in which nanosystems can operate.

    The right lessons for nanotechnology to learn from biology might not always be the obvious ones, but there’s no doubting their importance. Can the traffic ever go the other way – will there be lessons for biology to learn from nanotechnology? It seems inevitable that the enterprise of doing engineering with nanoscale biological components must lead to a deeper understanding of molecular biophysics. I wonder, though, whether there might not be some deeper consequences. What separates the two extreme positions on the relevance of biology to nanotechnology is a difference in opinion on the issue of the degree to which our biology is optimal, and whether there could be other, fundamentally different kinds of biology, possibly optimised for a different set of environmental parameters. It may well be a vain expectation to imagine that a wholly synthetic nanotechnology could ever match the performance of cell biology, but even considering the possibility represents a valuable broadening of our horizons.