Soft machines and robots

Robots is a website featuring regular podcasts about various aspects of robotics; it currently features an interview with me by Sabine Hauert, from EPFL’s Laboratory of Intelligent Systems. The interview was prompted by my talk at the IEEE Congress on Evolutionary Computation, which was essentially about how to build a nanobot. Regular readers of this blog will not be surprised to hear that a strong theme of both interview and talk is the need to take inspiration from biology when designing “soft machines”, which need to be optimised for the special, and to us very unfamiliar, physics of the nanoworld, rather than built on inappropriate design principles derived from macroscopic engineering. For more on this, the interested reader might like to take a look at my earlier essay, “Right and wrong lessons from biology”.

Accelerating evolution in real and virtual worlds

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computation. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draw inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specified appropriate behaviours for all these possibilities, one could use evolution to select a robot controller that worked best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution; something like a computer game environment is used to simulate a robot doing a simple task like picking up an object or recognising a shape, with success or failure being used as input in a fitness function, through which the robot controller is allowed to evolve.
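To make this concrete, here is a minimal sketch of the kind of evolutionary loop involved, written in Python. Everything in it is invented for illustration – in particular, the evaluate function simply scores a parameter vector against an arbitrary target, standing in for the game-style simulation of a robot attempting its task.

```python
import random

def evaluate(controller):
    """Stand-in for a simulated trial: in a real system this would run the
    robot (or its simulation) and return a score for how well it performed
    the task. Here we just reward closeness to an arbitrary target."""
    target = [0.5] * len(controller)
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

def mutate(controller, rate=0.1):
    """Return a copy of a controller with small random perturbations."""
    return [c + random.gauss(0, rate) for c in controller]

def evolve(pop_size=50, n_params=8, generations=100):
    # Start from a random population of candidate controllers.
    population = [[random.uniform(0, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness, keep the best fifth, refill with mutated copies.
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:pop_size // 5]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=evaluate)

best_controller = evolve()
```

In practice the interesting decisions – how to encode the controller, how to design a fitness function that rewards the behaviour you actually want, how much mutation to allow – are where most of the effort goes.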

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grandmaster, humans still have a big edge on computers in arcade games. The human high score for Ms Pac-Man is 921,360; in a competition at the 2008 IEEE CEC meeting the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge for computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than virtual, world – this came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. This was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work that is due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the pigment that makes tomatoes red (and probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen the variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial-scale production of materials like spider silk, and in medicine the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the most prominent US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of these. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology with constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.

There is one metaphor that is used a lot in the report which seems to me to be potentially problematic – that’s the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems which have their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract, subscription required for full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley, at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and detect the different bases one by one by changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole directly, he uses an enzyme to chop a single base off the end of the DNA; as each base passes through the pore, the characteristic change in current is distinctive enough to establish its chemical identity. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
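In outline, the read-out step then amounts to matching each measured current blockade against the characteristic level for each of the four bases. The sketch below is purely illustrative – the current values are invented placeholders, not figures from the paper – but it shows the logic of calling a sequence from a series of current levels.

```python
# Illustrative characteristic residual currents (pA) for each base.
# These numbers are invented; real values come from calibrating the pore.
CHARACTERISTIC_CURRENT_PA = {"A": 50.0, "C": 44.0, "G": 40.0, "T": 35.0}

def call_base(measured_pa):
    """Assign a measured blockade current to the nearest characteristic level."""
    return min(CHARACTERISTIC_CURRENT_PA,
               key=lambda base: abs(CHARACTERISTIC_CURRENT_PA[base] - measured_pa))

def call_sequence(current_trace):
    """current_trace holds one mean blockade current per cleaved nucleotide,
    in the order in which the bases pass through the pore."""
    return "".join(call_base(level) for level in current_trace)

print(call_sequence([49.7, 35.2, 44.1, 40.3]))   # -> "ATCG"
```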

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – this itself is an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract, subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company – Complete Genomics – plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realisation that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery, by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule, mean that proteins can form the components of logic gates, which themselves can be linked together to form biochemical circuits. These information processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely just how significant the information processing capacity of a single cell can be.

The paper – Emergent decision-making in biological signal transduction networks (abstract, subscription required for full article in PNAS), comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is construct a large scale, realistic model of a cell signalling network in a generic eukaryotic cell. To do this, they’ve mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification they reduce the complexities of the biochemistry to simple Boolean logic – the node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing the interactions of that node with other proteins. For some more complicated cases, a single protein may be represented by more than one node, expressing the fact that there may be a number of different modified states.
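The flavour of such a model can be captured in a few lines of code. The toy network below is invented purely for illustration – the real model has around 130 nodes with truth tables mined from the literature – but it shows the basic machinery: each node is on or off, and its next state is a Boolean function of the current states of the nodes that act on it.

```python
# A toy Boolean signalling network, updated synchronously.
# The wiring here is made up; it is not the Helikar et al. network.
rules = {
    "receptor":    lambda s: s["growth_factor"],                  # sensor node
    "kinase":      lambda s: s["receptor"] and not s["phosphatase"],
    "phosphatase": lambda s: s["calcium"],
    "response":    lambda s: s["kinase"],                         # output node
}

def step(state):
    # Every node reads the *current* state and all nodes update at once.
    return {**state, **{node: rule(state) for node, rule in rules.items()}}

def run(inputs, n_steps=20):
    state = {"receptor": False, "kinase": False,
             "phosphatase": False, "response": False, **inputs}
    for _ in range(n_steps):
        state = step(state)
    return state["response"]

print(run({"growth_factor": True,  "calcium": False}))   # -> True
print(run({"growth_factor": False, "calcium": True}))    # -> False
```

Synchronous updating of every node from the same current state is the simplest choice; it keeps the behaviour deterministic for a given set of inputs, which makes it easy to map combinations of inputs onto the small set of outcomes they produce.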

This model of the cell takes in information from the outside world; sensors at the cell membrane measure the external concentrations of growth factors and extracellular matrix proteins, and calcium levels. This is the input to the cell’s information processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics; a modern microprocessor will contain several hundred million transistors, every one of which needs to be manufactured to very high tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that make up most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases are folded into a unique three-dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in an article in Science magazine in 2003 called Molecular Prodigality (PDF version from Bray’s own website). Protein sequences can be chopped and changed, after the DNA code has been read, by processes of RNA editing and splicing and other types of post-translational modification, and these can lead to distinct changes in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing in up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities means that there are 4.5 × 10^15 subtly different possible types of potassium channel. This isn’t an isolated example; Bray estimates that up to half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
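One way to see where a figure of this size comes from – this is my reconstruction of the arithmetic rather than something spelled out in the passage above – is to note that each channel assembles from four subunits; if each subunit can independently carry any combination of the 13 edits, the numbers multiply up to roughly 4.5 × 10^15:

```python
# Back-of-the-envelope check on the combinatorics. The assumption that the
# channel is a tetramer of independently edited subunits is mine, made to
# illustrate how quickly such numbers grow.
edit_sites_per_subunit = 13
subunits_per_channel = 4

variants_per_subunit = 2 ** edit_sites_per_subunit            # 8,192
variants_per_channel = variants_per_subunit ** subunits_per_channel

print(f"{variants_per_subunit:,} subunit variants")
print(f"{variants_per_channel:.2e} distinct channel assemblies")  # ~4.5e+15
```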

It isn’t at all clear what all this variation is for, if anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true for the case of the adaptive immune system. A human has the ability to make 10^12 different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens that we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.

Will nanotechnology lead to a truly synthetic biology?

This piece was written in response to an invitation from the management consultants McKinsey to contribute to a forthcoming publication discussing the potential impacts of biotechnology in the coming century. This is the unedited version, which is quite a lot longer than the version that will be published.

The discovery of an alien form of life would be the discovery of the century, with profound scientific and philosophical implications. Within the next fifty years, there’s a serious chance that we’ll make this discovery, not by finding life on a distant planet or indeed by such aliens visiting us on earth, but by creating this new form of life ourselves. This will be the logical conclusion of using the developing tools of nanotechnology to develop a “bottom-up” version of synthetic biology, which, instead of rearranging and redesigning the existing components of “normal” biology, as currently popular visions of synthetic biology propose, uses the inspiration of biology to synthesise entirely novel systems.

Life on earth is characterised by a stupendous variety of external forms and ways of life. To us, it’s the differences between mammals like us and insects, trees and fungi that seem most obvious, while there’s a vast variety of other unfamiliar and invisible organisms that are outside our everyday experience. Yet, underneath all this variety there’s a common set of components that underlies all biology. There’s a common genetic code, based on the molecule DNA, and in the nanoscale machinery that underlies the operation of life, based on proteins, there are remarkable continuities between organisms that on the surface seem utterly different. That all life is based on the same type of molecular biology – with information stored in DNA, transcribed through RNA to be materialised in the form of machines and enzymes made out of proteins – reflects the fact that all the life we know about has evolved from a common ancestor. Alien life is a staple of science fiction, of course, and people have speculated for many years that if life evolved elsewhere it might well be based on an entirely different set of basic components. Do developments of nanotechnology and synthetic biology mean that we can go beyond speculation to experiment?

Certainly, the emerging discipline of synthetic biology is currently attracting excitement and foreboding in equal measure. It’s important to realise, though, that in the most extensively promoted visions of synthetic biology today, what’s proposed isn’t making entirely new kinds of life. Rather than aiming to make a new type of wholly synthetic alien life, what is proposed is to radically re-engineer existing life forms. In one vision, it is proposed to identify in living systems independent parts or modules that could be reassembled to achieve new, radically modified organisms that can deliver some desired outcome, for example synthesising a particularly complicated molecule. In one important example of this approach, researchers at Lawrence Berkeley National Laboratory developed a strain of E. coli that synthesises a precursor to artemisinin, a potent (and expensive) anti-malarial drug. In a sense, this field is a reaction to the discovery that genetic modification of organisms is more difficult than previously thought; rather than being able to get what one wants from an organism by altering a single gene, one often needs to re-engineer entire regulatory and signalling pathways. In these complex processes, protein molecules – enzymes – essentially function as molecular switches, which respond to the presence of other molecules by initiating further chemical changes. It’s become commonplace to make analogies between these complex chemical networks and electronic circuits, and in this analogy this kind of synthetic biology can be thought of as the wholesale rewiring of the (biochemical) circuits which control the operation of an organism. The well-publicised proposals of Craig Venter are even more radical – his project is to create a single-celled organism that has been slimmed down to have only the minimal functions consistent with life, and then to replace its genetic material with a new, entirely artificial, genome created in the lab from synthetic DNA. The analogy used here is that one is “rebooting” the cell with a new “operating system”. Dramatic as this proposal sounds, though, the artificial life-form that would be created would still be based on the same biochemical components as natural life. It might be synthetic life, but it’s not alien.

So what would it take to make a synthetic life-form that was truly alien? It seems difficult to argue that this wouldn’t be possible in principle – as we learn more about the details of the way cell biology works, we can see that it is intricate and marvellous, but in no sense miraculous – it’s based on machinery that operates on principles consistent with the way we know physical laws operate on the nano-scale. These principles, it should be said, are very different to the ones that underlie the sorts of engineering we are used to on the macro-scale; nanotechnologists have a huge amount to learn from biology. But we are already seeing very crude examples of synthetic nanostructures and devices that use some of the design principles of biology – designed molecules that self-assemble to make molecular bags that resemble cell membranes; pores that open and close to let molecules in and out of these enclosures; molecules that recognise other molecules and respond by changes in shape. It’s quite conceivable to imagine these components being improved and integrated into systems. One could imagine a proto-cell, with pores controlling the traffic of molecules in and out of it, containing a network of molecules and machines that together add up to a metabolism, taking in energy and chemicals from the environment and using them to make the components needed for the system to maintain itself, grow and perhaps reproduce.

Would such a proto-cell truly constitute an artificial alien life-form? The answer to this question, of course, depends on how we define life. But experimental progress in this direction will itself help answer this thorny question, or at least allow us to pose it more precisely. The fundamental problem we have when trying to talk about the properties of life in general is that we only know about a single example. Only when we have some examples of alien life will it be possible to talk about the general laws, not of biology, but of all possible biologies. The quest to make artificial alien life will also teach us much about the origins of our kind of life. Experimental research into the origins of life is an attempt to rerun the emergence of our kind of life in the early history of the earth – in effect, an attempt to create artificial alien life from those molecules that can plausibly be argued to have been present on the early earth. Using nanotechnology to make a functioning proto-cell should be an easier task than this, as we don’t have to restrict ourselves to the kinds of materials that were naturally occurring on the early earth.

Creating artificial alien life would be a breathtaking piece of science, but it’s natural to ask whether it would have any practical use. The selling point of the currently most popular visions of synthetic biology is that they will permit us to do difficult chemical transformations in much more effective ways – making hydrogen from sunlight and water, for example, or making complex molecules for pharmaceutical uses. Conventional life, including the modifications proposed by synthetic biology, operates only in a restricted range of environments, so it’s possible to imagine that one could make a type of alien life that operated in quite different environments – at high temperatures, or in liquid metals, for example – opening up entirely different types of chemistry. These utilitarian considerations, though, pale in comparison to what would be implied more broadly if we made a technology that had a life of its own.

A shadow biosphere?

Where are we most likely to find truly alien life? The obvious (though difficult) place to look is on another planet or moon, whether that’s under the icy crust of Europa, near the poles of Mars, or, perhaps, on one of the planets we’re starting to discover orbiting distant stars. Alternatively, we might be able to make alien life for ourselves, through the emerging discipline of bottom-up synthetic biology. But what if alien life is to be found right under our noses, right here on earth, forming a kind of shadow biosphere? This provocative and fascinating hypothesis has been suggested by philosopher Carol Cleland and biologist Shelley Copley, both from the University of Colorado, Boulder, in their article “The possibility of alternative microbial life on Earth” (PDF, International Journal of Astrobiology 4, pp. 165-173, 2005).

The obvious objection to this suggestion is that if such alien life existed, we’d have noticed it by now. But, if it did exist, how would we know? We’d be hard pressed to find it simply by looking under a microscope – alien microbial life, if its basic units were structured on the micro- or nano-scale, would be impossible to distinguish by appearance alone from the many forms of normal microbial life, or for that matter from all sorts of structures formed by inorganic processes. One of the surprises of modern biology is the huge number of new kinds of microbes that are discovered when, instead of relying on culturing microbes to identify them, one directly amplifies and sequences their nucleic acids. But suppose there exists a class of life-forms whose biochemistry fundamentally differs from the system based on nucleic acids and proteins that all “normal” life depends on – life-forms whose genetic information is coded in a fundamentally different way. There’s a strong assumption that early in the ancestry of our current form of biology, before the evolution of the current DNA-based genetic code, a simpler form of life must have existed. So if descendants of this earlier form of life still exist on the earth, or if life on earth emerged more than once and some of the alternative versions still exist, detection methods that assume that life must involve nucleic acids will not help us at all. Just as, before the development of the polymerase chain reaction as a tool for detecting unculturable microbes, we were able to detect only a tiny fraction of the microbes that surround us, it’s all too plausible that if alien life did exist around us we would not currently be able to detect it.

To find such alien life would be the scientific discovery of the century. We’d like to be able to make statements about life in general – how it is to be defined, what the general laws are, not of biology but of all possible biologies, and, perhaps, how one might design and build new types of life. But we find it difficult to do this at the moment, as we only know about one type of life and it’s hard to generalise from a single example. Even if it didn’t succeed, the effort of seriously looking for alien life on earth would be hugely rewarding in forcing us to broaden our notions of the various, very different, manifestations that life might take.

From micro to nano for medical applications

I spent yesterday at a meeting at the Institute of Mechanical Engineers, Nanotechnology in Medicine and Biotechnology, which raised the question of what is the right size for new interventions in medicine. There’s an argument that, since the basic operations of cell biology take place on the nano-scale, that’s fundamentally the right scale for intervening in biology. On the other hand, given that many current medical interventions are very macroscopic, operating on the micro-scale may already offer compelling advantages.

A talk from Glasgow University’s Jon Cooper gave some nice examples illustrating this. His title was Integrating nanosensors with lab-on-a-chip for biological sensing in health technologies, and he began with some true nanotechnology. This involved a combination of fluid-handling systems for very small volumes with nanostructured surfaces, with the aim of detecting single biomolecules. This depends on a remarkable effect known as surface enhanced Raman scattering. Raman scattering is a type of spectroscopy that can detect chemical groups, normally with rather low sensitivity. But if one illuminates a metal surface that has very sharp asperities, the light field very close to the surface is hugely magnified, increasing sensitivity by a factor of ten million or so. Systems based on this effect, using silver nanoparticles coated so that pathogens like anthrax will stick to them, are already in commercial use. But Cooper’s group uses, not free nanoparticles, but very precisely structured nanosurfaces. Using electron beam lithography, his group creates silver split-ring resonators – horseshoe shapes about 160 nm across. With a very small gap one can get field enhancements of a factor of one hundred billion, and it’s this that brings single-molecule detection into prospect.

On a larger scale, Cooper described systems to probe the response of single cells – his example involved using a single heart cell (a cardiomyocyte) to screen responses to potential heart drugs. This involved a picolitre-scale microchamber adjacent to an array of micron-sized thermocouples, which allow one to monitor the metabolism of the cell as it responds to a drug candidate. His final example was on the millimetre scale, though its sensors incorporated nanotechnology at some level. This was a wireless device incorporating an electrochemical blood sensor – the idea was that one would swallow this to screen for early signs of bowel cancer. Here’s an example where, obviously, smaller would be better, but how small does one need to go?

What the public think about nanomedicine

A major new initiative on the use of nanotechnology in medicine and healthcare has recently been launched by the UK government’s research councils; around £30 million (US$60 million) is expected to be available for large scale “Grand Challenge” style projects. The closing date for the first call has just gone by, so we will see in a few months how the research community has responded to this opportunity. What’s worth commenting on now, though, is the extent to which public engagement has been integrated into the process by which the call has been defined.

As the number of potential applications of nanotechnology to healthcare is very large, and the funds available relatively limited, there was a need to focus the call on just one or two areas; in the end the call is for applications of nanotechnology in healthcare diagnostics and the targeted delivery of therapeutic agents. As part of the program of consultations with researchers, clinicians and industry people that informed the decision to focus the call in this way, a formal public engagement exercise was commissioned to get an understanding of the hopes and fears the public have about the potential use of nanotechnology in medicine and healthcare. The full report on this public dialogue has just been published by EPSRC, and this is well worth reading.

I’ll be writing in more detail later both about the specific findings of the dialogue, and about the way the results of this public dialogue were incorporated into the decision-making process. Here, I’ll just draw out three points from the report:

  • As has been found by other public engagement exercises, there is a great deal of public enthusiasm for the potential uses of nanotechnology in healthcare, and a sense that this is an application that needs to be prioritised over some others.
  • People value potential technologies that empower them to have more control over their own health and their own lives, while potential technologies that reduce their sense of control are viewed with more caution.
  • People have concerns about who benefits from new technologies – while people generally see nothing intrinsically wrong with business driving nanotechnology, there’s a concern about ensuring that public investment in science delivers appropriate public value.