The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, matching colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as the title for a project which forms part of Sheffield University’s Festival of the Mind, for which six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he makes one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
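
To make Maxwell’s point concrete, here is a minimal numerical sketch of trichromacy. The Gaussian sensitivity curves and test spectra below are invented for illustration (they are not real cone pigment data), but the structure of the calculation is the point: the eye reduces a whole spectrum to just three numbers, and any two spectra that give the same three numbers produce the same colour sensation.

import numpy as np

# Wavelength grid across the visible spectrum (nm)
wavelengths = np.linspace(380, 700, 321)

def gaussian(peak, width):
    return np.exp(-((wavelengths - peak) ** 2) / (2.0 * width ** 2))

# Three toy receptor types with different peak sensitivities (not real cone data)
sensitivities = {"S": gaussian(440, 40), "M": gaussian(540, 40), "L": gaussian(570, 40)}

def receptor_responses(spectrum):
    """Reduce a full spectrum to three numbers - all that the eye passes on to the brain."""
    return {name: round(float(np.trapz(s * spectrum, wavelengths)), 1)
            for name, s in sensitivities.items()}

# Metamerism: a single narrow band of "yellow" light and a suitable red+green mixture
# give nearly the same response triplet, so they look the same.
print(receptor_responses(gaussian(575, 10)))
print(receptor_responses(1.23 * gaussian(620, 10) + 0.52 * gaussian(545, 10)))

# Magenta: red+blue light stimulates S and L together while leaving M relatively weak,
# a triplet that no single wavelength reproduces - which is why the rainbow has no magenta.
print(receptor_responses(gaussian(450, 10) + gaussian(640, 10)))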

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures in beetle scales and bird feathers that produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects produced by structures that combine order and randomness in a rather distinctive way; our hope is to understand the mechanism by which these structures form and then to reproduce them in synthetic systems.
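
For the ordered, iridescent limit at least, the basic physics can be written in one line: a periodic stack of two materials reflects most strongly at the wavelength for which reflections from successive layer pairs add up in phase. A minimal sketch of that first-order condition follows; the refractive indices and thicknesses are illustrative guesses, not measurements, and the partially disordered structures we actually study need a much more elaborate treatment.

def multilayer_peak_nm(n1, d1, n2, d2, order=1):
    """Wavelength (nm) of peak reflection for a periodic two-layer stack at normal
    incidence: reflections add in phase when 2*(n1*d1 + n2*d2) = order * wavelength."""
    return 2.0 * (n1 * d1 + n2 * d2) / order

# Illustrative values only: chitin-like layers (n ~ 1.56) alternating with air gaps,
# each about 100 nm thick, reflect most strongly in the green.
print(multilayer_peak_nm(n1=1.56, d1=100.0, n2=1.0, d2=100.0))  # ~512 nm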

What we’ve come to realise, as we speculate about the origin of these biological mechanisms, is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour, and that perception is likely to be quite alien to our own. Birds, for example, have not three different types of colour receptor, as humans do, but four. This means not just that birds can detect light outside the human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our x-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

Your mind will not be uploaded

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme that could, in the future, be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have on the simulation of consciousness.
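
To give a rough feel for the scale argument in that summary, here is a deliberately crude back-of-the-envelope estimate. Every number below apart from the neuron count quoted above is an order-of-magnitude assumption of mine (synapses per neuron, update rates, operations per update, molecules per synapse), so treat the outputs as nothing more than an indication of how quickly the requirements escalate once the molecular level matters.

# Crude order-of-magnitude comparison of brain simulation at two levels of description.
# All figures below are rough assumptions for illustration, not measured values.
neurons = 1e11                  # ~100 billion neurons, as quoted above
synapses_per_neuron = 1e4       # commonly quoted order of magnitude
ops_per_update = 10             # assume a handful of arithmetic operations per synapse update
updates_per_second = 1e3        # assume an effective ~kHz update rate

synapse_level = neurons * synapses_per_neuron * ops_per_update * updates_per_second
print(f"synapse-level simulation: ~{synapse_level:.0e} operations per second")

# If the relevant state really lives at the molecular level, each synapse hides a
# large and constantly rearranging collection of molecules whose state must be tracked.
molecules_per_synapse = 1e6     # assumed order of magnitude
molecular_level = synapse_level * molecules_per_synapse
print(f"molecular-level simulation: ~{molecular_level:.0e} operations per second")
# ~1e19 versus ~1e25: the second figure is many orders of magnitude beyond exascale computing.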

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1-2 billion being discussed if they decide to take the company public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumable cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of their programme to develop a whole family of different pores able to discriminate between different types of molecules.
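
To make the read-out principle concrete, here is a toy sketch of how a current trace becomes a sequence. This is emphatically not Oxford Nanopore’s algorithm or data: the blockade levels, the noise and the one-base-at-a-time simplification are all invented (a real device has to disentangle the signal from several bases sitting in the pore at once), but it shows why pairing a ratcheting enzyme with a discriminating pore turns sequencing into a signal-processing problem.

import random

# Invented mean blockade currents (arbitrary units) for each base - illustrative only.
LEVELS = {"A": 60.0, "C": 52.0, "G": 45.0, "T": 38.0}
NOISE = 2.0  # assumed current noise (same units)

def simulate_trace(sequence, samples_per_base=20):
    """Generate a noisy current trace as the enzyme ratchets DNA through the pore."""
    trace = []
    for base in sequence:
        trace += [random.gauss(LEVELS[base], NOISE) for _ in range(samples_per_base)]
    return trace

def call_bases(trace, samples_per_base=20):
    """Naive base caller: average each dwell segment, pick the nearest known level."""
    calls = []
    for i in range(0, len(trace), samples_per_base):
        segment = trace[i:i + samples_per_base]
        mean = sum(segment) / len(segment)
        calls.append(min(LEVELS, key=lambda b: abs(LEVELS[b] - mean)))
    return "".join(calls)

true_seq = "GATTACAGATTACA"
print(call_bases(simulate_trace(true_seq)) == true_seq)  # usually True at this noise level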

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology). Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will yet be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

A little history of bionanotechnology and nanomedicine

I wrote this piece as a briefing note in connection with a study being carried out by the Nuffield Council on Bioethics about Emerging Biotechnologies. I’m not sure whether bionanotechnology or nanomedicine should be considered as emerging biotechnologies, but this is an attempt to sketch out the connections.

Nanotechnology is not a single technology; instead it refers to a wide range of techniques and methods for manipulating matter on length scales from a nanometre or so – i.e. the typical size of molecules – to hundreds of nanometres, with the aim of creating new materials and functional devices. Some of these methods represent the incremental evolution of well-established techniques of applied physics, chemistry and materials science. In other cases, the techniques are at a much earlier stage, with promises about their future power being based on simple proof-of-principle demonstrations.

Although nanotechnology has its primary roots in the physical sciences, it has always had important relationships with biology, both at the rhetorical level and in practical outcomes. The rhetorical relationship derives from the observation that the fundamental operations of cell biology take place at the nanoscale, so one might expect there to be something particularly powerful about interventions in biology that take place on this scale. Thus the idea of “nanomedicine” has been prominent in the promises made on behalf of nanotechnology from its earliest origins, and as a result has entered popular culture in the form of the exasperating but ubiquitous image of the “nanobot” – a robot vessel on the nano- or micro-scale, able to navigate through a patient’s bloodstream and effect cell-by-cell repairs. This was mentioned as a possibility in Richard Feynman’s 1959 lecture, “There’s Plenty of Room at the Bottom”, which is widely (though retrospectively) credited as the founding manifesto of nanotechnology, but it was already at this time a common device in science fiction. The frequency with which conventionally credentialed nanoscientists have argued that this notion is impossible or impracticable, at least as commonly envisioned, has had little effect on the enduring hold it has on the popular imagination.

Three things that Synthetic Biology should learn from Nanotechnology

I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

1. Mind that metaphor
Metaphors in science are powerful and useful things, but they come with two dangers:
a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules, expression operating systems. But it is only a metaphor; biology isn’t really digital and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that most people’s experience of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about the economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants; the media demand big and unqualified claims to attract their attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement, can have the effect of giving credence to the most speculative possible outcomes.

There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up with unfulfilled promises, and in this environment people are less forgiving of the reality of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for producing a biofuel, a new method of pest control, for example.

3. It’s not about risk, it’s about trust

The regulation of new technologies is focused on controlling risks, and it’s important that we try and identify and control those risks as the technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that the conversation turns. But often, it isn’t really risk that is fundamentally worrying people, but trust. In the face of the inevitable uncertainties with new technologies, this makes complete sense. If you can’t be confident in identifying risks in advance, the question you naturally ask is whether the bodies and institutions that are controlling these technologies can be trusted. It must be a priority, then, that we think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly be helpful, but we have to ask whether it is realistic for these principles alone to be maintained in an environment demanding commercial returns from large scale industrial operations.

On Descartes and nanobots

A couple of weeks ago I was interviewed for the Robots podcast special on 50 years of robotics, and predictions for the next half century. My brief was nanorobots, and you can hear the podcast here. My pitch was that on the nanoscale we’d be looking to nature for inspiration, exploiting design principles such as self-assembly and macromolecular shape change; as a particularly exciting current development I singled out progress in DNA nanotechnology, and in particular the possibility of using this to do molecular logic. As it happens, last week’s edition of Nature included two very interesting papers reporting further developments in this area – Molecular robots guided by prescriptive landscapes from Erik Winfree’s group at Caltech, and A proximity-based programmable DNA nanoscale assembly line from Ned Seeman’s group at NYU.

The context and significance of these advances are well described in a News and Views article (full text); the references to nanorobots and nanoscale assembly lines have led to considerable publicity. James Hayton (who reads the Daily Mail so the rest of us don’t have to) comments very pertinently in his 10e-9 blog on the misleading use of classical nanobot imagery to illustrate this story. The Daily Mail isn’t the only culprit here – even the venerable Nature uses a still from the film Fantastic Voyage to illustrate their story, with the caption “although such machines are still a fantasy, molecular ‘robots’ made of DNA are under development.”

What’s wrong with these illustrations is that they are graphic representations of bad metaphors. DNA nanotechnology falls squarely in the soft nanotechnology paradigm – it depends on the weak interactions by which complementary sequences are recognised to enable the self-assembly of structures whose design is coded within the component molecules themselves, and on macromolecular shape changes under the influence of Brownian motion to effect motion. Soft machines aren’t mechanical engineering shrunk, as I’ve written about at length on this blog and elsewhere.

But there’s another, more subtle point here. Our classical conception of a robot is something with sensors feeding information into a central computer, which responds to this sensory input by a computation, which is then effected by the communication of commands to the actuators that drive the robot’s actions. This separation of the “thinking” function of the robot from its sensing and action is something that we find very appealing; we are irresistibly drawn to the analogy with the way we have come to think about human beings since Descartes – as machines animated by an intelligence largely separate from our bodies.

What is striking about these rudimentary DNA robots is that what “intelligence” they possess – their capacity to sense the environment and process this information to determine which of a limited set of outcomes will be effected – arises from the molecules from which the robot is made and their interaction with a (specially designed) environment. There’s no sense in which the robot’s “program” is loaded into it; the program is implicit in the construction of the robot and its interaction with the environment. In this robot, “thought” and “action” are inseparable; the same molecules both store and process information and drive its motion.
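
A toy model makes the point about where the “program” lives. In the sketch below (my own illustration, not the kinetics of either Nature paper) the walker takes purely random steps, but each substrate site it visits is consumed and no longer binds it, so its motion is rectified by the changing state of the track rather than by any controller: the behaviour is written into the molecules and their environment.

import random

def spider_walk(track_length=20, steps=5000, seed=1):
    """Toy 'burnt-bridges' walker on a 1-D track of substrate sites.

    At each tick the walker tries a random step left or right (pure Brownian motion),
    but a site it has already visited is cleaved and no longer binds it, so backward
    steps fail: directed motion emerges from the track's state, not from a controller.
    """
    random.seed(seed)
    cleaved = [False] * track_length
    position = 0
    cleaved[0] = True
    for _ in range(steps):
        trial = position + random.choice([-1, 1])
        if 0 <= trial < track_length and not cleaved[trial]:
            position = trial
            cleaved[position] = True
        if position == track_length - 1:
            break
    return position

print(spider_walk())  # the walker ends up at the far end of the track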

In this, these proto-robots operate on similar general principles to bacteria, whose considerable information processing power arises from the interaction of many individual molecules with each other and with their physical environment (as beautifully described in Dennis Bray’s book Wetware: a computer in every living cell). Is this the only way to build a nanobot with the capacity to process and act on information about the environment? I’m not sure, but for the moment it seems to be the direction we’re moving in.

Targeted delivery of siRNA by nanoparticles in humans

An important milestone in the use of nanoparticles to deliver therapeutic molecules is reported in this week’s Nature – full paper (subscription required), editor’s summary. See also this press release. The team, led by Mark Davis from Caltech, used polymer nanoparticles to deliver small interfering RNA (siRNA) molecules into tumour cells in humans, with the aim of preventing the growth of these tumours.

I wrote in more detail about siRNA back in 2005 here. If one can introduce the appropriate siRNA molecules into a cell, they can selectively turn off the expression of any gene in that cell’s genome, potentially giving us a new class of powerful drugs which would be an absolutely specific treatment both for viral diseases and cancers. When I last wrote about this subject, it was clear that the problem of delivering these small strands of RNA to their target cells was going to be a major barrier to fulfilling the promise of this very exciting new technology. In this paper, we see that substantial progress has been made towards overcoming this barrier. In this study the RNA was incorporated in self-assembled polymer nanoparticles, the surfaces of which were decorated with groups that selectively bind to proteins that are found on the surfaces of the tumour cells being targeted.

The experiments were carried out as part of a phase 1 clinical trial on humans. What the Nature paper shows is that the nanoparticles do indeed accumulate at tumour cells and are incorporated within them (see the micrograph below), and that the siRNA does suppress the synthesis of the particular protein at which it is aimed, a protein which is necessary for the growth of the tumour. If this trial doesn’t demonstrate unacceptable harmful effects, further clinical trials will be needed to demonstrate whether the therapy works clinically to arrest the growth of these tumours.

Targeted nanoparticles carrying therapeutic siRNA molecules entering a tumor cell - Caltech/Swaroop Mishra

Soft machines and robots

Robots is a website featuring regular podcasts about various aspects of robotics; currently it’s featuring a podcast of an interview with me by Sabine Hauert, from EPFL’s Laboratory of Intelligent Systems. This was prompted by my talk at the IEEE Congress on Evolutionary Computing, which essentially was about how to build a nanobot. Regular readers of this blog will not be surprised to hear that a strong theme of both interview and talk is the need to take inspiration from biology when designing “soft machines”, which need to be optimised for the special, and to us very unfamiliar, physics of the nanoworld, rather than using inappropriate design principles derived from macroscopic engineering. For more on this, the interested reader might like to take a look at my earlier essay, “Right and wrong lessons from biology”.

Accelerating evolution in real and virtual worlds

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computing. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draws inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specified appropriate behaviours for all these possibilities, one could use evolution to select a robot controller that worked best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution; something like a computer game environment is used to simulate a robot doing a simple task like picking up an object or recognising a shape, with success or failure being used as input in a fitness function, through which the robot controller is allowed to evolve.
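
Stripped of the robot-simulation details, the loop in those talks is the standard evolutionary one: evaluate a population of candidate controllers, keep the fittest, and breed the next generation by mutation and crossover. Here is a generic sketch, with a trivial invented fitness function standing in for the expensive step of actually running each controller in a simulated environment.

import random

GENOME_LENGTH = 16     # e.g. the tunable parameters of a robot controller
POPULATION = 50
GENERATIONS = 100
MUTATION_RATE = 0.1

def fitness(genome):
    """Stand-in for running a controller in a simulated environment and scoring it."""
    return -sum((g - 0.7) ** 2 for g in genome)   # toy objective: higher is better

def mutate(genome):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.05))) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(GENOME_LENGTH)] for _ in range(POPULATION)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)       # evaluate and rank
    parents = population[:POPULATION // 5]           # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION - len(parents))]
    population = parents + children                  # keep the elite, add new offspring

population.sort(key=fitness, reverse=True)
print(f"best fitness after {GENERATIONS} generations: {fitness(population[0]):.4f}")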

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grand master, humans still have a big edge on computers in arcade games. The human high-score for Ms Pac-Man is 921,360; in a competition in the 2008 IEEE CEC meeting the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge to computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than virtual, world – this came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. This was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work that is due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the pigment that makes tomatoes red (and probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen these variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.
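
The logic of that experiment can be caricatured in a few lines of code: build a large library of random combinations of candidate modifications, screen a manageable sample, and keep the best performer. Everything numerical below is invented for illustration (in particular the per-site “effects”), and the real achievement, of course, is the molecular biology that makes generating and screening such a library fast; the scoring function here merely stands in for measuring lycopene production.

import random

random.seed(0)
SITES = 24                                                   # number of candidate modifications
effect = [random.uniform(-0.2, 0.4) for _ in range(SITES)]   # invented per-site effects

def production_score(variant):
    """Stand-in for measuring the lycopene yield of one engineered strain."""
    return sum(e for e, modified in zip(effect, variant) if modified)

# A combinatorial library: each variant either keeps or modifies each of the 24 sites.
library = [tuple(random.random() < 0.5 for _ in range(SITES)) for _ in range(100_000)]
best = max(library, key=production_score)
print(f"best of 100,000 screened: score {production_score(best):.2f}, "
      f"{sum(best)} of {SITES} sites modified")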

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high impact applications in the short term – within five to ten years, we’re told, we’ll see synbio based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial scale production of materials like spider silk, and in medicine the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the most prominent US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of these. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology with constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.
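
The feedback analogy, at least, does rest on well-understood mathematics. As an illustration (a sketch with invented parameters, crudely integrated, and not tied to any particular published circuit), here is the classic two-repressor “toggle switch”: each protein represses the other’s production, and the mutual repression gives two stable states, which is exactly the kind of behaviour the digital metaphors lean on.

# Toy model of a mutual-repression "toggle switch", a standard example of how feedback
# gives cell-level logic. Parameters are invented and the integration is a crude Euler
# scheme; this is not any specific published parameterisation.
def toggle_switch(u0, v0, alpha=10.0, n=2, dt=0.01, t_end=50.0):
    u, v = u0, v0
    for _ in range(int(t_end / dt)):
        du = alpha / (1.0 + v ** n) - u    # repressor U, repressed by V
        dv = alpha / (1.0 + u ** n) - v    # repressor V, repressed by U
        u, v = u + du * dt, v + dv * dt
    return u, v

# Two different starting conditions settle into two different stable states:
# the circuit "remembers" a bit, which is the basis of the digital metaphor.
print(toggle_switch(u0=5.0, v0=0.1))   # ends with U high, V low
print(toggle_switch(u0=0.1, v0=5.0))   # ends with V high, U low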

There is one metaphor that is used a lot in the report which seems to me to be potentially problematic – that’s the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems which have their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”