Discussion meeting on soft nanotechnology

A forthcoming conference in London will discuss the “soft” approach to nanotechnology. The meeting – Faraday Discussion 143 – Soft Nanotechnology – is organised by the UK’s Royal Society of Chemistry, and follows a rather unusual format. Selected participants submit a full research paper, which is peer reviewed and circulated to all the attendees before the meeting. The meeting itself concentrates on a detailed discussion of the papers, rather than a simple presentation of the results.

The organisers describe the scope of the meeting in these terms: “Soft nanotechnology aims to build on our knowledge of biological systems, which are the ultimate example of ‘soft machines’, by:

  • Understanding, predicting and utilising the rules of self-assembly from the molecular to the micron-scale
  • Learning how to deal with the supply of energy into dynamically self-assembling systems
  • Implementing self-assembly and ‘wet chemistry’ into electronic devices, actuators, fluidics, and other ‘soft machines’.”

An impressive list of invited international speakers includes Takuzo Aida, from the University of Tokyo, Chris Dobson, from the University of Cambridge, Ben Feringa, from the University of Groningen, Olli Ikkala, from Helsinki University of Technology, Chengde Mao, from Purdue University, Stefan Matile, from the University of Geneva, and Klaus J Schulten, from the University of Illinois. The conference will be wrapped up by Harvard’s George Whitesides, and I’m hugely honoured to have been asked to give the opening talk.

    The meeting is not until this time next year, but if you want to present a paper you need to get an abstract in by 11 July. Faraday Discussions have in the past featured some lively exchanges, to say the least; it’s a format that’s tailor-made for allowing controversies to be aired and strong positions to be taken.

    Right and wrong lessons from biology

    The most compelling argument for the possibility of a radical nanotechnology, with functional devices and machines operating at the nanoscale, is the existence of cell biology. But one can take different lessons from this. Drexler argued that we should expect to be able to do much better than cell biology if we applied the lessons of macroscale engineering, using mechanical engineering paradigms and hard materials. My argument, though, is that this fails to take into account the different physics of the nanoscale, and that evolution has optimised biology’s “soft machines” for this environment. This essay, first published in the journal Nature Nanotechnology (subscription required, vol. 1, pp. 85–86, 2006), reflects on this issue.

    Nanotechnology hasn’t yet acquired a strong disciplinary identity, and as a result it is claimed by many classical disciplines. “Nanotechnology is just chemistry”, one sometimes hears, while physicists like to think that only they have the tools to understand the strange and counterintuitive behaviour of matter at the nanoscale. But biologists have perhaps the most reason to be smug – in the words of MIT’s Tom Knight “biology is the nanotechnology that works”.

    The sophisticated and intricate machinery of cell biology certainly gives us a compelling existence proof that complex machines on the nanoscale are possible. But, having accepted that biology proves that one form of nanotechnology is possible, what further lessons should be learned? There are two extreme positions, and presumably a truth that lies somewhere in between.

    The engineers’ view, if I can put it that way, is that nature shows what can be achieved with random design methods and a palette of unsuitable materials allocated by the accidents of history. If you take this point of view, it seems obvious that it should be fairly straightforward to make nanoscale machines whose performance vastly exceeds that of biology, by making rational choices of materials, rather than making do with what the accidents of evolution have provided, and by using the design principles we’ve learnt in macroscopic engineering.

    The opposite view stresses that evolution is an extremely effective way of searching parameter space, and that, in consequence, we should assume that biological design solutions are likely to be close to optimal for the environment for which they’ve evolved. Where these design solutions seem odd from our point of view, their unfamiliarity is to be ascribed to the different ways in which physics works at the nanoscale. At its most extreme, this view regards biological nanotechnology not just as the existence proof for nanotechnology, but as an upper limit on its capabilities.

    So what, then, are the right lessons for nanotechnology to learn from biology? The design principles that biology uses most effectively are those that exploit the special features of physics at the nanoscale in an environment of liquid water. These include some highly effective uses of self-assembly, using the hydrophobic interaction, and the principle of macromolecular shape change that underlies allostery, used both for mechanical transduction and for sensing and computing. Self-assembly, of course, is well known both in the laboratory and in industrial processes like soap-making, but synthetic examples remain very crude compared to the intricacy of protein folding. For industrial applications, biological nanotechnology offers inspiration in the area of green chemistry – promising environmentally benign processing routes to make complex, nanostructured materials, based on water as a solvent and using low operating temperatures. The use of templating strategies and precursor routes widens the scope of these approaches to include final products which are insoluble in water.

    But even the most enthusiastic proponents of the biological approach to nanotechnology must concede that there are branches of nanoscale engineering that biology does not seem to exploit very fully. There are few examples of the use of coherent electron transport over distances greater than a few nanometres. Some transmembrane processes, particularly those involved in photosynthesis, do exploit electron transfer down finely engineered cascades of molecules, but until the recent discovery of electron conduction in bacterial pili, longer-ranged electrical effects in biology seemed to be dominated by ionic rather than electronic transport. Speculations that coherent quantum states in microtubules underlie consciousness are not mainstream, to say the least, so a physicist who insists on the central role of quantum effects in nanotechnology finds biology somewhat barren.

    It’s clear that there is more than one way to apply the lessons of biology to nanotechnology. The most direct route is that of bionanotechnology, in which the components of living systems are removed from their biological context and put to work in hybrid environments. Many examples of this approach (which NYU’s Ned Seeman has memorably called biokleptic nanotechnology) are now in the literature, using biological nanodevices such as molecular motors or photosynthetic complexes. In truth, the newly emerging field of synthetic biology, in which functionality is added back in a modular way to a stripped-down host organism, is applying this philosophy at the level of systems rather than devices.

    This kind of synthetic biology is informed by what’s essentially an engineering sensibility – it is sufficient to get the system to work in a predictable and controllable way. Some physicists, though, might want to go further, taking inspiration from Richard Feynman’s slogan “What I cannot create I do not understand”. Will it be possible to have a biomimetic nanotechnology, in which the design philosophy of cell biology is applied to the creation of entirely synthetic components? Such an approach will be formidably difficult, requiring substantial advances both in the synthetic chemistry needed to create macromolecules with precisely specified architectures, and in the theory that will allow one to design molecular architectures that will yield the structure and function one needs. But it may have advantages, particularly in broadening the range of environmental conditions in which nanosystems can operate.

    The right lessons for nanotechnology to learn from biology might not always be the obvious ones, but there’s no doubting their importance. Can the traffic ever go the other way – will there be lessons for biology to learn from nanotechnology? It seems inevitable that the enterprise of doing engineering with nanoscale biological components must lead to a deeper understanding of molecular biophysics. I wonder, though, whether there might not be some deeper consequences. What separates the two extreme positions on the relevance of biology to nanotechnology is a difference of opinion about the degree to which our biology is optimal, and about whether there could be other, fundamentally different kinds of biology, possibly optimised for a different set of environmental parameters. It may well be a vain expectation to imagine that a wholly synthetic nanotechnology could ever match the performance of cell biology, but even considering the possibility represents a valuable broadening of our horizons.

    Synthetic biology – summing up the debate so far

    The UK’s research council for biological sciences, the BBSRC, has published a nice overview of the potential ethical and social dimensions to the development of synthetic biology. The report – Synthetic biology: social and ethical challenges (737 KB PDF) – is by Andrew Balmer & Paul Martin at the University of Nottingham’s Institute for Science and Society.

    The different and contested definitions and visions that people have for synthetic biology are identified at the outset; the authors distinguish between four rather different conceptions of synthetic biology. There’s the Venter approach, consisting of taking a stripped-down organism with a minimal genome and building desired functions into that. The identification of modular components and the genetic engineering of whole pathways forms a second, but related, approach. Both of these visions of synthetic biology still rely on the re-engineering of existing DNA-based life; a more ambitious, but much less completely realised, programme for synthetic biology attempts to make wholly artificial cells from non-biological molecules. A fourth strand, which seems less far-reaching in its ambitions, attempts to make novel biomolecules by mimicking the post-transcriptional modification of proteins that is such a source of variety in biology.

    What broader issues are likely to arise from this enterprise? The report identifies five areas to worry about. There are the potential problems and dangers of the uncontrolled release of synthetic organisms into the biosphere; the worry that these techniques could be misused to create new pathogens for bioterrorism; the potential for the creation of monopolies through an unduly restrictive patenting regime; and implications for trade and global justice. Most far-reaching of all, of course, are the philosophical and cultural implications of creating artificial life, with its connotations of transgressing the “natural order”, and the problems of defining the meaning and significance of life itself.

    The recommended prescriptions fall into a well-rehearsed pattern – the need for early consideration of governance and regulation, and the desirability of carrying the public along through early public engagement and resisting the temptation to overhype the potential applications of the technology. As ever, dialogue between scientists and civil society groups, ethicists and social scientists is recommended – a dialogue which, the authors think, will only be credible if there is a real possibility that some lines of research would be abandoned if they were considered too ethically problematic.

    Aliens from inner space? The strange story of the “nanobacteria” that probably weren’t.

    How small are the smallest living organisms? There seem to be many types of bacteria of 300 nm and upwards in diameter, but a common rule of thumb among microbiologists is that if something can get through a 0.2 µm (200 nm) filter, it isn’t alive. Thus the discovery of so-called “nanobacteria”, with sizes between 50 nm and 200 nm, in the human bloodstream, and their putative association with a growing number of pathological conditions such as kidney stones and coronary artery disease, has been controversial. Finnish scientist Olavi Kajander, the discoverer of “nanobacteria”, presents the evidence that these objects are a hitherto undiscovered form of bacterial life in a contribution to a 1999 National Academies workshop on the size limits of very small organisms. But two recent papers give strong evidence that “nanobacteria” are simply naturally formed inorganic nanoparticles.

    In the first of these papers, Nanobacteria Are Mineralo Fetuin Complexes, in the February 2008 issue of PLoS Pathogens, Didier Raoult, Patricio Renesto and their coworkers from Marseilles report a comprehensive analysis of “nanobacteria” cultured in calf serum. Their results show that “nanobacteria” are nanoparticles, predominantly of the mineral hydroxyapatite, associated with proteins, particularly a serum protein called fetuin. Crucially, though, they failed to find definitive evidence that the “nanobacteria” contained any DNA. In the absence of DNA, these objects cannot be bacteria. Instead, these authors say they are “self-propagating mineral-fetuin complexes that we propose to call “nanons.””

    A more recent article, in the April 8 2008 edition of PNAS, Purported nanobacteria in human blood as calcium carbonate nanoparticles (abstract, subscription required for full article), casts further doubt on the nanobacteria hypothesis. These authors, Jan Martel and John Ding-E Young, from Chang Gung University in Taiwan and Rockefeller University, claim to be able to reproduce nanoparticles indistinguishable from “nanobacteria” simply by combining chemicals which precipitate calcium carbonate – chalk – in cell culture medium. Some added human serum is needed in the medium, suggesting that blood proteins are required to produce the characteristic “nanobacteria” morphology rather than a more conventional crystal form.

    So, it seems the case is closed… “nanobacteria” are nothing more than naturally occurring, inorganic nanoparticles, in which the precipitation and growth of simple inorganic compounds such as calcium carbonate is modified by the adsorption of biomolecules at the growing surfaces, to give particles with the appearance of very small single-celled organisms. These natural nanoparticles may or may not have relevance to some human diseases. This conclusion does leave a more general question in my mind, though. It’s clear that the presence of nucleic acids is a powerful way of detecting hitherto unknown microorganisms, and the absence of nucleic acids here is powerful evidence that these nanoparticles are not in fact bacteria. But it’s possible to imagine a system that is alive, at least by some definitions, yet has a system of replication that does not depend on DNA at all. Graham Cairns-Smith’s book Seven Clues to the Origin of Life offers some thought-provoking possibilities for systems of this kind as precursors to life on earth, and exobiologists have contemplated the possibility of non-DNA-based life on other planets. If some kind of primitive life without DNA, perhaps based on some kind of organic/inorganic hybrid system akin to Cairns-Smith’s proposal, did exist on earth today, we would be quite hard-pressed to detect it. I make no claim that these “nanobacteria” represent such a system, but the long controversy over their true nature does make it clear that deciding whether a system is living or abiotic, in the absence of evidence from nucleic acids, could be quite difficult.

    Watching an assembler at work

    The only software-controlled molecular assembler we know about is the ribosome – the biological machine that reads the sequence of bases on a strand of messenger RNA, and, converting this genetic code into a sequence of amino acids, synthesises the protein molecule that corresponds to the gene whose information was transferred by the RNA. An article in this week’s Nature (abstract, subscription required for full paper, see also this editor’s summary) describes a remarkable experimental study of the way the RNA molecule is pulled through the ribosome as each step of its code is read and executed. This experimental tour-de-force of single molecule biophysics, whose first author is Jin-Der Wen, comes from the groups of Ignacio Tinoco and Carlos Bustamante at Berkeley.

    The experiment starts by tethering a strand of RNA between two micron-size polystyrene beads. One bead is held firm on a micropipette, while the other bead is held in an optical trap – the point at which a highly focused laser beam has its maximum intensity. The central part of the RNA molecule is twisted into a single hairpin, and the ribosome binds to the RNA just to one side of this hairpin. As the ribosome reads the RNA molecule, it pulls the hairpin apart, and the resulting lengthening of the RNA strand is directly measured from the change in position of the anchoring bead in its optical trap. What’s seen is a series of steps – the ribosome moves about 2.7 nm in about a tenth of a second, then pauses for a couple of seconds before making another step.

    This distance corresponds exactly to the size of the triplet of bases that represents a single character of the genetic code – the codon. What we are seeing, then, is the ribosome pausing on a codon to read it, before pulling the tape through to read the next character. What we don’t see in this experiment, though we know it’s happening, is the addition of a single amino acid to the growing protein chain during this read step. This takes place by means of the binding to the RNA codon, within the ribosome, of a shorter strand of RNA – the transfer RNA – to which the amino acid is attached. What the experiment does make clear is that the operation of this machine is by no means mechanical and regular. The times taken for the ribosome to move from the reading position for one codon to the next – the translocation times – are fairly tightly distributed around an average value of around 0.08 seconds, but the dwell times on each codon vary from a fraction of a second up to a few seconds. Occasionally the ribosome stops entirely for a few minutes.
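
    As a rough illustration of what such a record looks like, here is a minimal sketch in Python that generates an idealised stepping trace: 2.7 nm advances separated by variable pauses, plus a fixed translocation time. The step size and translocation time are the figures quoted above; the exponential distribution of dwell times, and its 2-second mean, are assumptions made purely for illustration, not a claim about the kinetics actually measured in the paper.

```python
import random

STEP_NM = 2.7           # extension change per codon read (figure quoted above)
TRANSLOCATION_S = 0.08  # typical time for one translocation step (figure quoted above)
MEAN_DWELL_S = 2.0      # assumed mean pause on each codon, for illustration only

def simulate_trace(n_codons=20, seed=1):
    """Return (time, extension) points for an idealised ribosome stepping record."""
    random.seed(seed)
    t, x = 0.0, 0.0
    trace = [(t, x)]
    for _ in range(n_codons):
        t += random.expovariate(1.0 / MEAN_DWELL_S)  # dwell while the codon is read
        trace.append((t, x))
        t += TRANSLOCATION_S                          # rapid 2.7 nm translocation
        x += STEP_NM
        trace.append((t, x))
    return trace

if __name__ == "__main__":
    for t, x in simulate_trace():
        print(f"{t:7.2f} s   {x:5.1f} nm")
```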

    This experiment is far from the final word on the way ribosomes operate. I can imagine, for example, that people are going to be making strenuous efforts to attach a probe directly to the ribosome, rather than, as was done here, inferring its motion from the location of the end of the RNA strand. But it’s fascinating to have such a direct probe of one of the most central operations of biology. And for those attempting the very ambitious task of creating a synthetic analogue of a ribosome, these insights will be invaluable.

    The right size for nanomedicine

    One reason nanotechnology and medicine potentially make a good marriage is that the size of nano-objects is very much on the same length scale as the basic operations of cell biology; nanomedicine, therefore, has the potential to make direct interventions on living systems at the sub-cellular level. A paper in the current issue of Nature Nanotechnology (abstract, subscription required for full article) gives a very specific example, showing that the size of a drug–nanoparticle assembly directly affects how effectively the drug controls cell growth and death in tumour cells.

    In this work, the authors bound a drug molecule to a nanoparticle, and looked at the way the size of the nanoparticle affected the interaction of the drug with receptors on the surface of target cells. The drug was Herceptin, a protein molecule which binds to a receptor molecule called ErbB2 on the surface of cells from human breast cancer. Cancerous cells have too many of these receptors, and this disrupts the signalling between cells that tells a cell whether to grow or marks it for apoptosis – programmed cell death. What the authors found was that Herceptin attached to gold nanoparticles was more effective than the free drug at binding to the receptors; this then led to reduced growth rates for the treated tumour cells. But how well the effect works depends strongly on how big the nanoparticles are – the best results are found for nanoparticles 40 or 50 nm in size, with 100 nm nanoparticles being barely more effective than the free drug.

    What the authors think is going on is connected to the process of endocytosis, by which nanoscale particles can be engulfed by the cell membrane. Very small nanoparticles typically have only one Herceptin molecule attached, so they behave much like the free drug – one nanoparticle binds to one receptor. 50 nm nanoparticles have a number of Herceptin molecules attached, so a single nanoparticle links together a number of receptors, and the entire complex, nanoparticle and receptors, is engulfed by the cell and taken out of the cell signalling process completely. 100 nm nanoparticles are too big to be engulfed, so only the fraction of the attached drug molecules in contact with the membrane can bind to receptors. A commentary (subscription required) by Mauro Ferrari sets this achievement in context, pointing out that a nanodrug needs to do four things: successfully navigate through the bloodstream, negotiate any biological barriers that prevent it from getting where it needs to go, locate the cell that is its target, and then modify the pathological cellular processes that underlie the disease being treated. We already know that nanoparticle size is hugely important for the first three of these requirements, but this work directly connects size to the sub-cellular processes that are the target of nanomedicine.
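
    A rough geometric estimate makes the multivalency part of this argument concrete. The sketch below simply compares the surface area of a particle with an assumed footprint of about 100 nm² per antibody – an illustrative figure of my own, not one taken from the paper – to suggest why a few-nanometre particle can carry only a single Herceptin molecule, while a 50 nm particle can carry tens of them.

```python
import math

FOOTPRINT_NM2 = 100.0  # assumed area occupied by one antibody on the surface (illustrative)

def max_ligands(diameter_nm):
    """Crude upper bound on the number of antibodies coating a sphere of the given diameter."""
    surface_area = math.pi * diameter_nm ** 2  # sphere surface area, pi * d^2
    return surface_area / FOOTPRINT_NM2

for d in (5, 10, 40, 50, 100):
    print(f"{d:4d} nm particle: room for roughly {max_ligands(d):6.1f} antibodies")
```

    On this estimate the smallest particles are necessarily monovalent, which is exactly the regime in which they behave like the free drug.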

    Drew Endy on Engineering Biology

    Martyn Amos draws our attention to a revealing interview with MIT’s Drew Endy about the future of synthetic biology. While Craig Venter has up to now monopolised the headlines about synthetic biology, Endy has an original and thought-provoking take on the subject.

    Endy is quite clear about his goals: “The underlying goal of synthetic biology is to make biology easy to engineer.” In pursuing this, he looks to the history of engineering, recognising the importance of things like interchangeable parts and standard screw gauges, and seeks a similar library of modular components for biological systems. Of course, this approach must take for granted that when components are put together they behave in predictable ways: “Engineers hate complexity. I hate emergent properties. I like simplicity. I don’t want the plane I take tomorrow to have some emergent property while it’s flying.” Quite right, of course, but since many suspect that life itself is an emergent property one could wonder how much of biology will be left after you’ve taken the emergence out.

    Many people will have misgivings about the synthetic biology enterprise, but Endy is an eloquent proponent of the benefits of applying hacker culture to biology: “Programming DNA is more cool, it’s more appealing, it’s more powerful than silicon. You have an actual living, reproducing machine; it’s nanotechnology that works. It’s not some Drexlarian (Eric Drexler) fantasy. And we get to program it. And it’s actually a pretty cheap technology. You don’t need a FAB Lab like you need for silicon wafers. You grow some stuff up in sugar water with a little bit of nutrients. My read on the world is that there is tremendous pressure that’s just started to be revealed around what heretofore has been extraordinarily limited access to biotechnology.”

    His answer to societal worries about the technology, then, is confidence in the power of open source ideals: common ownership of the intellectual property rather than corporate monopoly, and an assurance that an open technology will automatically be applied to solve pressing societal problems.

    There are legitimate questions about this vision of synthetic biology, both as to whether it is possible and whether it is wise. But to get some impression of the strength of the driving forces pushing this way, take a look at this recent summary of trends in DNA synthesis and sequencing. “Productivity of DNA synthesis technologies has increased approximately 7,000-fold over the past 15 years, doubling every 14 months. Costs of gene synthesis per base pair have fallen 50-fold, halving every 32 months.” Whether this leads to synthetic biology in the form anticipated by Drew Endy, the breakthrough into the mainstream of DNA nanotechnology, or something quite unexpected, it’s difficult to imagine this rapid technological development not having far-reaching consequences.
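
    Those quoted doubling and halving times are, incidentally, consistent with the headline factors – a quick back-of-the-envelope check (mine, not from the summary itself):

```python
# Consistency check of the quoted trends over a 15-year (180-month) period.
months = 15 * 12

productivity_gain = 2 ** (months / 14)  # doubling every 14 months
cost_reduction = 2 ** (months / 32)     # halving every 32 months

print(f"Implied productivity gain: ~{productivity_gain:,.0f}-fold (quoted: ~7,000-fold)")
print(f"Implied cost reduction:    ~{cost_reduction:,.0f}-fold (quoted: ~50-fold)")
```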

    Grand challenges for UK nanotechnology

    The UK’s Engineering and Physical Sciences Research Council introduced a new strategy for nanotechnology last year, and some of the new measures proposed are beginning to come into effect (including, of course, my own appointment as the Senior Strategic Advisor for Nanotechnology). Just before Christmas the Science Minister announced the funding allocations for research for the next few years. Nanotechnology is one of six priority programmes that cut across all the Research Councils (to be precise, the cross-council programme has the imposing title: Nanoscience through Engineering to Application).

    One strand of the strategy involves the funding of large scale integrated research programmes in areas where nanotechnology can contribute to issues of pressing societal or economic need. The first of these Grand Challenges – in the area of using nanotechnology to enable cheap, efficient and scalable ways to harvest solar energy – was launched last summer. An announcement on which proposals will be funded will be made within the next few months.

    The second grand challenge will be launched next summer, and it will be in the general area of nanotechnology for healthcare. This is a very broad theme, of course – I discussed some of the potential areas, which include devices for delivering drugs and for rapid diagnosis, in an earlier post. To narrow the area down, there’s going to be an extensive process of consultation with researchers and people in the relevant industries – for details, see the EPSRC website. There’ll also be a role for public engagement; EPSRC is commissioning a citizens’ jury to consider the options and have an input into the decision of what area to focus on.

    Delivering genes

    Gene therapy holds out the promise of correcting a number of diseases whose origin lies in the deficiency of a particular gene. Given our growing knowledge of the human genome, and our ability to synthesise arbitrary sequences of DNA, one might think that the introduction of new genetic material into cells to remedy the effects of abnormal genes would be straightforward. This isn’t so. DNA is a relatively delicate molecule, and organisms have evolved efficient mechanisms for finding and eliminating foreign DNA. Viruses, on the other hand, whose entire modus operandi is to introduce foreign nucleic acids into cells, have evolved effective ways of packaging their payloads of DNA or RNA and delivering them into cells. One approach to gene therapy co-opts viruses to deliver the new genetic material, though this sometimes has unpredicted and undesirable side-effects. So an effective, non-viral method of wrapping up DNA, introducing it into target cells and releasing it would be very desirable. My colleagues at Sheffield University, led by Beppe Battaglia, have recently demonstrated an effective and elegant way of introducing DNA into cells, in work reported in the journal Advanced Materials (subscription required for full paper).

    The technique is based on the use of polymersomes, which I’ve described here before. Polymersomes are bags formed when detergent-like polymer molecules self-assemble to form a membrane which folds round on itself to make a closed surface. They are analogous to the cell membranes of biology, which are formed from soap-like molecules called phospholipids, and to the liposomes that can be made in the laboratory from the same materials. Liposomes are already used to wrap up and deliver molecules in some commercial applications, including some drug delivery systems and some expensive cosmetics. They’ve also been used in the laboratory to deliver DNA into cells, though they aren’t ideal for this purpose, as they aren’t very robust. Block copolymers allow a great deal more flexibility in designing polymersomes with the properties one needs, and this flexibility is exploited to the full in Battaglia’s experiments.

    To make a polymersome, one needs a block copolymer – a polymer with two or three chemically distinct sections joined together. One of these blocks needs to be hydrophobic, and one needs to be hydrophilic. The block copolymers used here, developed and synthesised in the group of Sheffield chemist Steve Armes, have two very nice features. The hydrophilic section is composed of poly(2-(methacryloyloxy)ethyl phosphorylcholine) – a synthetic polymer that presents the same chemistry to the adjoining solution as a naturally occurring phospholipid in a cell membrane. This means that polymersomes made from this material are able to circulate undetected within the body for longer than those made from other water-soluble polymers. The hydrophobic block is poly(2-(diisopropylamino)ethyl methacrylate). This is a weak base, so its state of ionisation depends on the acidity of the solution. In a basic solution it is un-ionised, and in this state it is strongly hydrophobic, while in an acidic solution it becomes charged, and in this state it is much more soluble in water. This means that polymersomes made from this material will be stable in neutral or basic conditions, but will fall apart in acid. Conversely, if one has the polymers in an acidic solution, together with the DNA one wants to deliver, and then neutralises the solution, polymersomes will spontaneously form, encapsulating the DNA.
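
    The switching behaviour can be pictured with the Henderson–Hasselbalch relation: the fraction of a weak base’s groups that carry a charge at a given pH is 1/(1 + 10^(pH − pKa)). The short sketch below uses an illustrative pKa of 6.5 for the tertiary amine block – an assumed value for the purpose of the sketch, not a measurement from the paper – to show how the membrane-forming block goes from largely uncharged near physiological pH to largely charged, and hence water-soluble, under mildly acidic conditions.

```python
def protonated_fraction(pH, pKa=6.5):
    """Henderson-Hasselbalch: fraction of a weak base's groups that are charged at a given pH."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# pKa = 6.5 is an assumed, illustrative value for the tertiary amine block.
for pH in (5.0, 6.0, 6.5, 7.4):
    print(f"pH {pH:3.1f}: {100 * protonated_fraction(pH):5.1f}% of amine groups charged")
```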

    The way these polymersomes work to introduce DNA into cells is sketched in the diagram below. On encountering a cell, the polymersome triggers the process of endocytosis, whereby the cell engulfs the polymersome in a little piece of cell membrane that is pinched off inside the cell. It turns out that the solution inside these endosomes is significantly more acidic than the surroundings, and this triggers the polymersome to fall apart, releasing its DNA. This, in turn, generates an osmotic pressure sufficient to burst open the endosome, releasing the DNA into the cell interior, where it is free to make its way to the nucleus.

    The test of the approach is to introduce a section of DNA into a cell and then see how effectively the corresponding gene is expressed. The DNA used in these experiments was the gene that codes for a protein that fluoresces – the famous green fluorescent protein, GFP, originally obtained from certain jellyfish – making it easy to detect whether the protein coded for by the introduced gene has actually been made. In experiments using cultured human skin cells, the fraction of cells in which the new gene was introduced was very high, and few toxic effects were observed. This is in contrast to a control experiment using an existing, commercially available gene delivery system, which was both less effective at introducing genes and killed a significant fraction of the cells.

    Polymersome endocytosis
    A switchable polymersome as a vehicle for gene delivery. Beppe Battaglia, University of Sheffield.

    Venter in the Guardian

    The front page of yesterday’s edition of the UK newspaper the Guardian was, unusually, dominated by a science story: I am creating artificial life, declares US gene pioneer. The occasion for the headline was an interview with Craig Venter, who fed them a pre-announcement that his team had successfully managed to transplant a wholly synthetic genome into a stripped-down bacterium, replacing its natural genetic code with an artificial one. In the newspaper’s somewhat breathless words: “The Guardian can reveal that a team of 20 top scientists assembled by Mr Venter, led by the Nobel laureate Hamilton Smith, has already constructed a synthetic chromosome, a feat of virtuoso bio-engineering never previously achieved. Using lab-made chemicals, they have painstakingly stitched together a chromosome that is 381 genes long and contains 580,000 base pairs of genetic code.”

    We’ll see what, in detail, has been achieved when the work is properly published. It’s significant, though, that this story was felt to be important enough to occupy most of the front page of a major UK newspaper at a time of some local political drama. Craig Venter is visiting the UK later this month, so we can expect the current mood of excitement or foreboding around synthetic biology to continue for a while yet.