Synthetic biology – summing up the debate so far

The UK’s research council for biological sciences, the BBSRC, has published a nice overview of the potential ethical and social dimensions to the development of synthetic biology. The report – Synthetic biology: social and ethical challenges (737 KB PDF) – is by Andrew Balmer & Paul Martin at the University of Nottingham’s Institute for Science and Society.

The different and contested definitions and visions that people have for synthetic biology are identified at the outset; the authors distinguish between four rather different conceptions of synthetic biology. There’s the Venter approach, consisting of taking a stripped-down organism with a minimal genome, and building desired functions into that. The identification of modular components and the genetic engineering of whole pathways forms a second, but related, approach. Both of these visions of synthetic biology still rely on the re-engineering of existing DNA-based life; a more ambitious, but much less fully realised, programme for synthetic biology attempts to make wholly artificial cells from non-biological molecules. A fourth strand, which seems less far-reaching in its ambitions, attempts to make novel biomolecules by mimicking the post-transcriptional modification of proteins that is such a source of variety in biology.

What broader issues are likely to arise from this enterprise? The report identifies five areas to worry about. There are the potential problems and dangers of the uncontrolled release of synthetic organisms into the biosphere; the worry that these techniques might be misused to create new pathogens for bioterrorism; the potential for the creation of monopolies through an unduly restrictive patenting regime; and implications for trade and global justice. Most far-reaching of all, of course, are the philosophical and cultural implications of creating artificial life, with its connotations of transgressing the “natural order”, and the problems of defining the meaning and significance of life itself.

The recommended prescriptions follow a well-rehearsed pattern – the need for early consideration of governance and regulation, and the desirability of carrying the public along, through early public engagement and by resisting the temptation to overhype the potential applications of the technology. As ever, dialogue between scientists and civil society groups, ethicists and social scientists is recommended – a dialogue which, the authors think, will only be credible if there is a real possibility that some lines of research would be abandoned if they were considered too ethically problematic.

On expertise

Whose advice should we trust when we need to make judgements about difficult political questions with a technical component? Science sociologists Harry Collins and Robert Evans, of Cardiff University, believe that this question of expertise is the most important issue facing science studies at the moment. I review their book on the subject, Rethinking Expertise, in an article – Spot the physicist – in this month’s Physics World.

Coming soon (or not)

Are we imminently facing The Singularity? This is the hypothesised moment of technological transcendence, a concept introduced by mathematician and science fiction writer Vernor Vinge, in which accelerating technological change leads to a recursively self-improving artificial intelligence of superhuman capabilities, with literally unknowable and ineffable consequences. The most vocal proponent of this eschatology is Ray Kurzweil, whose promotion of the idea takes to the big screen this year with the forthcoming release of the film The Singularity is Near.

Kurzweil describes the run-up to The Singularity: “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it … Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity … and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.” Where will we go from here? To The Singularity – “We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it”. This will take place, according to Kurzweil, in the year 2045.

The film is to be a fast-paced documentary, but to leaven the interviews with singularitarian thinkers like Aubrey de Grey, Eric Drexler and Eliezer Yudkowsky, there’ll be a story-line to place the technology in social context. This follows the touching struggle of Kurzweil’s female, electronic alter ego, Ramona, to achieve full person-hood, foiling an attack of self-replicating nanobots on the way, before finally being coached to pass a Turing test by self-help guru Tony Robbins.

For those who might prefer a less dramatic discussion of The Singularity, IEEE Spectrum is running a special report on the subject in its June edition. According to a press release (via Nanowerk), “the editors invited articles from half a dozen people who have worked on and written about subjects central to the singularity idea in all its loopy glory. They encompass not just hardware and wetware but also economics, consciousness, robotics, nanotechnology, and philosophy.” One of those writers is me; my article, on nanotechnology, ended up with the title “Rupturing the Nanotech Rapture”.

We’ll need to wait a week or two to read the articles – I’m particularly looking forward to reading Christof Koch’s article on machine consciousness, and Alfred Nordmann’s argument against “technological fabulism” and the “baseless extrapolations” it rests on. Of course, even before reading the articles, transhumanist writer Michael Anissimov fears the worst.

Update. The IEEE Spectrum Singularity Special is now online, including my article, Rupturing the Nanotech Rapture. (Thanks to Steven for pointing this out in the comments).

Asbestos-like toxicity of some carbon nanotubes

It has become commonplace amongst critics of nanotechnology to compare carbon nanotubes to asbestos, on the basis that they are both biopersistent, inorganic fibres with a high aspect ratio. Asbestos is linked to a number of diseases, most notably the incurable cancer mesothelioma, of which there are currently 2000 new cases a year in the UK. A paper published in Nature Nanotechnology today, from Ken Donaldson’s group at the University of Edinburgh, provides the best evidence to date that some carbon nanotubes – specifically, multi-wall nanotubes longer than 20 µm or so – do lead to the same pathogenic effects in the mesothelium as asbestos fibres.

The basis of the toxicity of asbestos and other fibrous materials is now reasonably well understood; it derives from the physical nature of the fibres rather than their chemical composition. In particular, fibres are expected to be toxic if they are long – longer than about 20 µm – and rigid. The mechanism of this pathogenicity is believed to be related to frustrated phagocytosis. Phagocytes are the cells whose job it is to engulf and destroy intruders – when they detect a foreign body like a fibre, they attempt to engulf it, but are unable to complete this process if the fibre is too long and rigid. Instead they release a burst of toxic products, which have no effect on the fibre but instead cause damage to the surrounding tissues. There is every reason to expect this mechanism to be active for nanotubes which are sufficiently long and rigid.
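The criterion described above is simple enough to sum up in a few lines of code. This is a toy encoding of the rule of thumb, not anything from the paper: the ~20 µm threshold and the three attributes come from the text, while the function name and the sharp cut-off are my own simplifications.

```python
def frustrated_phagocytosis_risk(length_um, rigid, biopersistent):
    """Toy encoding of the fibre-toxicity rule of thumb: a fibre is
    flagged as a frustrated-phagocytosis risk if it is biopersistent,
    rigid, and longer than roughly 20 micrometres -- too long for a
    phagocyte to engulf completely."""
    LENGTH_THRESHOLD_UM = 20.0  # approximate threshold quoted in the text
    return biopersistent and rigid and length_um > LENGTH_THRESHOLD_UM

# Long, rigid, biopersistent fibres (long amosite asbestos, long
# multiwall nanotubes) fall on one side of the line; short or tangled
# fibres fall on the other.
print(frustrated_phagocytosis_risk(25.0, rigid=True, biopersistent=True))
print(frustrated_phagocytosis_risk(5.0, rigid=False, biopersistent=True))
```

In reality, of course, the boundary is nothing like this sharp, and rigidity is a matter of degree; the point is only that the pathogenicity criterion is physical, not chemical.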

Donaldson’s group tested the hypothesis that long carbon nanotubes would have a similar effect to asbestos by injecting nanotubes into the peritoneal cavity of mice, exposing the mesothelium directly to nanotubes and allowing the response to be monitored directly. This is a proven assay for the initial toxic effects of asbestos that subsequently lead to the cancer mesothelioma.

Four multiwall nanotube samples were studied. Two of these samples had long fibres – one was a commercial sample, from Mitsui, and another was produced in a UK academic laboratory. The other two samples had short, tangled fibres, and were commercial materials from NanoLab Inc, USA. The nanotubes were compared to two controls of long and short fibre amosite asbestos and one of non-fibrous, nanoparticulate carbon black. The two nanotube samples containing a significant fraction of long (>20 µm) nanotubes, together with the long-fibre amosite asbestos, produced a characteristic pathological response of inflammation, the production of foreign body giant cells, and the development of granulomas, a characteristic lesion. The nanotubes with short fibres, like the short fibre asbestos sample and the carbon black, produced little or no pathogenic effect. A number of other controls provide good evidence that it is indeed the physical form of the nanotubes rather than any contaminants that leads to the pathogenic effect.

The key finding, then, is that not all carbon nanotubes are equal when it comes to their toxicity. Long nanotubes produce an asbestos-like response, while short nanotubes and particulate graphene-like materials don’t produce this response. The experiments don’t directly demonstrate the development of the cancer mesothelioma, but it would be reasonable to suppose this would be the eventual consequence of the pathogenic changes observed.

The experiments do seem to rule out a role for other possible contributing factors (presence of metallic catalyst residues), but they do not address whether other mechanisms of toxicity might be important for short nanotubes.

Most importantly, the experiments do not say anything about issues of dose and exposure. In the experiments, the nanotubes were injected directly into the peritoneal cavity; to establish whether environmental or workplace exposure to nanotubes presents a danger, one needs to know how likely it is that realistic exposures to inhaled nanotubes would lead to enough nanotubes crossing from the lungs to the mesothelium to cause toxic effects. This is the most urgent question now waiting for further research.

It isn’t clear what proportion of the carbon nanotubes now being produced on industrial, or at least pilot plant, scale, would have the characteristics – particularly in their length – that would lead to the risk of these toxic effects. However, those nanotubes that are already in the market-place are mostly in the form of advanced composites, in which the nanotubes are tightly bound in a resin matrix, so it seems unlikely that these will pose an immediate danger. We need, with some urgency, research into what might happen to the nanotubes in such products over their whole lifecycle, including after disposal.

Lichfield lecture

Tomorrow I’m giving a public lecture in the Garrick Theatre, Lichfield, under the auspices of the Lichfield Science and Engineering Society. Non-members are welcome.

Lichfield is a small city in the English Midlands; it’s of ancient foundation, but in recent times has been eclipsed by the neighbouring industrial centres of Birmingham and the Black Country. Nonetheless, it should occupy at least an interesting footnote in the history of science and technology. It was the home of Erasmus Darwin, who deserves to be known for more than simply being the grandfather of Charles Darwin. Erasmus Darwin (1731 – 1802) was a doctor and polymath; his own original contributions to science were relatively slight, though his views on evolution prefigured in some ways those of his grandson. But he was at the centre of a remarkable circle of scientists, technologists and industrialists, the Lunar Society, who between them laid many of the foundations of modernity. Their members included the chemist Joseph Priestley, discoverer of oxygen; Josiah Wedgwood, whose ceramics factory developed many technical innovations; and Matthew Boulton and James Watt, who between them take much of the credit for the widespread industrial use of efficient steam power. In religion they were non-conformist – Priestley was a devout Unitarian who combined thoroughgoing materialism with a conviction that the millennium was not far away, while Erasmus Darwin verged close to atheism. Their politics was radical – dangerously so, at a time when the example of the American and French revolutions led to a climate of fear and repression in England.

The painting below depicts another travelling science lecturer demonstrating the new technology of the air pump to a society audience in the English Midlands. The painter, Joseph Wright, from Derby, was a friend of Erasmus Darwin, and the full moon visible through the window is probably a reference to the Lunar Society, many of whose members Wright was well acquainted with. Aside from its technical brilliance the painting captures both the conviction of some in those enlightenment times that public experimental demonstrations would provide a basis of agreed truth at a time of political and religious turbulence, and, perhaps, a suggestion that this knowledge was after all not without moral ambiguity.

An experiment on a bird in an air pump
An experiment on a bird in an air pump, by Joseph Wright, 1768. The original is in the National Gallery.

Aliens from inner space? The strange story of the “nanobacteria” that probably weren’t.

How small are the smallest living organisms? There seem to be many types of bacteria of 300 nm and upwards in diameter, but many microbiologists use the rule of thumb that if something can get through a 0.2 µm (200 nm) filter, it isn’t alive. Thus the discovery of so-called “nanobacteria”, with sizes between 50 nm and 200 nm, in the human blood-stream, and their putative association with a growing number of pathological conditions such as kidney stones and coronary artery disease, has been controversial. Finnish scientist Olavi Kajander, the discoverer of “nanobacteria”, presents the evidence that these objects are a hitherto undiscovered form of bacterial life in a contribution to a 1999 National Academies workshop on the size limits on very small organisms. But two recent papers give strong evidence that “nanobacteria” are simply naturally formed inorganic nanoparticles.

In the first of these papers, Nanobacteria Are Mineralo Fetuin Complexes, in the February 2008 issue of PLoS Pathogens, Didier Raoult, Patricio Renesto and their coworkers from Marseilles report a comprehensive analysis of “nanobacteria” cultured in calf serum. Their results show that “nanobacteria” are nanoparticles, predominantly of the mineral hydroxyapatite, associated with proteins, particularly a serum protein called fetuin. Crucially, though, they failed to find definitive evidence that the “nanobacteria” contained any DNA. In the absence of DNA, these objects cannot be bacteria. Instead, these authors say they are “self-propagating mineral-fetuin complexes that we propose to call “nanons.””

A more recent article, in the April 8 2008 edition of PNAS, Purported nanobacteria in human blood as calcium carbonate nanoparticles (abstract, subscription required for full article), casts further doubt on the nanobacteria hypothesis. These authors, Jan Martel and John Ding-E Young, from Chang Gung University in Taiwan and Rockefeller University, claim to be able to reproduce nanoparticles indistinguishable from “nanobacteria” simply by combining chemicals which precipitate calcium carbonate – chalk – in cell culture medium. Some added human serum is needed in the medium, suggesting that blood proteins are required to produce the characteristic “nanobacteria” morphology rather than a more conventional crystal form.

So, it seems the case is closed… “nanobacteria” are nothing more than naturally occurring, inorganic nanoparticles, in which the precipitation and growth of simple inorganic compounds such as calcium carbonate is modified by the adsorption of biomolecules at the growing surfaces to give particles with the appearance of very small single celled organisms. These natural nanoparticles may or may not have relevance to some human diseases. This conclusion does leave a more general question in my mind, though. It’s clear that the presence of nucleic acids is a powerful way of detecting hitherto unknown microorganisms, and the absence of nucleic acids here is powerful evidence that these nanoparticles are not in fact bacteria. But it’s possible to imagine a system that is alive, at least by some definitions, that has a system of replication that does not depend on DNA at all. Graham Cairns-Smith’s book Seven Clues to the Origin of Life offers some thought-provoking possibilities for systems of this kind as precursors to life on earth, and exobiologists have contemplated the possibility of non-DNA based life on other planets. If some kind of primitive life without DNA, perhaps based on some kind of organic/inorganic hybrid system akin to Cairns-Smith’s proposal, did exist on earth today, we would be quite hard-pressed to detect it. I make no claim that these “nanobacteria” represent such a system, but the long controversy over their true nature does make it clear that deciding whether a system is living or abiotic in the absence of evidence from nucleic acids could be quite difficult.

How to think about science studies

I’ve been passing my driving time recently listening to the podcasts of an excellent series from the Canadian Broadcasting Corporation, called How to think about science. It’s simply a series of long interviews with academics, generally from the field of science studies. I’ve particularly enjoyed the interviews with historian of science Simon Schaffer, sociologists Ulrich Beck and Brian Wynne, science studies guru Bruno Latour, and Evelyn Fox Keller, who has written some interesting books about some of the tacit philosophies underlying modern biology. With one or two exceptions, even those interviews with people I find less convincing still provided me with a few thought-provoking insights.

That strange academic interlude, the “science wars”, gets the occasional mention – this was the time when claims from science studies about the importance of social factors in the construction of scientific knowledge provoked a fierce counter-attack from people anxious to defend science against what they saw as an attack on its claims to objective truth. My perception is that the science wars ended in an armistice, though there are undoubtedly some people still holding out in the jungle, unaware that the war is over. Although the series is clearly presented from the science studies side of the argument, most contributors reflect the terms of the peace treaty, accepting the claims of science to be a way of generating perhaps uniquely reliable knowledge, while still insisting on the importance of the social in the way that knowledge is constructed, and criticising inappropriate ways of using scientific or pseudo-scientific arguments, models and metaphors in public discourse.

USA lagging Europe in nanotechnology risk research

How much resource is being devoted to assessing the potential risks of the nanotechnologies that are currently at or close to market? Not nearly enough, say campaigning groups, while governments, on the other hand, release impressive-sounding figures for their research spend. Most recently, the USA’s National Nanotechnology Initiative has estimated its 2006 spend on nano-safety research as $68 million, which sounds very impressive. However, according to Andrew Maynard, a leading nano-risk researcher based at the Woodrow Wilson Center in Washington DC, we shouldn’t take this figure at face value.

Maynard comments on the figure on the SafeNano blog, referring to an analysis he recently carried out, described in a news release from the Woodrow Wilson Center’s Project on Emerging Nanotechnologies. It seems that this figure is obtained by adding up all sorts of basic nanotechnology research, some of which might have only tangential relevance to problems of risk. If one applies a tighter definition of research that is either highly relevant to nanotechnology risk – such as a direct toxicology study – or substantially relevant – such as a study of the fate in the body of medical nanoparticles – the numbers fall substantially. Only $13 million of the $68 million was highly relevant to nanotechnology risk, with this number increasing to $29 million if the substantially relevant category is included too. This compares unfavourably with European spending, which amounts to $24 million in the highly relevant category alone.
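The arithmetic of this reclassification is worth making explicit; the figures below are those quoted above (2006, US$ millions), and the variable names are simply my own labels:

```python
# Figures quoted in the post; all in US$ millions for 2006.
us_total_claimed = 68.0    # NNI headline nano-safety spend
us_highly_relevant = 13.0  # e.g. direct toxicology studies
us_incl_substantial = 29.0 # highly + substantially relevant combined
eu_highly_relevant = 24.0  # European spend, highly relevant only

# Under the tighter definition, only about a fifth of the headline
# US figure survives, and the US trails Europe in the strictest category.
print(f"US highly relevant share: {us_highly_relevant / us_total_claimed:.0%}")
print(f"Highly relevant, US vs EU: ${us_highly_relevant:.0f}M vs ${eu_highly_relevant:.0f}M")
```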

Of course, it isn’t the headline figure that matters; what’s important is whether the research is relevant to the actual and potential risks that are out there. The Project on Emerging Nanotechnologies has done a great service by compiling an international inventory of nanotechnology risk research which allows one to see clearly just what sort of risk research is being funded across the world. It’s clear from this that suggestions that nanotechnology is being commercialised with no risk research at all being done are wide of the mark; what requires further analysis is whether all the right research is being done.

Molecular scale electronics from graphene

The remarkable electronic properties of graphene – single, one-atom thick, sheets of graphite – are highlighted in a paper in this week’s Science magazine, which demonstrates field-effect transistors exploiting quantum dots as small as 10 nm carved out of graphene. The paper is by Manchester University’s Andre Geim, the original discoverer of graphene, together with Kostya Novoselov and other coworkers (only the abstract is available without subscription from the Science website, but the full paper is available from Geim’s website (PDF)).

A quantum dot is simply a nanoscale speck of a conducting or semiconducting material, small enough that the electrons within it, behaving as quantum particles, are strongly affected by the way in which they are confined. What makes graphene different and interesting is the unusual behaviour the electrons show in this material to start with – as explained in this earlier post, electrons in graphene behave as if they were massless, ultra-relativistic particles. For relatively large quantum dots (greater than 100 nm), the behaviour is similar to that of other quantum dot devices; the device behaves like a so-called single electron transistor, in which the conductance of the device shows distinct peaks as a function of voltage, reflecting the fact that current is carried in whole numbers of electrons, a phenomenon called Coulomb blockade. It’s at sizes less than 100 nm that the behaviour becomes really interesting – on these size scales quantum confinement is becoming important, but rather than producing an ordered series of permitted energy states, as one would expect for normal electrons, they see behaviour characteristic of quantum chaos. Pushing the size down even further, the techniques being used give less control over the precise shape of the quantum dots that are made, and their behaviour becomes less predictable and less reproducible. Nonetheless, even down to sizes of a few nanometers, they see the clean switching behaviour that could make these useful electronic devices.
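To get a feel for why only the smallest dots can switch cleanly at practical temperatures, it’s worth a back-of-envelope estimate of the single-electron charging energy. The sketch below models the dot as an isolated thin conducting disc in vacuum – a crude assumption of mine, ignoring the substrate, gates and leads – so the numbers are order-of-magnitude only:

```python
# Back-of-envelope single-electron charging energy for a disc-shaped
# quantum dot, using the self-capacitance of a thin conducting disc in
# vacuum, C = 8 * eps0 * a (a = disc radius). Disc-in-vacuum model is
# a simplifying assumption, not taken from the paper.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K

def charging_energy_eV(diameter_nm):
    a = 0.5 * diameter_nm * 1e-9  # radius in metres
    C = 8.0 * eps0 * a            # thin-disc self-capacitance
    return e**2 / (2.0 * C) / e   # E_C = e^2 / 2C, expressed in eV

kT_room_eV = kB * 300.0 / e       # thermal energy at 300 K, in eV
for d_nm in (100, 30, 10):
    print(f"{d_nm:>3} nm dot: E_C ~ {1000 * charging_energy_eV(d_nm):.0f} meV; "
          f"kT(300 K) ~ {1000 * kT_room_eV:.0f} meV")
```

On this crude estimate, a 100 nm dot’s charging energy is comparable to kT at room temperature, so its Coulomb blockade peaks wash out thermally, while a 10 nm dot’s is nearly ten times larger than kT – consistent with the clean switching seen in the smallest devices.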

For more context, see this Commentary in Science (subscription required), and this BBC news story.

Graphene based quantum dots (A. Geim, Manchester University)
Left: Scanning electron micrograph of a single-electron transistor based on a graphene quantum dot. Right: Schematic of a hypothetical transistor based on a very small graphene quantum dot. A.K. Geim, University of Manchester, from Science 320 p324 (2008)

Watching an assembler at work

The only software-controlled molecular assembler we know about is the ribosome – the biological machine that reads the sequence of bases on a strand of messenger RNA, and, converting this genetic code into a sequence of amino acids, synthesises the protein molecule that corresponds to the gene whose information was transferred by the RNA. An article in this week’s Nature (abstract, subscription required for full paper, see also this editor’s summary) describes a remarkable experimental study of the way the RNA molecule is pulled through the ribosome as each step of its code is read and executed. This experimental tour-de-force of single molecule biophysics, whose first author is Jin-Der Wen, comes from the groups of Ignacio Tinoco and Carlos Bustamante at Berkeley.

The experiment starts by tethering a strand of RNA between two micron-size polystyrene beads. One bead is held firm on a micropipette, while the other bead is held in an optical trap – the point at which a highly focused laser beam has its maximum intensity. The central part of the RNA molecule is twisted into a single hairpin, and the ribosome binds to the RNA just to one side of this hairpin. As the ribosome reads the RNA molecule, it pulls the hairpin apart, and the resulting lengthening of the RNA strand is directly measured from the change in position of the anchoring bead in its optical trap. What’s seen is a series of steps – the ribosome moves about 2.7 nm in about a tenth of a second, then pauses for a couple of seconds before making another step.

This distance corresponds exactly to the size of the triplet of bases that represents a single character of the genetic code – the codon. What we are seeing, then, is the ribosome pausing on a codon to read it, before pulling the tape through to read the next character. What we don’t see in this experiment, though we know it’s happening, is the addition of a single amino acid to the growing protein chain during this read step. This takes place by means of the binding to the RNA codon, within the ribosome, of a shorter strand of RNA – the transfer RNA – to which the amino acid is attached. What the experiment does make clear is that the operation of this machine is by no means mechanical and regular. The times taken for the ribosome to move from the reading position for one codon to the next – the translocation times – are fairly tightly distributed around an average value of around 0.08 seconds, but the dwell times on each codon vary from a fraction of a second up to a few seconds. Occasionally the ribosome stops entirely for a few minutes.
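The stepping behaviour described above – a rapid 2.7 nm translocation followed by a variable pause – is easy to caricature in a stochastic simulation. The step size and ~0.08 s translocation time are taken from the experiment as described; the exponential dwell-time distribution and its 2 s mean are illustrative assumptions of mine, not a fit to the data:

```python
import random

STEP_NM = 2.7           # one codon's worth of mRNA pulled through
TRANSLOCATION_S = 0.08  # typical time for the step itself
MEAN_DWELL_S = 2.0      # assumed mean pause on each codon (illustrative)

def simulate_trace(n_codons, seed=0):
    """Return a list of (time, extension) points for a toy ribosome
    that dwells on each codon, then steps 2.7 nm in one quick move."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    trace = [(t, x)]
    for _ in range(n_codons):
        t += rng.expovariate(1.0 / MEAN_DWELL_S)  # pause while reading
        trace.append((t, x))
        t += TRANSLOCATION_S                      # rapid translocation
        x += STEP_NM
        trace.append((t, x))
    return trace

trace = simulate_trace(10)
print(f"extension {trace[-1][1]:.1f} nm after {trace[-1][0]:.1f} s")
```

Plotted, such a trace gives the staircase the experiment observed; the real dwell times are broader than a single exponential, including the occasional minutes-long stall.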

This experiment is far from the final word on the way ribosomes operate. I can imagine, for example, that people are going to be making strenuous efforts to attach a probe directly to the ribosome, rather than, as was done here, inferring its motion from the location of the end of the RNA strand. But it’s fascinating to have such a direct probe of one of the most central operations of biology. And for those attempting the very ambitious task of creating a synthetic analogue of a ribosome, these insights will be invaluable.