The biofuels bust

The news that the UK is to slow the adoption of biofuels, and that the European Parliament has called for a reduction in the EU’s targets for biofuel adoption, is a good point to mark one of the most rapid turnarounds we’ve seen in science policy. Only two years ago, biofuels were seen by many as a benign way for developed countries to increase their energy security and reduce their greenhouse gas emissions without threatening their citizens’ driving habits. Now, we’re seeing the biofuel boom being blamed for soaring food prices, and the environmental benefits are increasingly in doubt. It’s rare to see the rationale for a proposed technological fix for a major societal problem fall apart quite so quickly, and there must surely be some lessons here for other areas of science and policy.

The UK’s volte-face was prompted by a government-commissioned report led by the environmental scientist Ed Gallagher. The Gallagher Review is quite an impressive document, given the rapidity with which it was put together. The issue is in many ways typical of a kind of problem we see arising more and more often, in which difficult and uncertain science comes together with equally uncertain economics through the unpredictability of human and institutional responses in a rapidly changing environment.

The first issue is whether, looking at the whole process of growing crops for biofuels, including the energy inputs for agriculture and for the conversion process, one actually ends up with a lower output of greenhouse gases than one would using petrol or diesel. Even this most basic question is more difficult than it might seem, as illustrated by the way the report firmly but politely disagrees with a Nobel Laureate in atmospheric chemistry, Paul Crutzen, who last year argued that, if emissions of nitrogen oxides during agriculture were properly accounted for, biofuels actually produce more greenhouse gases than the fossil fuels they replace. Nonetheless, the report finds a wide range of achievable greenhouse gas savings; corn bioethanol, for example, at its best produces a saving of about 35%, but at its worst it actually produces a net increase in greenhouse gases of nearly 30%. Other types of biofuel are better; both Brazilian ethanol from sugar cane and biodiesel from palm oil can achieve savings of between 60% and 70%. But, and this is a big but, these figures assume these crops are grown on existing agricultural land. If new land needs to be taken into cultivation, there’s typically a large release of carbon. Taking into account the carbon cost of changing land use means that there’s a considerable pay-back time before any greenhouse gas savings arise at all. In the worst cases, this can amount to hundreds of years.
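
The land-use penalty lends itself to a simple back-of-the-envelope calculation: the payback time is just the one-off carbon release from converting the land, divided by the annual greenhouse gas saving the biofuel then delivers. The sketch below, in Python, uses purely illustrative numbers of my own choosing, not figures from the Gallagher Review, but it shows how easily the payback time can stretch to a century or more.

```python
# Back-of-the-envelope carbon payback time for a biofuel grown on newly
# converted land. All numbers are illustrative assumptions, not figures
# taken from the Gallagher Review.

def payback_years(land_carbon_release, displaced_fossil_emissions, fractional_saving):
    """Years before cumulative GHG savings offset the one-off land-use release.

    land_carbon_release        -- tonnes CO2e released per hectare when the land is converted
    displaced_fossil_emissions -- tonnes CO2e per hectare per year of fossil fuel displaced
    fractional_saving          -- lifecycle GHG saving of the biofuel relative to fossil fuel
    """
    annual_saving = displaced_fossil_emissions * fractional_saving
    return land_carbon_release / annual_saving

# Example: converting grassland releases ~300 t CO2e/ha, the crop displaces
# ~5 t CO2e/ha/yr of fossil emissions, and the biofuel saves 35% of that.
print(round(payback_years(300, 5, 0.35)))  # -> 171 years before any net saving
```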

This raises the linked questions – how much land is available for growing biofuels, and how much can we expect that the competition from biofuel uses of food crops will lead to further increases in food prices? There seems to be a huge amount of uncertainty surrounding these issues. Certainly the situation will be eased if new technologies arise for the production of cellulosic ethanol, but these aren’t necessarily a panacea, particularly if they involve changes in land-use. The degree to which recent food price increases can be directly attributed to the growth in biofuels is controversial, but no-one can doubt that, in a world with historically low stocks of staple foodstuffs, any increase in demand will result in higher prices than would otherwise have occurred. The price of food is already indirectly coupled to the price of oil because modern intensive agriculture demands high energy inputs, but the extensive use of biofuels makes that coupling direct.

It’s easy to be wise in hindsight, but one might wonder how much of this could have been predicted. I wrote about biofuels here two years ago, and re-reading that entry – Driving on sunshine – it seems that some of the drawbacks were easier to anticipate than others. What’s sobering about the whole episode, though, is that it does show how complicated things can get when science, politics and economics get closely coupled in situations needing urgent action in the face of major uncertainties.

Nanotechnology and the singularitarians

A belief in the power and imminence of the Drexlerian vision of radical nanotechnology is part of the belief-package of adherents of the view that an acceleration of technology, linked particularly with the development of a recursively self-improving, super-human, artificial intelligence, will shortly lead to a moment of ineffably rapid technological and societal change – the Singularity. So it’s not surprising that my article in the IEEE Spectrum special issue on the Singularity – “Rupturing the Nanotech Rapture” – has generated some reaction amongst the singularitarians. The longest response has come from Michael Anissimov, whose blog Accelerating Future offers an articulate statement of the singularitarian case. Here are my thoughts on some of the issues he raises.

One feature of his response is his dissociation from some of the stronger claims of his fellow singularitarians. For example, he responds to the suggestion that MNT will allow any material or artefact – “a Stradivarius or a steak” – to be made in abundance by claiming that no-one thinks this anymore, and that it is a “red herring” that has arisen from “journalists inaccurately summarizing the ideas of scientists”. On the contrary, this claim has been at the heart of the rhetoric surrounding MNT from the earliest writings of Drexler, who wrote in “Engines of Creation”: “Because assemblers will let us place atoms in almost any reasonable arrangement, they will let us build almost anything that the laws of nature allow to exist.” Elsewhere, Anissimov distances himself from Kurzweil, whom he includes in a group of futurists who “justifiably attract ridicule”.

This raises the question of who speaks for the singularitarians. As an author writing for a publication with a fairly large circulation, it seems obvious to me that the authors whose arguments I need to address first are those whose books command the largest readerships, because that’s where most readers will have got their ideas about the singularity. So, the first thing I did when I got this assignment was to read Kurzweil’s bestseller “The Singularity is Near”; after all, it’s Kurzweil who is able to command articles in major newspapers and is about to release a film. More specific to MNT, Drexler’s “Engines of Creation” obviously has to be a major point of reference, together with more recent books like Josh Hall’s “Nanofuture”. For the technical details of MNT, Drexler’s “Nanosystems” is the key text. It may well be that Michael and his associates have more sophisticated ideas about MNT and the singularity, but while these ideas remain confined to discussions on singularitarian blogs and email lists, they aren’t realistically going to attract the attention that people like Kurzweil do, and it’s appropriate that the publicly prominent faces of singularitarianism should attract the efforts of those arguing against the notion.

A second theme of Michael’s response is the contention that the research that will lead to MNT is happening anyway. It’s certainly true that there are many exciting developments going on in nanotechnology laboratories around the world. What’s at issue, though, is what direction these developments are taking us in. Given the singularitarians’ leanings towards technological determinism, it’s natural for them to assume that all these exciting developments are milestones on the way to a nano-assembler, and that progress towards the singularity can be measured by the weight of press releases flowing from the press offices of universities and research labs around the world. The crucial point, though, is that there’s no force driving technology towards MNT. Yes, technology is moving forward, but the road it’s taking is not the one anticipated by MNT proponents. It’s not clear to me that Michael has understood my central argument. It’s true that biology offers an existence proof for advanced nanotechnological devices of one kind or another – as Michael says, “Obviously, a huge number of biological entities, from molecule-sized to cell-sized, regularly traverse the body and perform a wide variety of essential functions, so we know such a thing is possible in principle.” But this doesn’t allow us to conclude that nanorobots built on the mechanical engineering principles of MNT will be possible, because the biological machines work on entirely different principles. The difficulties I outline for MNT, which arise from the different physics of the nanoscale, are not difficulties for biological nanotechnology, because its very different operating principles exploit that physics rather than trying to engineer around it.

What’s measured by all these press releases, then, is progress towards a whole variety of technological goals, many of them very different from the goals envisaged for MNT, and whose feasibility we simply don’t yet know. I’ve given my arguments as to why MNT actually looks less likely now than it did ten years ago, and Michael isn’t able to counter these arguments other than by saying that “Of course, all of these challenges were taken into account in the first serious study of the feasibility of nanoscale robotic systems, titled Nanosystems…. We’ll need to build nanomachines using nanomechanical principles, not naive reapplications of macroscale engineering principles.” But Nanosystems is all about applying macroscale engineering principles – right at the outset it states that “molecular manufacturing applies the principles of mechanical engineering to chemistry.” Instead of work directed towards MNT, we’re now seeing other goals being pursued – goals like quantum computing, DNA-based nanomachines, a path from plastic electronics to ultracheap computing and molecular electronics, breakthroughs in nanomedicine, and optical metamaterials. Far from being incremental updates, many of these research directions hadn’t even been conceived when Drexler wrote “Engines of Creation”, and, unlike the mechanical engineering paradigm, they all really do exploit the different and unfamiliar physics of the nanoscale. All of these are being actively researched now, but not all of them will pan out, and other, entirely unforeseen technologies will be discovered and will get people excited anew.

Ultimately, Michael’s arguments boil down to a concatenation of ever-hopeful “ifs” and “ands”. In answer to my suggestion that MNT-like processes might only be made to work at low temperatures and in ultra-high vacuum, Michael says “If the machines used to maintain high vacuum and extreme refrigeration could be manufactured for the cost of raw materials, and energy can be obtained in great abundance from nano-manufactured, durable, self-cleaning solar panels, I am skeptical that this would be as substantial of a barrier as it is to similar high-requirement processes today.” I think there’s a misunderstanding of economics here. Things can only be manufactured for the cost of their raw materials if the capital cost of the manufacturing machinery is very small. But this capital cost itself mostly reflects the amortisation of the research and development costs of developing the necessary plant and equipment. What we’ve learnt from the semiconductor industry is that, as technology progresses, these capital costs become larger and larger, and more and more dominant in the economics of the industry. It’s difficult to see what could reverse this trend without invoking a deus ex machina. Ultimately, it’s just such an invocation that arguments for the singularity seem in the end to reduce to.
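
To make the point about capital costs concrete, here is a toy calculation, with invented figures rather than data about any real fab: once the capital cost of the plant is amortised over the units it produces, the cost per unit is dominated by that amortisation rather than by the raw materials.

```python
# Toy illustration of why manufacturing "for the cost of raw materials"
# requires negligible capital costs. All figures are invented for illustration.

def cost_per_unit(materials_per_unit, capital_cost, units_over_lifetime):
    """Unit cost = raw materials plus the plant's capital cost amortised per unit."""
    amortised_capital = capital_cost / units_over_lifetime
    return materials_per_unit + amortised_capital

# A hypothetical fab costing $5 billion, amortised over 100 million chips,
# adds $50 of capital cost to every $1 of raw material.
print(cost_per_unit(materials_per_unit=1.0,
                    capital_cost=5e9,
                    units_over_lifetime=1e8))  # -> 51.0 dollars per chip
```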

Discussion meeting on soft nanotechnology

A forthcoming conference in London will be discussing the “soft” approach to nanotechnology. The meeting – Faraday Discussion 143 – Soft Nanotechnology – is organised by the UK’s Royal Society of Chemistry, and follows a rather unusual format. Selected participants in the meeting submit a full research paper, which is peer reviewed and circulated, before the meeting, to all the attendees. The meeting itself concentrates on a detailed discussion of the papers, rather than a simple presentation of the results.

The organisers describe the scope of the meeting in these terms: “Soft nanotechnology aims to build on our knowledge of biological systems, which are the ultimate example of ‘soft machines’, by:

  • Understanding, predicting and utilising the rules of self-assembly from the molecular to the micron-scale
  • Learning how to deal with the supply of energy into dynamically self-assembling systems
  • Implementing self-assembly and ‘wet chemistry’ into electronic devices, actuators, fluidics, and other ‘soft machines’.”

An impressive list of invited international speakers includes Takuzo Aida, from the University of Tokyo, Chris Dobson, from the University of Cambridge, Ben Feringa, from the University of Groningen, Olli Ikkala, from Helsinki University of Technology, Chengde Mao, from Purdue University, Stefan Matile, from the University of Geneva, and Klaus J Schulten, from the University of Illinois. The conference will be wrapped up by Harvard’s George Whitesides, and I’m hugely honoured to have been asked to give the opening talk.

The meeting is not until this time next year, in London, but if you want to present a paper you need to get an abstract in by 11 July. Faraday Discussions in the past have featured lively discussions, to say the least; it’s a format that’s tailor-made for allowing controversies to be aired and strong positions to be taken.

    Right and wrong lessons from biology

    The most compelling argument for the possibility of a radical nanotechnology, with functional devices and machines operating at the nano-level, is the existence of cell biology. But one can take different lessons from this. Drexler argued that we should expect to be able to do much better than cell biology if we applied the lessons of macroscale engineering, using mechanical engineering paradigms and hard materials. My argument, though, is that this fails to take into account the different physics of the nanoscale, and that evolution has optimised biology’s “soft machines” for this environment. This essay, first published in the journal Nature Nanotechnology (subscription required, vol 1, pp 85 – 86 (2006)), reflects on this issue.

Nanotechnology hasn’t yet acquired a strong disciplinary identity, and as a result it is claimed by many classical disciplines. “Nanotechnology is just chemistry”, one sometimes hears, while physicists like to think that only they have the tools to understand the strange and counterintuitive behaviour of matter at the nanoscale. But biologists have perhaps the most reason to be smug – in the words of MIT’s Tom Knight, “biology is the nanotechnology that works”.

    The sophisticated and intricate machinery of cell biology certainly gives us a compelling existence proof that complex machines on the nanoscale are possible. But, having accepted that biology proves that one form of nanotechnology is possible, what further lessons should be learned? There are two extreme positions, and presumably a truth that lies somewhere in between.

    The engineers’ view, if I can put it that way, is that nature shows what can be achieved with random design methods and a palette of unsuitable materials allocated by the accidents of history. If you take this point of view, it seems obvious that it should be fairly straightforward to make nanoscale machines whose performance vastly exceeds that of biology, by making rational choices of materials, rather than making do with what the accidents of evolution have provided, and by using the design principles we’ve learnt in macroscopic engineering.

The opposite view stresses that evolution is an extremely effective way of searching parameter space, and that, in consequence, we should assume that biological design solutions are likely to be close to optimal for the environment for which they’ve evolved. Where these design solutions seem odd from our point of view, their unfamiliarity is to be ascribed to the different ways in which physics works at the nanoscale. At its most extreme, this view regards biological nanotechnology not just as the existence proof for nanotechnology, but as an upper limit on its capabilities.

So what, then, are the right lessons for nanotechnology to learn from biology? The design principles that biology uses most effectively are those that exploit the special features of physics at the nanoscale in an environment of liquid water. These include some highly effective uses of self-assembly, using the hydrophobic interaction, and the principle of macromolecular shape change that underlies allostery, used both for mechanical transduction and for sensing and computing. Self-assembly, of course, is well known both in the laboratory and in industrial processes like soap-making, but synthetic examples remain very crude compared to the intricacy of protein folding. For industrial applications, biological nanotechnology offers inspiration in the area of green chemistry – promising environmentally benign processing routes to make complex, nanostructured materials based on water as a solvent and using low operating temperatures. The use of templating strategies and precursor routes widens the scope of these approaches to include final products which are insoluble in water.

    But even the most enthusiastic proponents of the biological approach to nanotechnology must concede that there are branches of nanoscale engineering that biology does not seem to exploit very fully. There are few examples of the use of coherent electron transport over distances greater than a few nanometers. Some transmembrane processes, particularly those involved in photosynthesis, do exploit electron transfer down finely engineered cascades of molecules. But until the recent discovery of electron conduction in bacterial pili, longer ranged electrical effects in biology seem to be dominated by ionic rather than electronic transport. Speculations that coherent quantum states in microtubules underlie consciousness are not mainstream, to say the least, so a physicist who insists on the central role of quantum effects in nanotechnology finds biology somewhat barren.

    It’s clear that there is more than one way to apply the lessons of biology to nanotechnology. The most direct route is that of bionanotechnology, in which the components of living systems are removed from their biological context and put to work in hybrid environments. Many examples of this approach (which NYU’s Ned Seeman has memorably called biokleptic nanotechnology) are now in the literature, using biological nanodevices such as molecular motors or photosynthetic complexes. In truth, the newly emerging field of synthetic biology, in which functionality is added back in a modular way to a stripped down host organism, is applying this philosophy at the level of systems rather than devices.

    This kind of synthetic biology is informed by what’s essentially an engineering sensibility – it is sufficient to get the system to work in a predictable and controllable way. Some physicists, though, might want to go further, taking inspiration from Richard Feynman’s slogan “What I cannot create I do not understand”. Will it be possible to have a biomimetic nanotechnology, in which the design philosophy of cell biology is applied to the creation of entirely synthetic components? Such an approach will be formidably difficult, requiring substantial advances both in the synthetic chemistry needed to create macromolecules with precisely specified architectures, and in the theory that will allow one to design molecular architectures that will yield the structure and function one needs. But it may have advantages, particularly in broadening the range of environmental conditions in which nanosystems can operate.

    The right lessons for nanotechnology to learn from biology might not always be the obvious ones, but there’s no doubting their importance. Can the traffic ever go the other way – will there be lessons for biology to learn from nanotechnology? It seems inevitable that the enterprise of doing engineering with nanoscale biological components must lead to a deeper understanding of molecular biophysics. I wonder, though, whether there might not be some deeper consequences. What separates the two extreme positions on the relevance of biology to nanotechnology is a difference in opinion on the issue of the degree to which our biology is optimal, and whether there could be other, fundamentally different kinds of biology, possibly optimised for a different set of environmental parameters. It may well be a vain expectation to imagine that a wholly synthetic nanotechnology could ever match the performance of cell biology, but even considering the possibility represents a valuable broadening of our horizons.

    Reactions to “Rupturing the Nanotech Rapture”

    It’s a couple of weeks since my article in the current edition of IEEE Spectrum magazine (the Singularity Special) – “Rupturing the Nanotech Rapture” – appeared, and it’s generated a certain amount of discussion on the nanotech blogs. Dexter Johnson, on Tech Talk (IEEE Spectrum’s own blog) observes that “In all it’s a deftly diplomatic piece, at once dispelling some of the myths surrounding the timeline for molecular nanotechnology contributing to the Singularity while both complementing and urging on the early pioneers of its concept.” I’m very happy with this characterisation.

    On Nanodot, the blog of the Foresight Institute, the piece prompts the question: “Which way(s) to advanced nanotechnology?” The answer is diplomatic: “Certainly the “soft machines” approach to nanotechnology holds great promise for the near term, while the diamondoid mechanosynthesis approach is only in the very early stages of computer simulation.” This certainly captures the relatively slow progress to date of diamondoid mechanosynthesis, and attracts the scorn of nanobusinessman Tim Harper, who writes on TNTlog “Perhaps more roadkill than tortoise to nanoscience’s hare is diamondoid mechanosynthesis, beloved of the Drexlerians, which doesn’t seem to have made any progress whatsoever, and increasingly resembles a cross between a south sea cargo cult and Waiting for Godot.”

    Over on the Center for Responsible Nanotechnology, the Nanodot piece prompts the question “Which way from here?” (though CRN doesn’t actually mention me or the Spectrum piece directly). The question isn’t answered – “CRN also remains agnostic about whether a top-down or bottom-up angle or a soft/wet or hard/dry approach will be more successful.” This doesn’t seem entirely consistent with their previous published positions but there we are.

    The longest response comes from Michael Anissimov’s blog, Accelerating Future. This runs to several pages, and deserves a considered response, which is coming soon.

    Synthetic biology – summing up the debate so far

    The UK’s research council for biological sciences, the BBSRC, has published a nice overview of the potential ethical and social dimensions to the development of synthetic biology. The report – Synthetic biology: social and ethical challenges (737 KB PDF) – is by Andrew Balmer & Paul Martin at the University of Nottingham’s Institute for Science and Society.

The different and contested definitions and visions that people have for synthetic biology are identified at the outset; the authors distinguish between four rather different conceptions of synthetic biology. There’s the Venter approach, which consists of taking a stripped-down organism with a minimal genome and building desired functions into that. The identification of modular components and the genetic engineering of whole pathways forms a second, related approach. Both of these visions of synthetic biology still rely on the re-engineering of existing DNA-based life; a more ambitious, but much less completely realised, programme for synthetic biology attempts to make wholly artificial cells from non-biological molecules. A fourth strand, which seems less far-reaching in its ambitions, attempts to make novel biomolecules by mimicking the post-transcriptional modification of proteins that is such a source of variety in biology.

What broader issues are likely to arise from this enterprise? The report identifies five areas to worry about. There are the potential problems and dangers of the uncontrolled release of synthetic organisms into the biosphere; the worry that these techniques could be misused to create new pathogens for bioterrorism; the potential for the creation of monopolies through an unduly restrictive patenting regime; and implications for trade and global justice. Most far-reaching of all, of course, are the philosophical and cultural implications of creating artificial life, with its connotations of transgressing the “natural order”, and the problems of defining the meaning and significance of life itself.

The recommended prescriptions fall into a well-rehearsed pattern – the need for early consideration of governance and regulation, the desirability of carrying the public along through early public engagement, and resistance to the temptation to overhype the potential applications of the technology. As ever, dialogue between scientists and civil society groups, ethicists and social scientists is recommended, a dialogue which, the authors think, will only be credible if there is a real possibility that some lines of research would be abandoned if they were considered too ethically problematic.

    On expertise

    Whose advice should we trust when we need to make judgements about difficult political questions with a technical component? Science sociologists Harry Collins and Robert Evans, of Cardiff University, believe that this question of expertise is the most important issue facing science studies at the moment. I review their book on the subject, Rethinking Expertise, in an article – Spot the physicist – in this month’s Physics World.

    Coming soon (or not)

Are we imminently facing The Singularity? This is the hypothesised moment of technological transcendence, a concept introduced by the mathematician and science fiction writer Vernor Vinge, when accelerating technological change leads to a recursively self-improving artificial intelligence of superhuman capabilities, with literally unknowable and ineffable consequences. The most vocal proponent of this eschatology is Ray Kurzweil, whose promotion of the idea takes to the big screen this year with the forthcoming release of the film The Singularity is Near.

    Kurzweil describes the run-up to The Singularity: “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it … Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity … and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.” Where will we go from here? To The Singularity – “We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it”. This will take place, according to Kurzweil, in the year 2045.

The film is to be a fast-paced documentary, but to leaven the interviews with singularitarian thinkers like Aubrey de Grey, Eric Drexler and Eliezer Yudkowsky, there’ll be a story-line to place the technology in a social context. This follows the touching struggle of Kurzweil’s female, electronic alter ego, Ramona, to achieve full personhood, foiling an attack of self-replicating nanobots on the way, before finally being coached to pass a Turing test by self-help guru Tony Robbins.

For those who might prefer a less dramatic discussion of The Singularity, IEEE Spectrum is running a special report on the subject in its June edition. According to a press release (via Nanowerk), “the editors invited articles from half a dozen people who have worked on and written about subjects central to the singularity idea in all its loopy glory. They encompass not just hardware and wetware but also economics, consciousness, robotics, nanotechnology, and philosophy.” One of those writers is me; my article, on nanotechnology, ended up with the title “Rupturing the Nanotech Rapture”.

    We’ll need to wait a week or two to read the articles – I’m particularly looking forward to reading Christof Koch’s article on machine consciousness, and Alfred Nordmann’s argument against “technological fabulism” and the “baseless extrapolations” it rests on. Of course, even before reading the articles, transhumanist writer Michael Anissimov fears the worst.

    Update. The IEEE Spectrum Singularity Special is now online, including my article, Rupturing the Nanotech Rapture. (Thanks to Steven for pointing this out in the comments).

    Asbestos-like toxicity of some carbon nanotubes

    It has become commonplace amongst critics of nanotechnology to compare carbon nanotubes to asbestos, on the basis that they are both biopersistent, inorganic fibres with a high aspect ratio. Asbestos is linked to a number of diseases, most notably the incurable cancer mesothelioma, of which there are currently 2000 new cases a year in the UK. A paper published in Nature Nanotechnology today, from Ken Donaldson’s group at the University of Edinburgh, provides the best evidence to date that some carbon nanotubes – specifically, multi-wall nanotubes longer than 20 µm or so – do lead to the same pathogenic effects in the mesothelium as asbestos fibres.

    The basis of toxicity of asbestos and other fibrous materials is now reasonably well understood; their toxicity is based on the physical nature of the materials, rather than their chemical composition. In particular, fibres are expected to be toxic if they are long – longer than about 20 µm – and rigid. The mechanism of this pathogenicity is believed to be related to frustrated phagocytosis. Phagocytes are the cells whose job it is to engulf and destroy intruders – when they detect a foreign body like a fibre, they attempt to engulf it, but are unable to complete this process if the fibre is too long and rigid. Instead they release a burst of toxic products, which have no effect on the fibre but instead cause damage to the surrounding tissues. There is every reason to expect this mechanism to be active for nanotubes which are sufficiently long and rigid.
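
The “fibre paradigm” sketched above amounts to a simple decision rule, which the snippet below encodes purely as a summary of the argument; the 20 µm threshold is approximate and the rule is, of course, a simplification of the real toxicology.

```python
# Schematic encoding of the "fibre paradigm" described above. The 20 micron
# threshold is approximate and the rule is a deliberate simplification,
# included only to summarise the argument, not as a toxicological model.

LENGTH_THRESHOLD_UM = 20  # approximate length above which phagocytosis is frustrated

def asbestos_like_risk(length_um, is_rigid, is_biopersistent):
    """True if a fibre has the characteristics linked to frustrated phagocytosis."""
    return length_um > LENGTH_THRESHOLD_UM and is_rigid and is_biopersistent

# Long, rigid multiwall nanotubes fall on the wrong side of the rule;
# short or tangled nanotubes, and particulate carbon black, do not.
print(asbestos_like_risk(length_um=25, is_rigid=True, is_biopersistent=True))   # True
print(asbestos_like_risk(length_um=5, is_rigid=False, is_biopersistent=True))   # False
```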

Donaldson’s group tested the hypothesis that long carbon nanotubes would have a similar effect to asbestos by injecting nanotubes into the peritoneal cavity of mice, exposing the mesothelium directly to the nanotubes and then directly monitoring the response. This is a proven assay for the initial toxic effects of asbestos that subsequently lead to the cancer mesothelioma.

    Four multiwall nanotube samples were studied. Two of these samples had long fibres – one was a commercial sample, from Mitsui, and another was produced in a UK academic laboratory. The other two samples had short, tangled fibres, and were commercial materials from NanoLab Inc, USA. The nanotubes were compared to two controls of long and short fibre amosite asbestos and one of non-fibrous, nanoparticulate carbon black. The two nanotube samples containing a significant fraction of long (>20 µm) nanotubes, together with the long-fibre amosite asbestos, produced a characteristic pathological response of inflammation, the production of foreign body giant cells, and the development of granulomas, a characteristic lesion. The nanotubes with short fibres, like the short fibre asbestos sample and the carbon black, produced little or no pathogenic effect. A number of other controls provide good evidence that it is indeed the physical form of the nanotubes rather than any contaminants that leads to the pathogenic effect.

The key finding, then, is that not all carbon nanotubes are equal when it comes to their toxicity. Long nanotubes produce an asbestos-like response, while short nanotubes and particulate, graphene-like materials don’t. The experiments don’t directly demonstrate the development of the cancer mesothelioma, but it would be reasonable to suppose that this would be the eventual consequence of the pathogenic changes observed.

    The experiments do seem to rule out a role for other possible contributing factors (presence of metallic catalyst residues), but they do not address whether other mechanisms of toxicity might be important for short nanotubes.

Most importantly, the experiments do not say anything about issues of dose and exposure. In the experiments, the nanotubes were injected directly into the peritoneal cavity; to establish whether environmental or workplace exposure to nanotubes presents a danger, one needs to know how likely it is that realistic exposures to inhaled nanotubes would lead to enough nanotubes crossing from the lungs to the mesothelium to cause toxic effects. This is the most urgent question now waiting for further research.

    It isn’t clear what proportion of the carbon nanotubes now being produced on industrial, or at least pilot plant, scale, would have the characteristics – particularly in their length – that would lead to the risk of these toxic effects. However, those nanotubes that are already in the market-place are mostly in the form of advanced composites, in which the nanotubes are tightly bound in a resin matrix, so it seems unlikely that these will pose an immediate danger. We need, with some urgency, research into what might happen to the nanotubes in such products over their whole lifecycle, including after disposal.

    Lichfield lecture

    Tomorrow I’m giving a public lecture in the Garrick Theatre, Lichfield, under the auspices of the Lichfield Science and Engineering Society. Non-members are welcome.

Lichfield is a small city in the English Midlands; it’s of ancient foundation, but in recent times has been eclipsed by the neighbouring industrial centres of Birmingham and the Black Country. Nonetheless, it should occupy at least an interesting footnote in the history of science and technology. It was the home of Erasmus Darwin, who deserves to be known for more than simply being the grandfather of Charles Darwin. Erasmus Darwin (1731 – 1802) was a doctor and polymath; his own original contributions to science were relatively slight, though his views on evolution prefigured in some ways those of his grandson. But he was at the centre of a remarkable circle of scientists, technologists and industrialists, the Lunar Society, who between them laid many of the foundations of modernity. Their members included the chemist Joseph Priestley, discoverer of oxygen; Josiah Wedgwood, whose ceramics factory developed many technical innovations; and Matthew Boulton and James Watt, who between them take much of the credit for the widespread industrial use of efficient steam power. In religion they were non-conformist – Priestley was a devout Unitarian who combined thoroughgoing materialism with a conviction that the millennium was not far away, while Erasmus Darwin verged close to atheism. Their politics was radical – dangerously so, at a time when the example of the American and French revolutions led to a climate of fear and repression in England.

The painting reproduced below depicts another travelling science lecturer demonstrating the new technology of the air pump to a society audience in the English Midlands. The painter, Joseph Wright, from Derby, was a friend of Erasmus Darwin, and the full moon visible through the window is probably a reference to the Lunar Society, many of whose members Wright was well acquainted with. Aside from its technical brilliance, the painting captures both the conviction of some in those enlightenment times that public experimental demonstrations would provide a basis of agreed truth at a time of political and religious turbulence, and, perhaps, a suggestion that this knowledge was after all not without moral ambiguity.

An Experiment on a Bird in the Air Pump, by Joseph Wright, 1768. The original is in the National Gallery.