Nanoethics conference at Avignon

I’m en route to the South of France, on my way to Avignon, where, under the auspices of a collaboration between the University of Paris and Stanford University, there’s a conference on the “Ethical and Societal Implications of the Nano-Bio-Info-Cogno Convergence”. The aim of the conference is to “explore issues emerging in the application of nanotechnology, biotechnology, information technology, and cognitive science to the spheres of social, economic, and private life, as well as a contribution of ethical concerns to shaping the technological development.” One of the issues that has clearly captured the imagination of a number of the contributors from a more philosophical point of view is the idea of self-assembly, and particularly the implications this has for the degree of control, or otherwise, that we, as technologists, will have over our productions. The notion of a “soft machine” appeals to some observers’ sense of paradox, and opens up a discussion of the connections between the Cartesian idea of a machine, our changing notions of how biological organisms work, and competing ideas of how best to do engineering on the nanoscale. There’s a session devoted to self-assembly, introduced by the philosopher Bernadette Bensaude-Vincent; among the people responding will be me and the Harvard chemist George Whitesides.

The commenters on the last item will be pleased to hear that, rather than flying to Avignon, I’m travelling in comfort on France’s splendidly fast (and, ultimately, nuclear powered) trains.

Driving on sunshine

Can the fossil fuels we use in internal combustion engines be practicably replaced by fuels derived from plant materials – biofuels? This question has, in these times of high oil prices and climate-change worries, risen quickly up the agenda. Plants use the sun’s energy to convert carbon dioxide into chemically stored energy in the form of sugar, starch, vegetable oil or cellulose, so if one can economically convert these molecules into convenient fuels like ethanol, one has a route for the sustainable production of fuels for transportation. The sense of excitement and timeliness has even reached academia; my friends at Cambridge University and Imperial College are, as I write, frantically finalising their rival pitches to the oil giant BP, which is planning to spend $500 million on biofuels research over the next 10 years. Today’s issue of Nature has some helpful features (here; this claims to be free access, but it doesn’t work for me without a subscription) overviewing the pros and cons.

The advantages of biofuels are obvious. They exploit the energy of the sun, the only renewable and carbon-neutral energy source available, in principle, in sufficient quantities to power our energy-intensive way of life on a worldwide basis. Unlike alternative methods of harnessing the sun’s energy, such as using photovoltaics to generate electricity or to make hydrogen, biofuels are completely compatible with our current transportation infrastructure. Cars and trucks will run on them with little modification, and existing networks of tankers, storage facilities and petrol stations can be used unaltered. It’s easy to see their attractions to those oil companies which, like BP and Shell, have seen that they are going to have to change their ways if they are going to stay in business.

Up to now, I’ve been somewhat sceptical. Plants are, by the standards of photovoltaic cells, very inefficient at converting sunlight into energy; they require inputs of water and fertilizer, and need to be converted into usable biofuels by energy intensive processes. The world has plenty of land, but the fraction of it available for agriculture is not large, and while this is probably sufficient to provide enough food for the world’s population the margin is not very comfortable, and is likely to get less so as climate change intensifies. One of the highest profile examples of large scale biofuel production is provided by the US program to make ethanol from corn, which is only kept afloat by huge subsidies and high protective tariff barriers. In energetic terms, it isn’t even completely clear that the corn-alcohol process produces more energy than it consumes (even advocates of the program claim only that it produces a two-fold return on energy input).

The Nature article does make clear, though, that there is a much more positive example of a biofuel programme: ethanol produced from Brazilian sugar-cane. Estimates are that it produces an eightfold return on the energy input, and it’s clear that this product, at around 27 cents a litre, is economic at current oil prices. The environmental costs of farming the stuff seem, if not negligible, less extreme than, for example, the destruction of rain-forest for palm-oil plantations to produce biodiesel. The problem, as always, is scaling up – finding enough suitable land to make a dent in the world’s huge thirst for transport fuels. Brazil is a big country, but even optimists only predict a doubling of output in the near future, which would still leave it accounting for less than one percent of the world’s demand for petrol.

Can there be a technical fix for these problems? This, of course, is the hope behind BP’s investment in research. One key advance would be to find more economical ways of breaking down the tough molecules that make up the woody matter of many plants – cellulose and lignin – into their component sugars, and then into alcohol. This brings the prospect of being able to use not only agricultural waste like corn husks and wheat straw, but also new crops like switch-grass and willow. There seems to be a choice of two methods here: using the same technology that Germany developed in the 1930s and 40s to convert coal into oil, using high temperatures and special catalysts, or developing new enzymes based on those used by the fungi that live on tree stumps. The former is expensive and as yet unproven on large scales.

What has all this got to do with nanotechnology? It is very easy to get excited by the prospect of a nano-enabled hydrogen economy powered by cheap, large-area unconventional photovoltaics. But we mustn’t forget that our techno-systems have a huge amount of inertia built into them. According to Vaclav Smil, there are more internal combustion engines than people in the USA, so potential solutions to our energy problems which promise less disruption to existing ways of doing things will be more attractive to many people than more technologically sophisticated but disruptive rival approaches.

Against nanoethics

I spent a day the week before last in the decaying splendour of a small castle outside Edinburgh, in the first meeting of a working group considering the ethics of human enhancement. This is part of a European project on the ethics of nanotechnology and related technologies – Nanobioraise. It was a particular pleasure to meet Alfred Nordmann, of the Technical University of Darmstadt – a philosopher and historian of science who has written some thought provoking things about nanotechnology and the debates surrounding it.

Nordmann’s somewhat surprising opening gambit was to say that he wasn’t really in favour of studying the ethics of human enhancement at all. To be more precise, he was very suspicious of efforts to spend a lot of time thinking about the ethics of putative long-term developments in science and technology, such as the transcendence of human limitations by human enhancement technologies, or an age of global abundance brought about by molecular nanotechnology. Among the reasons for his suspicion is a simple consideration of the opportunity cost of worrying about something that may never happen – “ethical concern is a scarce resource and must not be squandered on incredible futures, especially where on-going developments demand our attention.” But Nordmann also identifies some more fundamental problems with this way of thinking.

He identifies the central rhetorical trick of speculative ethics as an elision between “if” and “then”: we start out by identifying some futuristic possibility (“if MNT is possible”), then draw an ethical consequence from it (“then we need to prepare for an age of global abundance, and adjust our economies accordingly”), which we take as a mandate for action now, foreshortening the conditional. In this way, the demand for early ethical consideration lends credence to possible futures whose likelihood hasn’t yet been rigorously tested. This gives a false impression of inevitability, which shuts off the possibility that we can steer or choose the path that technology takes, and it distracts us from more pressing issues. It’s also notable that some of those most prone to this form of argument are those with a strong intellectual or emotional stake in the outcome in question.

His argument is partly developed in an unpublished article, “Ignorance at the Heart of Science? Incredible Narratives on Brain-Machine Interfaces”, which is well worth reading. It closes with a set of recommendations, referring back to an earlier EU report coordinated by Nordmann, Converging Technologies – Shaping the Future of European Societies, which recommends that:

  • “science policy attends also to the limits of technical feasibility, suggesting for example that one should scientifically scrutinize the all too naive assumptions, if not (citing Dan Sarewitz) “conceptual cluelessness” about thought and cognition that underwrites the US report on NBIC convergence.
  • Along the same lines, a committee of historians and statisticians should produce a critical assessment of Ray Kurzweil’s thesis about exponential growth.
  • Also, as Jürgen Altmann has urged, we need an Academy report about the Drexlerian vision of nanotechnology – is molecular manufacturing a real possibility or not?
  • Finally and most generally, we need scientists and engineers who have the courage to publicly distinguish between what is physically possible and what is technically feasible.
  • As a citizen, I am overtaxed if I am to believe and even to prepare for the fact that humans will soon engineer everything that does not contradict outright a few laws of nature.”

    In short, Nordmann believes that nanoethics needs to be done more ethically.

    It’s all about metamaterials

    A couple of journalists have recently asked me some questions about the EPSRC Ideas Factory on software control of matter that I am directing in January. The obvious question is whether software control of matter – defined as “a device or scheme that can arrange atoms or molecules according to an arbitrary, user-defined blueprint” – will be possible. I don’t know the answer – in some very limited sense (for example, the self-assembly of nanostructures based on DNA molecules with specified sequences) it is possible now, but whether these very tentative steps can be fully generalised is not yet clear (and if it were clear, there would be no point in having the Ideas Factory). More interesting, perhaps, is the question of what one would do with such a technology if one had it. Would it lead, for example, to the full MNT vision of Drexler, with personal nanofactories based on the principles of mechanical engineering executed with truly atomic precision?

    I don’t think so. I’ve written before of the difficulties that this project would face, and I don’t want to repeat that argument here. Instead, I want to argue that this mechanically focused vision of nanotechnology actually misses the biggest opportunity that this level of control over matter would offer – the possibility of precisely controlling the interactions between electrons and light within matter. The key idea here is that of the “metamaterial”, but the potential goes much further than simply designing materials: instead, the prize is the complete erosion of the distinction we have now between a “material” and a “device”.

    A “metamaterial” is the name given to a nanoscale arrangement of atoms that gives rise to new electronic, magnetic or optical properties that would not be obtainable in a single, homogeneous material. It’s been known for some time, for example, that structures of alternating layers of different semiconductors can behave, as far as an electron is concerned, as a new material with entirely new semiconducting properties. The confinement of electrons in “quantum dots” – nanoscale particles of semiconductors – profoundly changes the quantum states allowed to an electron, and clever combinations of quantum dots and layered structures yield novel lasers now, and the promise of quantum information processing devices in the future. For light, the natural gemstone opal – formed by the self-assembly of spherical particles in ordered arrays – offers a prototype for metamaterials that interact with light in interesting and useful ways. This field has recently been energised by the theoretical work of John Pendry, at Imperial College, who has shown that, in principle, arrays of patterned dielectrics and conductors can behave as materials with a negative refractive index.
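To see why a negative refractive index is so counter-intuitive, it helps to write down Snell’s law; this is standard textbook optics, not anything specific to Pendry’s papers:

```latex
% Snell's law at the interface between media 1 and 2:
n_1 \sin\theta_1 = n_2 \sin\theta_2
% If medium 2 has a negative index -- say n_1 = 1 and n_2 = -1 -- then
\sin\theta_2 = -\sin\theta_1 \quad\Longrightarrow\quad \theta_2 = -\theta_1
% i.e. the refracted ray emerges on the SAME side of the surface normal
% as the incident ray. This reversed geometry is what lets a flat slab
% of negative-index material act as a lens.
```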

    This notion of optical metamaterials has achieved media notoriety as a route to making “invisibility cloaks” (see this review in Science for a more sober assessment). But the importance of these materials is much more general than that – in principle, if one can arrange the components of the metamaterial with nanoscale precision, to some pattern that one calculates, one can guide light to go pretty much anywhere. If you combine this with the ability from semiconductor nanotechnology to manipulate electronic states, and from magnetic nanotechnology to manipulate electron spin, you have the potential for an integrated information technology of huge power. This will probably use not just the charge of the electron, as is done now, but its spin (spintronics) and/or its quantum state (quantum computing). There are, of course, some big ifs here, and I’m far from confident that the required degree of generality, precision and control is possible. But I am sure that if something like a “matter compiler” is possible, its products will be used for manipulating photons and electrons rather than for carrying out fundamentally mechanical operations.

    Five challenges for nano-safety

    This week’s Nature has a Commentary piece (editor’s summary here, subscription required for full article) from the great and good of nanoparticle toxicology, outlining what they believe needs to be done, in terms of research, to ensure that nanotechnology is developed safely. As they say, “fears over the possible dangers of some nanotechnologies may be exaggerated, but they are not necessarily unfounded,” and without targeted and strategic risk research public confidence could be lost and innovation held up through fear of litigation.

    Their list of challenges is intended to form a framework for research over the next fifteen years; the wishlist is as follows:

  • Develop instruments to assess exposure to engineered nanomaterials in air and water, within the next 3–10 years.
  • Develop and validate methods to evaluate the toxicity of engineered nanomaterials, within the next 5–15 years.
  • Develop models for predicting the potential impact of engineered nanomaterials on the environment and human health, within the next 10 years.
  • Develop robust systems for evaluating the health and environmental impact of engineered nanomaterials over their entire life, within the next 5 years.
  • Develop strategic programmes that enable relevant risk-focused research, within the next 12 months.

    Some might think it slightly odd that what amounts to a research proposal is being published in Nature. The authors give a positive reason for pressing this programme now: “Nanotechnology comes at an opportune time in the history of risk research. We have cautionary examples from genetically modified organisms and asbestos industries that motivate a real interest, from all stakeholders, to prevent, manage and reduce risk proactively.” Some indication of the potential downside of failing to be seen to move on this comes from the recent results of a citizens’ jury on nanotechnology in Germany, reported today here (my thanks to Niels Boeing for bringing this to my attention). These findings seem notably more sceptical than those of similar processes in the UK.

    Biological computing on the radio

    I’m doing a live interview for the BBC Radio 4 science program The Material World in a couple of hours, at 4.30 pm UK time. The subject of the segment is biocomputing, and the other guest is the computer scientist and author Martyn Amos, whose blog you can read here, who has just published a nice book on the subject, Genesis Machines. You can listen to the broadcast over the internet, either live or up to a week from now, here.

    I’m also doing a Café Scientifique in Mumbai and Kolkata tomorrow, by video link, sponsored by the British Council.

    On nanotechnology and biology

    The second issue of Nature Nanotechnology is now available on-line (see here for my comments on the first issue). I think this issue is also free to view, but from next month a subscription will be required.

    Among the articles is an overview of nanoelectronics, based on a report from a recent conference, and a nice letter from a Belgian group describing the placement and reaction of individual macromolecules at surfaces using an AFM. The regular opinion column this month is contributed by me, and concerns one of my favourite themes: Is it possible to use modern science and engineering techniques to improve on nature, or has evolution already found the best solutions?

    Silicon and steel

    Two of the most important materials underpinning our industrial society are silicon and steel. Without silicon, the material from which microprocessors and memory chips are made, there would be no cheap computers, and telecommunications would be hugely less powerful and more expensive. Steel is at the heart of most building and civil engineering, making possible both cars and trucks and the roads they run on. So I was struck, while reading Vaclav Smil’s latest book, Transforming the Twentieth Century (about which I may write more later) by some contrasting statistics for the two materials.

    In the year 2000, around 846 million tonnes of steel was produced in the world, dwarfing the 20,000 tonne production of pure silicon. In terms of value, the comparison is a little closer – at around $600 a tonne, the annual production of steel was worth $500 billion, compared to the $1 billion value of silicon. Smil quotes a couple of other statistical nuggets, which may have some valuable lessons for us when we’re considering the possible economic impacts of nanotechnology.
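These comparisons are easy to check; a minimal sketch using the figures quoted above (the per-kilogram price of silicon is my back-of-envelope inference from them, not a number stated directly):

```python
# World production figures for the year 2000, as quoted from Smil.
steel_tonnes = 846e6      # tonnes of steel produced
steel_price = 600         # dollars per tonne of steel
silicon_tonnes = 20_000   # tonnes of pure silicon produced
silicon_value = 1e9       # total value of that silicon, in dollars

steel_value = steel_tonnes * steel_price
print(f"steel: ${steel_value / 1e9:.0f} billion")  # ≈ $508 billion, i.e. ~$500B

# Implied price of pure silicon, per kilogram:
silicon_per_kg = silicon_value / (silicon_tonnes * 1000)
print(f"silicon: ${silicon_per_kg:.0f} per kg")    # $50 per kg
```

Reassuringly, the implied $50 a kilo chimes with the polycrystalline silicon price in the value chain discussed below.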

    Steel, of course, has been around a long time as a material, but it’s easy to overlook how significant technological progress in steel-making has been. In 1920, it took the equivalent of 3 hours of labour to make 1 tonne of steel, but by 1999, this figure had fallen to about 11 seconds – a one thousand-fold increase in labour productivity. When people suggest that advanced nanotechnologies may cause social dislocation, by throwing workers in manufacturing and primary industries out of work, they’re fighting yesterday’s battle – this change has already happened.
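The thousand-fold figure checks out, as a trivial calculation with the numbers above shows:

```python
labour_1920 = 3 * 3600  # seconds of labour per tonne of steel in 1920 (3 hours)
labour_1999 = 11        # seconds of labour per tonne by 1999

# Ratio of the two: the gain in labour productivity over the century.
print(labour_1920 / labour_1999)  # ≈ 982, i.e. roughly a thousand-fold
```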

    As for silicon, what’s remarkable is how costly it is, given that it’s made from sand. One can trace the addition of value through the production chain. Pure quartz costs around 1.7 cents a kilogram; after reduction to metallurgical-grade silicon the value has risen to $1.10 a kilo. This is transformed into trichlorosilane, at $3 a kilo, and then, after many purification processes, one has pure polycrystalline silicon at around $50 a kilo. Single-crystal silicon is then grown from this, leading to monocrystalline silicon rod worth more than $500 a kilo, which is then cut up into wafers. One of the predictions one sometimes hears about advanced nanotechnology is that it will be particularly economically disruptive, because it will allow anything to be made from abundant and cheap elements like carbon. But this example shows the extent to which the value of products need not reflect the cost of the raw ingredients at all. In fact, in cases like this, involving complicated transformations carried out with high-tech equipment, it’s the capital cost of the plant that matters most in determining the cost of the product.
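One way to make the value chain vivid is to compute the price multiplier at each step; a minimal sketch using the per-kilogram prices quoted above:

```python
# Approximate price per kilogram at each stage of the silicon value chain.
chain = [
    ("quartz",                  0.017),
    ("metallurgical silicon",   1.10),
    ("trichlorosilane",         3.00),
    ("polycrystalline silicon", 50.0),
    ("monocrystalline rod",     500.0),
]

# Multiplier applied at each successive processing step.
for (stage_a, price_a), (stage_b, price_b) in zip(chain, chain[1:]):
    print(f"{stage_a} -> {stage_b}: x{price_b / price_a:.1f}")

print(f"sand to crystal overall: x{chain[-1][1] / chain[0][1]:,.0f}")  # ~x29,412
```

Almost none of that thirty-thousand-fold mark-up is the cost of the raw material; it is purification and crystal growth, which is exactly the point about capital-intensive plant.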

    Nature Nanotechnology

    I’ve been meaning to write for a while about the new journal from the Nature stable – Nature Nanotechnology (there’s complete free web access to this first edition). I’ve written before about the importance of scientific journals in helping relatively unformed scientific fields to crystallise, and the fact that this journal comes with the imprint of the very significant “Nature” brand means that the editorial policy of this new journal will have a big impact on the way the field unfolds over the next few years.

    Nature is, of course, one of the two rivals for the position of most important and influential science publication in the world. Its US rival is Science. While Science is published by the non-profit American Association for the Advancement of Science, Nature, for all its long history, is a ruthlessly commercial operation, run by the British publishing company Macmillan. As such, it has recently been expanding its franchise to include a number of single-subject journals, starting with biological titles like Nature Cell Biology, moving into the physical sciences with Nature Materials and Nature Physics, and now adding Nature Nanotechnology. Given that just about everybody is predicting the end of printed scientific journals in the face of web-based preprint servers and open access models, how, one might ask, do they expect to make money out of this? The answer is an interesting one: it is to emphasise some old-fashioned publishing values, like the importance of a strong editorial hand, the value of selectivity, and the role of design and variety. These journals are nice physical objects, printed on paper of good enough quality to read in the bath, and they have a thick front section, with general-interest articles and short reviews, in addition to the highly selective set of research papers at the back of the journal. What the subscriber pays for (and their marketing is heavily aimed at individual subscribers rather than research libraries) is the judgement of the editors in selecting the handful of outstanding papers in their field each month. The formula has, in the past, been successful, at least to the extent that the Nature journals have consistently climbed to the top of their subject league tables in the impact of the papers they publish.

    So how is Nature Nanotechnology going about defining its field? This is an interesting question, in that at first sight there looks to be considerable overlap with existing Nature group journals. Nature Materials, in particular, has already emerged as a leading journal in areas like nanostructured materials and polymer electronics, which are often included in wider definitions of nanotechnology. It’s perhaps too early to be making strong judgements about editorial policies yet, but the first issue seems to have a strong emphasis on truly nanoscale devices, with a review article on molecular machines, and the lead article describing a single nanotube based SQUID (superconducting quantum interference device). The front material makes a clear statement about the importance of wider societal and environmental issues, with an article from Chris Toumey about the importance of public engagement, and a commentary from Vicki Stone and Ken Donaldson about the relationship between nanoparticle toxicity and oxidative stress.

    I should declare an interest, in that I have signed up to write a regular column for Nature Nanotechnology, with my first piece to appear in the November edition. The editor is clearly conscious enough of the importance of new media to give me a contract explicitly stating that my columns shouldn’t also appear on my blog.

    The Royal Society’s verdict on the UK government’s nanotech performance

    The UK’s science and engineering academies – the Royal Society and the Royal Academy of Engineering – were widely praised for their 2004 report on nanotechnology – Nanoscience and nanotechnologies: opportunities and uncertainties, which was commissioned by the UK government. So it’s interesting to see, two years on, how they think the government is doing implementing their suggestions. The answer is given in a surprisingly forthright document, published a couple of days ago, which is their formal submission to the review of UK nanotechnology policy by the Council of Science and Technology. The press release that accompanies the submission makes their position fairly clear. Ann Dowling, the chair of the 2004 working group, is quoted as saying “The UK Government was recognised internationally as having taken the lead in encouraging the responsible development of nanotechnologies when it commissioned our 2004 report. So it is disappointing that the lack of progress on our recommendations means that this early advantage has been lost.”