A shadow biosphere?

Where are we most likely to find truly alien life? The obvious (though difficult) place to look is on another planet or moon, whether that’s under the icy crust of Europa, near the poles of Mars, or, perhaps, on one of the planets we’re starting to discover orbiting distant stars. Alternatively, we might be able to make alien life for ourselves, through the emerging discipline of bottom-up synthetic biology. But what if alien life is to be found right under our noses, right here on earth, forming a kind of shadow biosphere? This provocative and fascinating hypothesis has been suggested by philosopher Carol Cleland and biologist Shelley Copley, both from the University of Colorado, Boulder, in their article “The possibility of alternative microbial life on Earth” (PDF, International Journal of Astrobiology 4, pp. 165-173, 2005).

The obvious objection to this suggestion is that if such alien life existed, we’d have noticed it by now. But, if it did exist, how would we know? We’d be hard pressed to find it simply by looking under a microscope – alien microbial life, if its basic units were structured on the micro- or nano-scale, would be impossible to distinguish by appearance alone from the many forms of normal microbial life, or for that matter from all sorts of structures formed by inorganic processes. One of the surprises of modern biology is the huge number of new kinds of microbes that are discovered when, instead of relying on culturing microbes to identify them, one directly amplifies and sequences their nucleic acids. But suppose there exists a class of life-forms whose biochemistry fundamentally differs from the system based on nucleic acids and proteins that all “normal” life depends on – life-forms whose genetic information is coded in a fundamentally different way. There’s a strong assumption that early in the ancestry of our current form of biology, before the evolution of the current DNA-based genetic code, a simpler form of life must have existed. So if descendants of this earlier form of life still exist on the earth, or if life on earth emerged more than once and some of the alternative versions still exist, detection methods that assume that life must involve nucleic acids will not help us at all. Just as, until the development of the polymerase chain reaction as a tool for detecting unculturable microbes, we were able to detect only a tiny fraction of the microbes that surround us, it’s all too plausible that if alien life did exist around us we would not currently be able to detect it.

To find such alien life would be the scientific discovery of the century. We’d like to be able to make statements about life in general: how it is to be defined, what its general laws are, not just the laws of biology but of all possible biologies, and, perhaps, how one might design and build new types of life. But we find this difficult at the moment, as we know about only one type of life, and it’s hard to generalise from a single example. Even if it didn’t succeed, the effort of seriously looking for alien life on earth would be hugely rewarding in forcing us to broaden our notions of the various, very different, manifestations that life might take.

Déjà vu all over again?

Today the UK’s Royal Commission on Environmental Pollution released a new report on the potential risks of new nanomaterials and the implications of this for regulation and the governance of innovation. The report – Novel Materials in the Environment: The case of nanotechnology – is well-written and thoughtful, and will undoubtedly have considerable impact. Nonetheless, four years after the Royal Society report on nanotechnology, and nearly two years after the Council for Science and Technology’s critical verdict on the government’s response to that report, some of the messages are depressingly familiar. There are real uncertainties about the potential impact of nanoparticles on human health and the environment; to reduce these uncertainties some targeted research is required; this research isn’t going to appear by itself and some co-ordinated programs are needed. So what’s new this time around?

Andrew Maynard picks out some key messages. The Commission is very insistent on the need to move beyond considering nanomaterials as a single class; attempts to regulate solely on the basis of size are misguided and instead one needs to ask what the materials do and how they behave. In terms of the regulatory framework, the Commission was surprisingly (to some observers, I suspect) sanguine about the suitability and adaptability of the EU’s regulatory framework for chemicals, REACH, which, it believes, can readily be modified to meet the special challenges of nanomaterials, as long as the research needed to fill the knowledge gaps gets done.

Where the report does depart from some previous reports is in a rather subtle and wide-ranging discussion of the conceptual basis of regulation for fast-moving new technologies. It identifies three contrasting positions, none of which it finds satisfactory. The “pro-innovation” position calls for regulators to step back and let the technology develop unhindered, pausing only when positive evidence of harm emerges. “Risk-based” approaches allow for controls to be imposed, but only when clear scientific grounds for concern can be stated, and with a balance between the cost of regulating and the probability and severity of the danger. The “precautionary” approach puts the burden of proof on the promoters of new technology to show that it is, beyond any reasonable doubt, safe, before it is permitted. The long history of unanticipated consequences of new technology warns us against the first stance, while the second position assumes that the state of knowledge is sufficient to do these risk/benefit analyses with confidence, which isn’t likely to be the case for most fast-moving new technologies. But the precautionary approach falls down, too, if, as the Commission accepts, the new technologies have the potential to yield significant benefits that would be lost if they were to be rejected on the grounds of inevitably incomplete information. To resolve this dilemma, the Commission proposes an adaptive system of regulation that aims, above all, to avoid technological inflexibility. The key, in their view, is to innovate in a way that doesn’t lead society down paths from which it is difficult to reverse, if new information should arise about unanticipated threats to health or the environment.

The report has generated a substantial degree of interest in the press, and, needless to say, the coverage doesn’t generally reflect these subtle discussions. At one end, the coverage is relatively sober, for example Action urged over nanomaterials, from the BBC, and Tight regulation urged on nanotechnology, from the Financial Times. In the Daily Mail, on the other hand, we have Tiny but toxic: Nanoparticles with asbestos-like properties found in everyday goods. Notwithstanding Tim Harper’s suggestion that some will welcome this sort of coverage if it injects some urgency into the government’s response, this is not a good place for nanotechnology to be finding itself.

Nanocosmetics in the news

Uncertainties surrounding the use of nanoparticles in cosmetics made the news in the UK yesterday; this followed a press release from the consumer group Which? – Beauty must face up to nano. This is related to a forthcoming report in their magazine, in which a variety of cosmetic companies were asked about their use of nanotechnologies (I was one of the experts consulted for commentary on the results of these inquiries).

The two issues that concern Which? are some continuing uncertainties about nanoparticle safety and the fact that it hasn’t generally been made clear to consumers that nanoparticles are being used. Their head of policy, Sue Davies, emphasizes that their position isn’t blanket opposition: “We’re not saying the use of nanotechnology in cosmetics is a bad thing, far from it. Many of its applications could lead to exciting and revolutionary developments in a wide range of products, but until all the necessary safety tests are carried out, the simple fact is we just don’t know enough.” Of 67 companies approached for information about their use of nanotechnologies, only 8 replied with useful information, prompting Sue to comment: “It was concerning that so few companies came forward to be involved in our report and we are grateful for those that were responsible enough to do so. The cosmetics industry needs to stop burying its head in the sand and come clean about how it is using nanotechnology.”

On the other hand, the companies that did supply information include many of the biggest names – L’Oreal, Unilever, Nivea, Avon, Boots, Body Shop, Korres and Green People – all of whom use nanoparticulate titanium dioxide (and, in some cases, nanoparticulate zinc oxide). This makes clear just how widespread the use of these materials is (and goes some way to explaining where the estimated 130 tonnes of nanoscale titanium dioxide being consumed annually in the UK is going).

The story is surprisingly widely covered by the media (considering that yesterday was not exactly a slow news day). Many focus on the angle of lack of consumer information, including the BBC, which reports that “consumers cannot tell which products use nanomaterials as many fail to mention it”, and the Guardian, which highlights the poor response rate. The story is also covered in the Daily Telegraph, while the Daily Mail, predictably, takes a less nuanced view. Under the headline The beauty creams with nanoparticles that could poison your body, the Mail explains that “the size of the particles may allow them to permeate protective barriers in the body, such as those surrounding the brain or a developing baby in the womb.”

What are the issues here? There is, if I can put it this way, a cosmetic problem, in that there are some products on the market making claims that seem at best unwise – I’m thinking here of the claimed use of fullerenes as antioxidants in face creams. It may well be that these ingredients are present in such small quantities that there is no possibility of danger, but given the uncertainties surrounding fullerene toxicology putting products like this on the market doesn’t seem very smart, and is likely to cause reputational damage to the whole industry. There is a lot more data about nanoscale titanium dioxide, and the evidence that these particular nanoparticles aren’t able to penetrate healthy skin looks reasonably convincing. They deliver an unquestionable consumer benefit, in terms of screening out harmful UV rays, and the alternatives – organic small molecule sunscreens – are far from being above suspicion. But, as pointed out by the EU’s Scientific Committee on Consumer Products, there does remain uncertainty about the effect of titanium dioxide nanoparticles on damaged and sun-burned skin. Another issue, recently highlighted by Andrew Maynard, is the degree to which the action of light on TiO2 nanoparticles causes reactive and potentially damaging free radicals to be generated. This photocatalytic activity can be suppressed by choosing the crystalline structure (the rutile form of titanium dioxide should be used, rather than anatase), by introducing dopants, and by coating the surface of the nanoparticles. The research cited by Maynard makes it clear that not all sunscreens use grades of titanium dioxide that do completely suppress photocatalytic activity.

This poses a problem. Consumers don’t at present have ready access to information as to whether nanoscale titanium dioxide is used at all, let alone whether the nanoparticles in question are in the rutile or anatase form. Here, surely, is a case where if the companies following best practice provided more information, they might avoid their reputation being damaged by less careful operators.

Books that inspired me

I’ve just done a brief interview with a journalist for the BBC’s Focus magazine, about the three popular science books on nanotechnology that have most inspired me. I’ve already written about my nanotechnology bookshelf, but this time, when I came to choose my three favourite books to talk about, it turned out that they weren’t directly about nanotechnology at all. So here’s my alternative list of three non-nanotechnology books that I think all nanotechnologists could benefit from reading.

The New Science of Strong Materials by J.E. Gordon. To say that this is the best book ever written about materials science might not sound like especially high praise, but I was hugely inspired by this book when I read it as a teenager, and every time I re-read it I find in it another insight. It was first published in 1968, long before anyone was talking about nanotechnology, but it beautifully lays out the principles by which one might design materials from first principles, relating macroscopic properties to the ways in which their atoms and molecules are arranged, principles which even now are not always as well known as they should be to people who write about nanotechnology. It’s a forward-looking book, but it’s also full of incidental detail about the history of technology and the science that has underlain the skills of craftsmen using materials through the ages. It also looks to the natural world, discussing what makes materials of biological origin, like wood, so good.

The Self-Made Tapestry by Philip Ball. Part of the appeal of this is the beauty of the pictures, depicting the familiar natural patterns of clouds and sand-dunes, as well as the intricate nanoscale structure of self-assembled block copolymer phases and the shells of diatoms. But alongside the illustrations there is an accurate and clear account of the principles of self-assembly and self-organisation that cause these intricate patterns to emerge, not through the execution of any centralised plan, but as a result of the application of simple rules describing the interactions of the components of these systems.

Out of Control by Kevin Kelly. This is also about emergence, but it casts its net much more widely, to consider swarm behaviour in insects, economics and industrial ecologies, and flocks of insect-like robots. The common theme is the idea that one can gain power by relinquishing control, harnessing the power of adaptation and evolution in complex systems in which non-trivial behaviour arises from the collective actions of many interacting objects or agents. The style is evangelical, perhaps to the extent of overselling some of these ideas, and some may, like me, not be wholly comfortable with the libertarian outlook that underlies the extension of these ideas into political directions, but I still find it hugely provocative and exciting.

In Richmond, VA

I’m making a brief visit to Virginia to talk to high school students and others about my book, Soft Machines. It’s in connection with a visiting author program for the Chesterfield County school system, initiated by Prof Krishan Aggarwal, from Virginia State University; each year high school students in the County schools get to read a science book in class and the author comes to discuss it with them. So far I’ve talked to students in Monacan High School and L.C. Bird High School, as well as spending an afternoon with the staff of Richmond’s MathScience Innovation Centre and local science teachers, who have been developing sets of lesson materials about nanotechnology for high school students, and have clearly been thinking hard about how to convey some of the developing concepts of nanotechnology to their students. I’m just about to go back to L.C. Bird High School for a public lecture and panel discussion. I’ve been hugely impressed so far by the thought that’s gone into the questions being put to me; it’s been a pleasure to interact with such an engaged group of students. My thanks to Krishan and to Dr Jeremy Lloyd, from the Chesterfield County schools, for setting this up and looking after me.

What’s meant by “food nanotechnology”?

A couple of weeks ago I took part in a dialogue meeting in Brussels organised by the CIAA, the Confederation of the Food and Drink Industries of the EU, about nanotechnology in food. The meeting involved representatives from big food companies, from the European Commission and agencies like the European Food Safety Authority, together with consumer groups like BEUC, and the campaigning group Friends of the Earth Europe. The latter group recently released a report on food nanotechnology – Out of the laboratory and on to our plates: Nanotechnology in food and agriculture; according to the press release, this “reveals that despite concerns about the toxicity risks of nanomaterials, consumers are unknowingly ingesting them because regulators are struggling to keep pace with their rapidly expanding use.” The position of the CIAA is essentially that nanotechnology is an interesting technology currently in research rather than having yet made it into products. One can get a good idea of the research agenda of the European food industry from the European Technology Platform Food for Life. As the only academic present, I tried in my contribution to clarify a little the different things people mean by “food nanotechnology”. Here, more or less, is what I said.

What makes the subject of nanotechnology particularly confusing and contentious is the ambiguity of the definition of nanotechnology when applied to food systems. Most people’s definitions are something along the lines of “the purposeful creation of structures with length scales of 100 nm or less to achieve new effects by virtue of those length-scales”. But when one attempts to apply this definition in practice one runs into difficulties, particularly for food. It’s this ambiguity that lies behind the difference of opinion we’ve already heard today about how widespread the use of nanotechnology in foods already is. On the one hand, Friends of the Earth says they know of 104 nanofood products on the market already (and some analysts suggest the number may be more than 600). On the other hand, the CIAA (the Confederation of Food and Drink Industries of the EU) maintains that, while active research in the area is going on, no actual nanofood products are yet on the market. In fact, both parties are, in their different ways, right; the problem is the ambiguity of definition.

The issue is that food is naturally nano-structured, so that too wide a definition ends up encompassing much of modern food science, and indeed, if you stretch it further, some aspects of traditional food processing. Consider the case of “nano-ice cream”: the FoE report states that “Nestlé and Unilever are reported to be developing a nano-emulsion based ice cream with a lower fat content that retains a fatty texture and flavour”. Without knowing the details of this research, what one can be sure of is that it will involve essentially conventional food processing technology in order to control fat globule structure and size on the nanoscale. If the processing technology is conventional (and the economics of the food industry dictates that it must be), what makes this nanotechnology, if anything does, is the fact that analytical tools are available to observe the nanoscale structural changes that lead to the desirable properties. What makes this nanotechnology, then, is simply knowledge. In the light of the new knowledge that new techniques give us, we could even argue that some traditional processes, which it now turns out involve manipulation of structure on the nanoscale to achieve desirable effects, would constitute nanotechnology if it were defined this widely. For example, traditional whey cheeses like ricotta are made by creating the conditions for the whey proteins to aggregate into protein nanoparticles. These subsequently aggregate to form the particulate gels that give the cheese its desirable texture.

It should be clear, then, that there isn’t a single thing one can call “nanotechnology” – there are many different technologies, producing many different kinds of nano-materials. These different types of nanomaterials have quite different risk profiles. Consider cadmium selenide quantum dots, titanium dioxide nanoparticles, sheets of exfoliated clay, fullerenes like C60, casein micelles, phospholipid nanosomes – the risks and uncertainties of each of these examples of nanomaterials are quite different and it’s likely to be very misleading to generalise from any one of these to a wider class of nanomaterials.

To begin to make sense of the different types of nanomaterial that might be present in food, there is one very useful distinction. This is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are covalently bonded, and thus are persistent and generally rather robust; they may have important surface properties, such as catalytic activity, and they may be prone to aggregate. Examples of engineered nanoparticles include titanium dioxide nanoparticles and fullerenes.

In self-assembled nanostructures, though, molecules are held together by weak forces, such as hydrogen bonds and the hydrophobic interaction. The weakness of these forces renders them mutable and transient; examples include soap micelles, protein aggregates (for example the casein micelles formed in milk), liposomes and nanosomes and the microcapsules and nanocapsules made from biopolymers such as starch.

So what kind of food nanotechnology can we expect? Here are some potentially important areas:

• Food science at the nanoscale. This is about using a combination of fairly conventional food processing techniques supported by the use of nanoscale analytical techniques to achieve desirable properties. A major driver here will be the use of sophisticated food structuring to achieve palatable products with low fat contents.
• Encapsulating ingredients and additives. The encapsulation of flavours and aromas at the microscale to protect delicate molecules and enable their triggered or otherwise controlled release is already widespread, and it is possible that decreasing the lengthscale of these systems to the nanoscale might be advantageous in some cases. We are also likely to see a range of “nutraceutical” molecules come into more general use.
• Water dispersible preparations of fat-soluble ingredients. Many food ingredients are fat-soluble; as a way of incorporating these in food and drink without fat, manufacturers have developed stable colloidal dispersions of these materials in water, with particle sizes in the range of hundreds of nanometers. For example, the substance lycopene, which is familiar as the molecule that makes tomatoes red and which is believed to offer substantial health benefits, is marketed in this form by the German company BASF.

What is important in this discussion is clarity – definitions are important. We’ve seen discrepancies between estimates of how widespread food nanotechnology is in the marketplace now, and these discrepancies lead to unnecessary misunderstanding and distrust. Clarity about what we are talking about, and a recognition of the diversity of technologies we are talking about, can help remove this misunderstanding and give us a sound basis for the sort of dialogue we’re participating in today.

From micro to nano for medical applications

I spent yesterday at a meeting at the Institute of Mechanical Engineers, Nanotechnology in Medicine and Biotechnology, which raised the question of what is the right size for new interventions in medicine. There’s an argument that, since the basic operations of cell biology take place on the nano-scale, that’s fundamentally the right scale for intervening in biology. On the other hand, given that many current medical interventions are very macroscopic, operating on the micro-scale may already offer compelling advantages.

A talk from Glasgow University’s Jon Cooper gave some nice examples illustrating this. His title was Integrating nanosensors with lab-on-a-chip for biological sensing in health technologies, and he began with some true nanotechnology. This involved a combination of fluid handling systems for very small volumes with nanostructured surfaces, with the aim of detecting single biomolecules. This depends on a remarkable effect known as surface enhanced Raman scattering. Raman scattering is a type of spectroscopy that can detect chemical groups with what is normally rather low sensitivity. But if one illuminates a metal surface with very sharp asperities, the light field very close to the surface is hugely magnified, increasing sensitivity by a factor of ten million or so. Systems based on this effect, using silver nanoparticles coated so that pathogens like anthrax will stick to them, are already in commercial use. Cooper’s group, however, uses not free nanoparticles but very precisely structured nanosurfaces. Using electron beam lithography his group creates silver split-ring resonators – horseshoe shapes about 160 nm across. With a very small gap one can get field enhancements of a factor of one hundred billion, and it’s this that brings single molecule detection into prospect.

On a larger scale, Cooper described systems to probe the response of single cells – his example involved using a single heart cell (a cardiomyocyte) to screen responses to potential heart drugs. This involved a pico-litre scale microchamber adjacent to an array of micron-sized thermocouples, which allow one to monitor the metabolism of the cell as it responds to a drug candidate. His final example was on the millimeter scale, though its sensors incorporated nanotechnology at some level. This was a wireless device incorporating an electrochemical blood sensor – the idea was that one would swallow this to screen for early signs of bowel cancer. Here’s an example where, obviously, smaller would be better, but how small does one need to go?

Nanoparticles down the drain

With significant amounts of nanomaterials now entering markets, it’s clearly worth worrying about what’s going to happen to these materials after disposal – is there any danger of them entering the environment and causing damage to ecosystems? These are the concerns of the discipline of nano-ecotoxicology; on the evidence of the conference I was at yesterday, on the Environmental effects of nanoparticles, at Birmingham, this is an expanding field.

From the range of talks and posters, there seems to be a heavy focus (at least in Europe) on those few nanomaterials which really are entering the marketplace in quantity – titanium dioxide, of sunscreen fame, and nano-silver, with some work on fullerenes. One talk, by Andrew Johnson, of the UK’s Centre for Ecology and Hydrology at Wallingford, showed nicely what the outline of a comprehensive analysis of the environmental fate of nanoparticles might look like. His estimate is that 130 tonnes of nano-titanium dioxide a year is used in sunscreens in the UK – where does this stuff ultimately go? Down the drain and into the sewers, of course, so it’s worth worrying what happens to it then.

At the sewage plant, solids are separated from the treated water, and the first thing to ask is where the titanium dioxide nanoparticles go. The evidence seems to be that a large majority end up in the sludge. Some 57% of this treated sludge is spread on farmland as fertilizer, while 21% is incinerated and 17% goes to landfill. There’s work to be done, then, in determining what happens to the nanoparticles – do they retain their nanoparticulate identity, or do they aggregate into larger clusters? One needs then to ask whether those that survive are likely to cause damage to soil microorganisms or earthworms. Johnson presented some reassuring evidence about earthworms, but there’s clearly more work to be done here.

Making a series of heroic assumptions, Johnson made some estimates of how many nanoparticles might end up in the river. Taking a worst case scenario, with a drought and heatwave in the southeast of England (they do happen, I’m old enough to remember), he came up with an estimate of 8 micrograms/litre in the Thames, which is still more than an order of magnitude less than the level that has been shown to start to affect, for example, rainbow trout. This is reassuring, but, as one questioner pointed out, one still might worry about the nanoparticles accumulating in sediments to the detriment of filter feeders.
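It’s easy to see how an estimate of this order arises from a back-of-envelope calculation. The fraction escaping to the river and the drought flow rate below are purely illustrative assumptions of mine, not the inputs Johnson actually used, but they show the shape of the arithmetic:

```python
# Order-of-magnitude check on a worst-case river concentration of nano-TiO2.
# The escape fraction and river flow are illustrative assumptions, not data.

ANNUAL_USE_TONNES = 130      # nano-TiO2 used in UK sunscreens per year (from the talk)
FRACTION_TO_RIVER = 0.05     # assumed fraction passing through sewage treatment into effluent
DROUGHT_FLOW_M3_S = 30.0     # assumed low-flow rate of the river during a drought

SECONDS_PER_YEAR = 365 * 24 * 3600

# Mass reaching the river, converted from tonnes to micrograms (1 t = 1e12 ug)
mass_ug_per_year = ANNUAL_USE_TONNES * FRACTION_TO_RIVER * 1e12

# Volume of river water passing per year, in litres (1 m3 = 1000 L)
water_litres_per_year = DROUGHT_FLOW_M3_S * 1000 * SECONDS_PER_YEAR

concentration_ug_per_litre = mass_ug_per_year / water_litres_per_year
print(f"{concentration_ug_per_litre:.1f} micrograms/litre")  # about 6.9
```

With these assumed inputs the answer comes out at around 7 micrograms/litre, the same order as the 8 micrograms/litre figure quoted in the talk.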

Responsible nanotechnology – from discourse to practice

Like many academics, I’ve come back from my summer holiday only to leave immediately for a flurry of conferences. This year has been particularly busy. Last week saw me give a talk at a conference on phase separation in Cambridge; this week I’ve been in and out of a conference at Sheffield on thin polymer films; and next week I’m giving talks successively at one conference honouring Dame Julia Higgins and another on the environmental effects of nanoparticles. Yesterday, though, I found myself not amongst scientists, but in the Manchester Business School for a conference on Nanotechnology, Society and Policy.

There were some interesting and provocative talks looking at the empirical evidence for the development, or otherwise, of regional clusters with particular strengths in nanotechnology; under discussion was the issue of whether new industries based on nanotechnologies would inevitably be attracted to existing technological clusters like Silicon Valley and the Boston area, or whether the diverse nature of the technologies grouped under this banner would diffuse this clustering effect.

In the governance section, the University of Twente’s Arie Rip, one of the doyens of European science studies, spoke on the title “Discourse and practice of responsible nanotechnology development”. I must admit that I’d had a preconception that this would be a talk critical of the way so many people had adopted the rhetoric of “responsible development” simply as a way of promoting the subject and deflecting criticism. However, Rip’s message was actually rather more optimistic than this. His view was that, however much such talk begins as rhetoric, it does translate into real practice, and the interactions we’re seeing between technology and society, in the form of public dialogue, discussions between companies and campaigning groups, and the development of codes of practice really are creating “soft structures” and “soft law” that are beginning to have a real, and beneficial, effect on the way these technologies are being introduced.

Our faith in technology

The following essay is the pre-edited version of a piece of mine that will be published in a forthcoming book “Human Futures: Art in an Age of Uncertainty”, edited by Andy Miah and published by FACT (Foundation for Art and Creative Technology) & Liverpool University Press.

The days when our society was bound together by a single shared faith seem long gone. But at some level, most of us share a faith in technology, a faith that next year we’ll be able to buy a faster computer, a digital camera with more megapixels, or an MP3 player that holds more songs, and it will cost us less. For some, this is part of a broader faith in the power of science and technology both to deliver a better life and to give a coherent way of thinking about the world. Others might have a more nuanced view, seeing the results of techno-science as very much a mixed blessing, and accepting the gadgets, while rejecting the scientific worldview. For better or worse, though, we’re in the state we’re in now because of technology, and indeed we existentially depend on it. But it’s equally clear that the technology we have can’t be sustained. Whatever happens, this tension must be resolved; whether we believe in progress or not, things can’t go on as they are.

There’s a new set of emerging technologies to bring these arguments into focus. Nanotechnology manipulates matter at the level of atoms and molecules, and promises a new level of control over the material world[i]. Biology has already moved on from being an essentially descriptive and explanatory activity, and it’s now taking on the character of a project to intervene in and reshape the living world. Up to now, the achievements of biotechnology have come from fairly modest modifications to biological systems, but a new discipline of synthetic biology is currently emerging, with the much more ambitious goal of a wholesale reengineering of living systems for human purposes, and possibly of creating entirely novel living systems. In large organisms like humans, we’re starting to appreciate the complexities of communications within and between the cells that together make up the organism; it’s this understanding of the rich social lives of cells that will make possible the development of stem cell therapies and tissue engineering. Information technology both enables and is enabled by these advances; it’s computing power that underlay the decoding of the human genome and that drives the development of sciences like bioinformatics, which are giving us the tools to understand the informational basis of life. The other side of the coin is that it is developments in nanotechnology that drive the relentless increase in computing power that is obvious to every consumer; in the near future similar advances will contribute to the growing importance of the computer as an invisible component of the fabric of life – ubiquitous computing. Perhaps most significant of all for our conceptions of what it means to be human, cognitive science expands our understanding of how the brain works as an organ of information processing, prompting dreams both of a reductionist understanding of consciousness and of the possibility of augmenting the functionality of the brain.

What will all these bewildering developments mean for the way the human experience evolves over the coming decades? Let’s get some perspective by reminding ourselves of technology’s role in getting us to where we are now.

No-one can doubt that our lives now are hugely different to the lives of our forebears two hundred years ago, and that this dramatic transformation has come about largely through new technologies. The world of material things – food, buildings, clothes, tools – has been transformed by new materials and processes, with mass production bringing complex artefacts within reach of everyone. Information and communications have been transformed; first telephones removed the need for physical presence for two-way communication, then computers and the internet came together to give unprecedented ways of storing, accessing and processing a vast universe of information. Now all these technologies have converged and become ubiquitous through mobile telephony and wireless networking. Meanwhile life expectancy has doubled, through a combination of material sufficiency, the development of scientific medicine, and the implementation of public health measures. We’ve started to assert a new control over human biology – we already take for granted control over our reproduction through the contraceptive pill and assisted fertility, and we are beginning to anticipate a future in which we’ll have access to bodily repairs and spare parts, through the promise of tissue engineering and stem cell therapy.

It’s easy to be dazzled by all that technology has achieved, but it’s important to remember that these developments have all been underpinned by a single factor – the availability of easily accessible, concentrated forms of energy. None of this would have happened if we had not been able to fuel our civilisation by extracting black stuff from the ground and burning it. In 1800, the total energy consumption in the UK amounted to about 20 GJ per person per year. By 1900 this figure had increased by more than a factor of five, and today we use 175 GJ. Since this is predominantly in the form of fossil fuels, one graphic way of restating this figure is that it amounts to the equivalent of more than 4 tonnes of oil per person per year[ii].
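The conversion between these units is worth making explicit. A minimal sketch, assuming the standard definition of a tonne of oil equivalent as 41.868 GJ (the function name is my own illustration):

```python
# Sanity-check the per-capita figures above, using the standard
# conversion 1 tonne of oil equivalent (toe) = 41.868 GJ.
TOE_IN_GJ = 41.868

def gj_to_toe(gj_per_year: float) -> float:
    """Convert annual per-capita energy use in GJ to tonnes of oil equivalent."""
    return gj_per_year / TOE_IN_GJ

uk_1800 = gj_to_toe(20)    # roughly half a tonne of oil equivalent per person
uk_today = gj_to_toe(175)  # a little over 4 tonnes, as quoted in the text

print(f"1800: {uk_1800:.1f} toe per person; today: {uk_today:.1f} toe per person")
```

On these numbers, today’s 175 GJ does indeed come out at just over four tonnes of oil equivalent per person per year.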

It’s obvious to everyone that they use fossil fuel energy when they put petrol in their car, or turn the house heating on. But it’s important to appreciate how much energy is embodied in the material things around us, in our built environment and the artefacts we use. It takes a tonne and a quarter of oil to make ten tonnes of cement, and eight and a quarter tonnes of oil to make ten tonnes of steel. For a really energy hungry material like aluminium, it takes nearly four tonnes of oil to produce a single tonne. And if we build with oil, and make things out of oil, in effect we eat oil too, thanks to our reliance on intensive agriculture with its high energy inputs. To grow ten tonnes of wheat (roughly the output of a hectare, in the most favourable circumstances) takes 200 kg of artificial fertiliser, which itself embodies 130 kg of oil, together with another 200 kg of oil in other energy inputs.
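The wheat figures can be tallied up directly. A minimal sketch using the numbers quoted above (the per-kilogram result is my own arithmetic, not a figure from the text):

```python
# Rough tally of the oil embodied in one hectare's wheat crop,
# using the figures quoted in the text (all quantities in kg).
wheat_yield = 10_000        # kg of wheat per hectare, in favourable circumstances
oil_in_fertiliser = 130     # kg of oil embodied in the 200 kg of fertiliser
oil_other_inputs = 200      # kg of oil in other energy inputs

total_oil = oil_in_fertiliser + oil_other_inputs  # 330 kg of oil per hectare
oil_per_kg_wheat = total_oil / wheat_yield        # 0.033 kg of oil per kg of wheat

print(f"{total_oil} kg of oil per hectare, "
      f"or about {oil_per_kg_wheat * 1000:.0f} g of oil per kg of wheat")
```

In other words, every kilogram of bread wheat carries roughly 33 grams of embodied oil before it even leaves the field.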

Some people have the conceit that we’ve moved beyond a dirty old economy of power stations and steel works to a new, weightless economy based on processing information. Nothing could be further from the truth; in addition to our continuing dependence on material things, with their substantial embodiment of energy, information and communications technology itself needs a surprisingly large energy input. The ICT industry in the UK is actually responsible for a share of carbon dioxide emissions comparable to that of aviation. The energy consumption of that giant of the modern information economy, Google, is a closely guarded secret; what is clear, though, is that the location of its data centres is driven by the need to be close to reliable, cheap power, like hydroelectric or nuclear power stations, in much the same way that aluminium smelters are sited.

Perhaps the most complex and interesting relationship is that between energy use and measures of health and physical well-being, like infant mortality and life expectancy. It’s clear, both from the historical record and from the correlation between these figures and energy use in less developed countries today, that there’s a strong relationship between per capita energy use and life expectancy at the lower end of the range. Increasing per capita energy use up to 60 or 70 GJ per year brings substantial benefits, presumably by ensuring that people are reasonably well nourished, and by allowing basic public health measures like access to clean water and a working sewerage system. Further improvements result from increasing energy consumption above this, presumably by enabling increasingly comprehensive medical services, but beyond a per capita consumption of around 110 GJ a year there is very little correlation between energy use and life expectancy. The lesson is that, while material insufficiency is clearly bad for one’s health, beyond a certain point extra consumption brings no further gain.

This emphasis on our dependence on fossil fuel energy should make it clear that, whatever the prospects for exciting new developments in the future, there is a certain fragility to our situation. The large scale use of fossil fuels has come at a price – in man-made climate change – whose full dimensions we don’t yet know, and we are once again seeing pressure on resources like food and fuel. Food shortages and bad harvests remind us that technology hasn’t allowed us to transcend nature – we’re still dependent on the rains arriving at the right time in the right quantity. We’ve influenced the climate, on which we depend, but in ways that are uncontrolled and unpredicted. The lessons of history teach us that a societal collapse is a real possibility, and one of the consequences of this would be an abrupt end to the hopes of further technological progress[iii].

We can hope that these emerging technologies themselves can help avert this kind of disastrous outcome. The only renewable energy source that realistically has the capacity to underpin a large-scale, industrial society is solar energy, but current technologies for harvesting this are too expensive and cannot be produced on anything like the scales needed to make a serious dent in the world’s energy needs. There is a real possibility that nanotechnology will change this situation, making possible the use of solar energy on very large scales. Other developments – for example, in batteries and fuel cells – would then allow us to store and distribute this energy, while we could anticipate a further continuation of the trends that allow us to do more with less, reducing the energy input required to achieve a given level of prosperity.

Computers will probably go on getting faster, with the current exponential growth of computing power (Moore’s law) continuing for perhaps ten more years. After that, we’re relying on new developments in nanotechnology to allow us to keep that trajectory going. Less obvious, but in some ways more interesting, will be the ways computing power becomes seamlessly integrated into the material fabric of life. One of the areas this will impact is medicine; developments in sensors should mean that we diagnose diseases earlier and can personalise treatments to the particularities of an individual’s biology. Therapies, too, will become more effective and less prone to side-effects, thanks to nanoscale delivery devices for targeting drugs and the development of engineered replacement tissues and organs.

So perhaps our optimistic goal for the next fifty years should be that these emerging technologies contribute to making a prosperous global society on a sustainable basis. A stable world population should universally enjoy long and pain-free lives at a decent standard of living, underpinned by sustainable technologies, in particular renewable energy from the sun, and supported by a ubiquitous (but largely invisible) infrastructure of ambient computing, distributed sensing, and responsive materials.

For some, this level of ambition for technology isn’t enough. Instead they seek transcendence through technology and, through human enhancement, our transfiguration to qualitatively different and superior types of beings. It’s the technological trends we’ve discussed already that are invoked to support this view, but with a particularly superlative vision of the potential of technology[iv]. For example, there’s an extrapolation from the existing developments of nanotechnology, via Drexler’s conception of atom-by-atom nanomanufacturing[v], to a world of superabundance, in which any material object is available at no cost. From modern medicine, and the future promise of nanomedicine, there’s the promise of superlongevity – the idea that a “cure” for the “disease” of ageing is imminent, and the serious suggestion that people alive today might live for a thousand years[vi]. From some combination of the development of ever-faster computers and the possibility of the augmentation of human mental capabilities by implants, comes the idea that we will shortly create a greater than human intelligence, either as a purely artificial intelligence in a computer, or through a radical enhancement of a human mind. This superintelligence is anticipated to be the greatest superlative technology of all, as by applying its own intelligence to itself it will be able rapidly and recursively to improve all these technologies, including its own intelligence. This will lead to a moment of ineffably rapid technological and societal change called, by its devotees, the Singularity[vii].

The technical bases for these superlative predictions are strongly contested by researchers in the relevant fields[viii]. This doesn’t seem to have a great deal of impact on the vehemence with which such views are held by those (largely online) communities of transhumanists and singularitarians for whom these shared beliefs define a shared identity. The essentially eschatological character of singularitarian beliefs is obvious – it’s this that is well captured in the dismissive epithet “the rapture of the nerds”. While some proponents of these views have an aggressively rational, atheist outlook, others are explicit in highlighting a spiritual dimension to their belief, in a cosmological outlook that seems to owe something, whether consciously or unconsciously, to the Catholic mystic Teilhard de Chardin[ix]. Belief in the singularity, then, as well as being a symptom of a particular moment of rapid technological change, should perhaps be placed in that tradition of millennial, utopian thinking that’s been a recurring feature in Western thought for many centuries.

For me, the main sin of singularitarianism is one shared much more widely – that is the idea of technological determinism. This is the idea that technology has an autonomous, predictable momentum of its own, largely beyond social and political influence, and that societal and economic changes are governed by these technological developments. It’s the everyday observation of the rapidity of technological change that gives this view such force; what keeps new, faster computers appearing in the shops on schedule is Moore’s law. This is the observation, made in 1965 by Gordon Moore, a co-founder of the microprocessor company Intel, that computer power is growing exponentially, with the number of transistors on a single chip roughly doubling every two years. To futurists like Kurzweil, Moore’s law is simply one example of a more general rule of exponential technological growth. But simply to give Moore’s observation the name “law” is to mistake its character in fundamental ways. It isn’t a law; it is a self-fulfilling prophecy, a way of coordinating and orchestrating the deliberate and planned action of the many independent actors in the semiconductor industry and in commercial and academic research and development, in the pursuit of a common goal of continuous incremental improvement in their products. Moore’s law is not a law describing the way technology develops as some kind of independent force, it is a tool for coordinating and planning human action.
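The arithmetic behind the observation is simple to state. A minimal sketch (the function name and the ten-year horizon are my own illustration, echoing the projection earlier in the text):

```python
# Moore's observation as arithmetic: assuming transistor counts double
# every two years, a chip N years out carries 2**(N/2) times as many.
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Expected multiplication in transistor count after `years`."""
    return 2 ** (years / doubling_period)

# Ten more years of the trend would mean roughly a 32-fold increase...
print(moore_factor(10))  # 32.0
# ...but this only happens if the whole industry keeps coordinating
# around that target; the exponential is a planning tool, not a law of nature.
```

The point is that nothing in this formula is self-executing: each doubling is an engineering goal that thousands of people choose to pursue.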

We need to be very aware that technology need not advance at all; it depends on a set of stable societal and economic arrangements that aren’t by any means guaranteed. If there’s a collapse of society due to resource shortage or runaway climate change, that will bring an abrupt end to Moore’s law and to all kinds of other progress. But a more optimistic view is to assert that we aren’t slaves to technology as an external, autonomous force; instead, technology is a product of society, and our aspiration should be that it is directed by society to promote widely shared goals.

i For an overview, see “Soft Machines: nanotechnology and life”, Richard A.L. Jones, Oxford University Press (2004).

ii An excellent overview of the role of energy in modern society can be found in “Energy in Nature and Society”, Vaclav Smil, MIT Press, Cambridge MA, 2008, on which the subsequent discussion extensively draws.

iii This point is eloquently made by Jared Diamond in “Collapse: how societies choose to fail or succeed”, Viking (2005).

iv This characterisation of the “Superlative technology discourse” owes much to Dale Carrico.

v K.E. Drexler, “Engines of Creation: the coming era of nanotechnology” (Anchor, 1987) and K.E. Drexler, “Nanosystems: molecular machinery, manufacturing and computation” (Wiley, 1992).

vi Aubrey de Grey and Michael Rae, “Ending Aging: the rejuvenation breakthroughs that could reverse human aging in our lifetime” (St Martin’s Press, 2007)

vii Ray Kurzweil, “The Singularity is Near: when humans transcend biology” (Penguin, 2006)

viii See, for example, the essays in a special issue of IEEE Spectrum: “The Singularity: a special report”, June 2008, including my own piece “Rupturing the Nanotech Rapture”. For a critique of proposals for radical life extension, see “Science fact and the SENS agenda”, Warner et al, EMBO Reports 6, 11, 1006-1008 (2005) (subscription required).

ix For an example, consider this quotation from Ray Kurzweil’s “The Singularity is Near”: “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity and infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking”.