Questions and answers

Tomorrow I am going to Birmingham to take part in a citizens’ jury on the use of nanotechnology in consumer products, run by the consumer organisation Which? They are running a feature on nanotechnology in consumer products in the New Year, and in advance of this they asked me, along with a number of other people, some questions. Here are my answers.

How are nanomaterials created?

A wide variety of ways. A key distinction to make is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are hard, covalently bonded clusters of atoms which, fundamentally, can be made in two ways. You can break down bigger particles by milling them, or you can make the particles by a chemical reaction which precipitates them, either from solution or from a vapour (a bit like making smoke with very fine particles). Examples of engineered nanoparticles are the nanoscale titanium dioxide particles used for some sunscreens, and the fullerenes, forms of carbon nanoparticles that can be thought of as well-bred soot. Because nanoparticles have such a huge surface area relative to their mass, it’s often very important to control the properties of their surfaces (if for no other reason than that most nanoparticles have a very strong tendency to want to stick together, thus stopping being nanoparticles and losing the properties you were presumably interested in them having in the first place). So, it would be very common to make the nanoparticle with an outer coating of molecules that might make it less chemically reactive.
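That dominance of surface over bulk is simple geometry: for a sphere, the surface area per unit mass goes as 3/(density × radius), so every tenfold reduction in particle size gives a tenfold increase in surface area for the same amount of material. A quick sketch makes the point (the density is a round figure for rutile titanium dioxide, and the particle sizes are illustrative assumptions, not data from any particular product):

```python
# Surface-area-to-volume scaling for spherical particles.
# The density is a round figure for rutile titanium dioxide; the particle
# sizes are illustrative assumptions, not data from any specific sunscreen.

def specific_surface_area(radius_m, density_kg_m3):
    """Surface area per unit mass (m^2/kg) of a sphere:
    (4*pi*r^2) / (density * 4/3*pi*r^3) = 3 / (density * r)."""
    return 3.0 / (density_kg_m3 * radius_m)

DENSITY = 4230.0  # kg/m^3, roughly rutile TiO2

for radius_nm in (10_000, 100, 10):  # a 10 um pigment grain vs nanoparticles
    ssa = specific_surface_area(radius_nm * 1e-9, DENSITY)
    print(f"radius {radius_nm:>6} nm -> {ssa:>8.0f} m^2 of surface per kg")
```

The thousandfold jump in surface area between a 10 µm grain and a 10 nm particle is why surface chemistry, and surface coatings, dominate nanoparticle behaviour.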

Self-assembly, on the other hand, is a process by which rather soft and mutable nanostructures are formed by particular types of molecules sticking together in small clusters. The classic example of this is soap. Soap molecules have a tail that is repelled from water, and a head that is soluble in water. In a dilute solution in water they make the best of these conflicting tendencies by arranging themselves in clusters of maybe 50 or so molecules, with the headgroups on the outside and the tails shielded from the water in the middle. These nanoparticles are called micelles. Biology relies extensively on self-assembly to construct the nanostructures that all living organisms, including ourselves, are made of. For this reason, most food is naturally nanostructured. For example, in milk, protein molecules called caseins form self-assembled nanoparticles, and traditional operations like cheese-making involve making these nanoparticles stick together to make something more solid. Of course, we don’t call cooking nanotechnology, because we don’t intentionally manipulate the nanostructure of the foods, even if this is what happens without us knowing about it; but, armed with modern techniques for studying the nanoscale structure of matter, people are increasingly seeking to make artificial nanostructures for applications in food and health. An example of an artificial self-assembled nanostructure that’s becoming important in medicine is the liposome (small ones are sometimes called nanosomes) – here one has soap-like molecules that arrange themselves into sheets exactly two molecules thick (a common material would be the phospholipid lecithin, obtained from soya beans, which is currently widely used as a food emulsifier, for example as an important ingredient of chocolate). If one can arrange the sheet to fold round onto itself, you get a micro- or nano-scale bag that you can fill with molecules that you want to protect from the environment (or vice versa).

Can you tell us about the existing and expected applications of developments in nanotechnology in the areas of food and health (including medical applications)?

In food applications, the line between conventional food processing to change the structure and properties of food, and nanotechnology, is rather blurred. For example, it was reported that an ice cream company was using nanotechnology to make low-fat ice cream; this probably involved a manipulation of the size of the natural fat particles in the ice cream. This really isn’t very different from conventional food processing; the only difference is that modern instrumentation makes it possible for the food scientists involved to see what they are doing to the nanoscale structure. This sort of activity will, I’m sure, increase in the future, driven largely by the perceived market demand for more satisfying low-fat food.

One area that is very important in health, and may become important in food, is the idea of wrapping up and delivering particular types of molecules. In medicine, some drugs, particularly the anti-cancer drugs used in chemotherapy, are actually quite toxic and lead to serious side-effects. If the molecules could be wrapped up and only released at the point at which they were needed – the tumour, in the case of an anti-cancer drug – then the side-effects would be much reduced and the drug would be much more effective. This is beginning to happen, with drugs being wrapped up in liposomes for delivery. Another way in which nanotechnology can help in medicine is with drugs which can’t easily be dissolved, and thus can’t be easily introduced into the body. These can be prepared as nanoparticles, in which form the molecules can be absorbed by the body (a new anti-breast-cancer drug – Abraxane – is in this category). In food, additives which are believed to be good for the health (so-called nutraceuticals) may in the future be added to food in this way.

Other applications in health are in fast diagnostic tests. The idea here is that, instead of a GP having to send off a patient’s blood sample for a test to detect certain bacteria or biochemical abnormalities, and having to wait a week or so for the result to come back, nanotechnology would make possible a simple and reliable test that could be done on the spot. Looking further ahead, it’s possible to imagine a device that automatically tested for some abnormality, and then if it detected it automatically released a drug to correct it (for example, a diabetic might have a device implanted under their skin that automatically tested blood sugar levels and released the right amount of insulin in response).

Another area is tissue engineering – the growing of artificial tissues and organs to replace those damaged by disease or injury. Here it’s important to have a “scaffold” on which to grow human cells (ideally the patient’s own cells) in such a way that they form a working organ. Growing replacement skin for burn victims is currently at a fairly advanced state of development.

Are manufacturers required to disclose the presence of nanomaterials on their labelling?

Currently, no.

What are the risks or concerns about using manufactured nanomaterials in health or food products?

There are concerns that some engineered nanoparticles might be more toxic than the same chemical material present in larger particles, both because the increased surface area might make them more reactive, and because they might be able to penetrate into tissues and cells more easily than larger particles.

Are some nanomaterials more risky than others?

This is very likely. Engineered nanoparticles, made from covalently bonded inorganic materials, seem the most likely to cause concern, but even among these it is important to consider each type of nanoparticle individually. Moreover, it may well be that the dangers posed by nanoparticles might be altered by the surface coatings they are given.

Are some applications of nanotechnology more risky than others?

Yes. In my opinion the biggest risk is in the use of engineered nanoparticles in situations in which they could be ingested or breathed in. The control of naturally occurring nanostructure in foods, the use of self-assembled objects like liposomes, and the kind of nanotechnology that is likely to be used in diagnostic devices, should present few if any risks.

In your opinion, should consumers be concerned about the use of manufactured nanomaterials in health or food products?

Somewhat, but not very. The key dangers come from the potential use of engineered nanoparticles without adequate information about their toxicity. At present, food additive regulations don’t generally discriminate by size. For example, a material like titanium dioxide, which is a permitted food additive (E171), could be used in a nanoscale form without additional testing. In principle it is possible to specify permitted size ranges for particles – this is done for microcrystalline cellulose – so this measure should be extended to other materials that could be used in the form of engineered nanoparticles, on the basis of testing that discriminates between particles of different sizes.

If any, what protections need to be put in place?

The government should act on the recommendations of the March 2007 report by the Council for Science and Technology.

On the radio

The BBC World Service program World Business Review devoted yesterday’s program to nanotechnology, with a half-hour discussion between me, Michio Kaku and Peter Kearns from the OECD. I haven’t managed to bring myself to listen to it yet, and as it’s difficult to get a very accurate impression of a radio program while you are recording it I’ll make no comment about it. You can listen to it through the internet from this link (this will work until next Saturday).

Fantastic Voyage vs Das Boot

New Scientist magazine carries a nice article this week about the difficulties of propelling things on the micro- and nano- scales. The online version of the article, by Michelle Knott, is called Fantastic Voyage: travel in the nanoworld (subscription required); we’re asked to “prepare to dive into the nanoworld, where water turns to treacle and molecules the size of cannonballs hurl past from every direction.”

The article refers to our work demonstrating self-motile colloid particles, which I described earlier this year here – Nanoscale swimmers. Also mentioned is the work from Tom Mallouk and Ayusman Sen at Penn State; very recently this team demonstrated an artificial system that shows chemotaxis; that is, it swims in the direction of increasing fuel concentration, just as some bacteria can swim towards food.

The web version of the story has a title that, inevitably, refers back to the classic film Fantastic Voyage, with its archetypal nanobot and magnificent period special effects, in which the nanoscale environment inside a blood vessel looks uncannily like the inside of a lava lamp. The title of the print version, though, Das (nano) Boot, references instead Wolfgang Petersen’s magnificently gloomy and claustrophobic film about a German submarine crew in the second world war – as Knott concludes, riding in nanoscale submarines is going to be a bumpy business.

Home again

I’m back from my week in Ireland, regretting as always that there wasn’t more time to look around. After my visit to Galway, I spent Wednesday in Cork, visiting the Tyndall National Institute and the University, where I gave a talk in the Physics Department. Thursday I spent at the Intel Ireland site at Leixlip, near Dublin; this is the largest Intel manufacturing site outside the USA, but I didn’t see very much of it apart from getting an impression of its massive scale, as I spent the day talking about some rather detailed technical issues. On Friday I was in the Physics department of Trinity College, Dublin.

Ireland combines being one of the richest countries in the world (with a GDP per person higher than both the USA and the UK) with a recent sustained high rate of economic growth. Until relatively recently, though, it had not spent much on scientific research. That’s changed in the last few years; the Government agency Science Foundation Ireland has been investing heavily. This investment has been carried out in a very focused way, concentrating on biotechnology and information technology. The evidence for this investment was very obvious in the places I visited, both in facilities and equipment and in people, with whole teams being brought in in important areas like photonics. The aim is clearly to emulate the success of the other small, rich countries of Europe, like Finland, Sweden, the Netherlands and Switzerland, whose contributions to science and technology are well out of proportion to their size.

Not that there’s a lack of scientific tradition in Ireland, though – the lecture theatre I spoke in at Trinity College was the same one in which Schrödinger delivered his famous series of lectures “What is life?”, and as a keepsake I was given a reprint of the lectures at Trinity given by Richard Helsham and published in 1739, which constitute one of the first textbook presentations of the new Newtonian natural philosophy. My thanks go to the Institute of Physics Ireland, and my local hosts Ray Butler, Sile Nic Chormaic and Cormac McGuinness.


I’m in Ireland for the week, at the invitation of the Institute of Physics Ireland, giving talks at a few universities here. My first stop was at the National University of Ireland, Galway. In addition to the pleasure of spending a bit of time in this very attractive country, it’s always interesting to get a chance to learn what people are doing in the departments one visits. The physics department at Galway is small, but it’s received a lot of investment recently; the Irish government has recently started spending some quite substantial sums on research, recognising the importance of technology to its currently booming economy.

One of the groups at Galway, run by Chris Dainty, does applied optics, and one of the projects I was shown was about using adaptive optics to correct the shortcomings of the human eye. Adaptive optics was originally developed for astronomy (and some defense applications as well) – the idea is to correct for a rapidly changing distortion of an image on the fly, using a mirror whose shape can be changed. Although the astronomical implementations of adaptive optics are very sophisticated and very expensive, we’re starting to see much cheaper implementations of the principle. For example, some DVD players now have an adaptive optics element to correct for DVDs that don’t quite meet specifications. One idea that has excited a number of people is the hope that one might be able to use adaptive optics to achieve better than perfect vision; after all, the eye, considered as an optical system, is very far from perfect, and even after one has corrected the simple failings of focus and astigmatism with glasses there are many higher order aberrations due to the eye’s lens being very far from the perfect shape. The Galway group does indeed have a system that can correct these aberrations, but the lesson from this work isn’t entirely what one might first expect.

What the work shows is that adaptive optics can indeed make a significant improvement to vision, but only in those conditions in which the pupil is dilated. As photographers know, distortions due to imperfections in a lens are most apparent at large apertures, and stopping down the aperture always has the effect of forgiving the lens’s shortcomings. In the case of the eye, in normal daytime conditions the pupil is rather narrow, so adaptive optics only helps if the pupil is dilated, as would happen under the influence of some drugs. At night, of course, the pupil is open wide to let in as much light as possible. So, does adaptive optics help you get super-vision in dark conditions? Actually, it turns out that it doesn’t – in the dark, you form the image with the more sensitive rod cells, rather than the cones that work in brighter light. The rods are more widely spaced, so the sharpness of the image you see at night isn’t limited by the shortcomings of the lens, but by the effective pixel size of the detector. So, it seems that super-vision through adaptive optics is likely to be somewhat less useful than it first appeared.
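The argument in that last step can be put in rough numbers. This sketch uses textbook-style round figures that are my own assumptions, not anything from the Galway work: 550 nm light, an eye focal length of about 17 mm, cone spacing of about 2.5 µm in the fovea, and an effective rod spacing of about 10 µm once the pooling of rod signals is accounted for.

```python
# Rough comparison of what limits the eye's sharpness: the optics or the
# "detector". All numbers are textbook-style round figures, assumed for
# illustration only.

WAVELENGTH = 550e-9       # m, green light
EYE_FOCAL_LENGTH = 17e-3  # m

def diffraction_limit_rad(pupil_diameter_m):
    """Rayleigh criterion: smallest resolvable angle for a perfect lens."""
    return 1.22 * WAVELENGTH / pupil_diameter_m

def sampling_limit_rad(receptor_spacing_m):
    """Angle subtended by one receptor spacing -- the 'pixel size' limit."""
    return receptor_spacing_m / EYE_FOCAL_LENGTH

day = diffraction_limit_rad(3e-3)    # ~3 mm daytime pupil
night = diffraction_limit_rad(8e-3)  # ~8 mm dark-adapted pupil
cones = sampling_limit_rad(2.5e-6)   # foveal cone spacing
rods = sampling_limit_rad(10e-6)     # effective spacing of pooled rods

# At night the wide pupil makes the perfect-lens limit very small, but the
# coarse rod mosaic is several times worse: the detector, not the lens,
# sets the resolution, so correcting aberrations buys little.
print(f"day optics   : {day * 1e3:.3f} mrad")
print(f"night optics : {night * 1e3:.3f} mrad")
print(f"cone sampling: {cones * 1e3:.3f} mrad")
print(f"rod sampling : {rods * 1e3:.3f} mrad")
```

With these assumed figures the rod sampling limit comes out roughly seven times coarser than the wide-pupil diffraction limit, which is the quantitative version of "the pixel size of the detector wins at night".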

Nanotechnology and the developing world

On Wednesday, I spent the day in London, at the headquarters of the think-tank Demos, who were running a workshop on applications of nanotechnology in the developing world. Present were other nano-scientists, people from development NGOs like Practical Action and WaterAid, and industry representatives. I was the last speaker, so I was able to reflect some of the comments from the day’s discussion in my own talk. This, more or less, is what I said:

When people talk about nanotechnology and the developing world, what we generally hear is one of two contrasting views – “nanotechnology can save the developing world” or “nanotechnology will make the rich/poor gap worse”. We need to move beyond this crude counterpoint.

The areas in which nanotechnology has the potential to help the developing world are now fairly well rehearsed. Here’s a typical list –
• Cheap solar power
• Solutions for clean water
• Inexpensive diagnostics
• Drug release
• Active ingredient release – pesticides for control of disease vectors

What these have in common is that in each case you could see in principle that they might make a difference, but it isn’t obvious that they will. Not the least of the reasons for this uncertainty is that we know that many existing technological solutions to obvious and pressing problems – many of them much simpler and more widely available than these promised nanotechnology solutions – haven’t been implemented yet. This is not to say that we don’t need new technology – clearly, on a global scale, we very much do. Throughout the world we are existentially dependent on technology, but the technology we have is not sustainable and must be superseded. Arguably, though, this is more a problem for rich countries.

Amongst the obvious barriers, there is profound ignorance in the scientific/technical communities of the real problems of the developing world, and of the practical realities that can make it hard to implement technological solutions. This was very eloquently expressed by Mark Welland, the director of the Cambridge Nanoscience Centre, who has recently been spending a lot of time working with communities and scientists in Egypt and other middle eastern countries. There are fundamental difficulties in implementing solutions in a market-driven environment. Currently we rely on the market – perhaps with some intervention, by governments, NGOs or foundations, of greater or lesser efficacy – to take developments from the lab into useful products. To put it bluntly, there is a problem in designing a business model for a product whose market consists of people who haven’t got much money, and one of the industry representatives described a technically excellent product whose implementation has been stranded for just this reason.

Ways of getting round this problem include the kind of subsidies and direct market interventions now being tried for the distribution of the new (and expensive) artemisinin-based combination therapies for malaria (see this article in the Economist). The alternative is to put one’s trust in the process of trickle-down innovation, as Jeremy Baumberg called it; this is the hope that technologies developed for rich-country problems might find applications in the developing world. For example, controlled pesticide release technologies marketed to protect Florida homes from termites might find applications in controlling mosquitos, or water purification technology developed for the US military might be transferred to poor communities in arid areas.

Another challenge is the level of locally available knowledge and the capacity to exploit technology in developing countries. One must ensure that technology is robust, scalable and can be maintained with local resources. Mark Welland reminds us that generating local solutions with local manpower, aside from its other benefits, helps build educational capacity in those countries.

On the negative side of the ledger, people point to problems like:
• The further lock-down of innovation through aggressive intellectual property regimes,
• The possibility of environmental degradation due to dumping of toxic nanoparticles
• Problems for developing countries that depend on commodity exports, if new technologies lead to commodity substitution.

These are all issues worth considering, but they aren’t really specific to nanotechnology; they are more general consequences of the way new technology is developed and applied. It’s worth making a few more general comments about the cultures of science and technology.

It needs to be stressed first that science is a global enterprise, and it is a trans-national culture that is not very susceptible to central steering. We’re in an interesting time now, with the growth of new science powers: China and India have received the most headlines, but we shouldn’t neglect other countries like Brazil and South Africa that are consciously emphasising nanotechnology as they develop their science base. Will these countries focus their science efforts on the needs of industrialisation and their own growing middle classes, or does their experience put them in a better position to propose realistic solutions to development problems? Meanwhile, in more developed countries like the UK, it is hard to overstate the emphasis the current political climate puts on getting science to market. The old idea of pure science leading naturally to applied science, which then feeds into wealth-creating technology – the “linear model” – is out of favour both politically and intellectually, and we see an environment in which the idea of “goal-oriented” science is exalted. In the UK this has been construed in a very market-focused way – how can we generate wealth by generating new products? “Users” of research – primarily industry, with some representation from government departments, particularly those in the health and defense sectors – have an increasingly influential voice in setting science policy. One could ask: who represents the potential “users” of research in the developing world?

One positive message is that there is a lot of idealism amongst scientists, young and old, and this idealism is often a major driving force for people taking up a scientific career. The current climate, in which the role of science in underpinning wealth creation is emphasised above all else, isn’t necessarily very compatible with idealism. There is a case for more emphasis on the technology that delivers what people need, as well as what the market wants. In practical terms, many scientists might wish to spend time on work that benefits the developing world, but career pressures and institutional structures make this difficult. So how can we harness the idealism that motivates many scientists, while tempering it with realism about the institutional structures that they live in and understanding the special characteristics that make scientists good at their job?

Less than Moore?

Some years ago, the once-admired BBC science documentary slot Horizon ran a program on nanotechnology. This was preposterous in many ways, but one sequence stands out in my mind. Michio Kaku appeared in front of scenes of rioting and mayhem, opining that “the end of Moore’s Law is perhaps the single greatest economic threat to modern society, and unless we deal with it we could be facing economic ruin.” Moore’s law, of course, is the observation, or rather the self-fulfilling prophecy, that the number of transistors on an integrated circuit doubles about every two years, with corresponding exponential growth in computing power.
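The arithmetic of that doubling is worth spelling out, since it is the compounding that makes the trend so powerful. A trivial sketch (the two-year doubling period is the conventional statement of the law; the 18-month figure is also widely quoted, which is why decade-scale estimates range from roughly thirtyfold to a hundredfold):

```python
# Compounding under Moore's-law-style doubling. The doubling periods are
# the commonly quoted figures, used purely as arithmetic, not a forecast.

def growth_factor(years, doubling_period_years):
    """How many times the transistor count multiplies over `years`."""
    return 2 ** (years / doubling_period_years)

for period in (2.0, 1.5):
    print(f"doubling every {period} years -> "
          f"x{growth_factor(10, period):.0f} in a decade, "
          f"x{growth_factor(40, period):.2e} over forty years")
```

A two-year doubling gives a factor of 32 in a decade and about a millionfold over the forty-year history of the law; it is exactly this exponential that cannot, in the end, be forever.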

As Gordon Moore himself observes in a presentation linked from the Intel site, “No Exponential is Forever … but We can Delay Forever” (2 MB PDF), many people have prematurely written off the semiconductor industry’s ability to maintain, over forty years, its record of delivering a nearly constant year-on-year percentage shrinking of circuits and increase in computing power. Nonetheless, there will be limits to how far the current CMOS-based technology can be pushed. These limits could arise from fundamental constraints of physics or materials science, or from engineering problems like the difficulties of managing the increasingly problematic heat output of densely packed components, or simply from the economic difficulties of finding business models that can make money in the face of the exponentially increasing cost of plant. The question, then, is not if Moore’s law, for conventional CMOS devices, will run out, but when.

What has underpinned Moore’s law is the International Technology Roadmap for Semiconductors, a document which effectively choreographs the research and development required to deliver the continual incremental improvements on our current technology that are needed to keep Moore’s law on track. It’s a document that outlines the requirements for an increasingly demanding series of linked technological breakthroughs as time marches on; somewhere between 2015 and 2020 a crunch comes, with many problems for which solutions look very elusive. Beyond this time, then, there are three possible outcomes. It could be that these problems, intractable though they look now, will indeed be solved, and Moore’s law will continue through further incremental developments. The history of the semiconductor industry tells us that this possibility should not be lightly dismissed; Moore’s law has already been written off a number of times, only for the creativity and ingenuity of engineers and scientists to overcome what seemed like insuperable problems. The second possibility is that a fundamentally new architecture, quite different from CMOS, will be developed, giving Moore’s law a new lease of life, or even permitting a new jump in computer power. This, of course, is the motivation for a number of fields of nanotechnology. Perhaps spintronics, quantum computing, molecular electronics, or new carbon-based electronics using graphene or nanotubes will be developed to the point of commercialisation in time to save Moore’s law. For the first time, the most recent version of the semiconductor roadmap did raise this possibility, so it deserves to be taken seriously. There is much interesting physics coming out of laboratories around the world in this area. But none of these developments is very close to making it out of the lab into a process or a product, so we need to at least consider the third possibility: that no replacement will arrive in time to save Moore’s law.

So what happens if, for the sake of argument, Moore’s law peters out in about ten years’ time, leaving us with computers perhaps one hundred times more powerful than the ones we have now, which then take more than a few years to become obsolete? Will our economies collapse and our streets fill with rioters?

It seems unlikely. Undoubtedly, innovation is a major driver of economic growth, and the relentless pace of innovation in the semiconductor industry has contributed greatly to the growth we’ve seen in the last twenty years. But it’s a mistake to suppose that innovation is synonymous with invention; new ways of using existing inventions can be as great a source of innovation as new inventions themselves. We shouldn’t expect that a period of relatively slow innovation in hardware would mean that there would be no developments in software; on the contrary, as raw computing power becomes less superabundant we’d expect ingenuity in making the most of available power to be greatly rewarded. The economics of the industry would change dramatically, of course. As the development cycle lengthened, the time needed to amortise the huge capital cost of plant would stretch out and the business would become increasingly commoditised. Even as the performance of chips plateaued, their cost would drop, possibly quite precipitously; these would be the circumstances in which ubiquitous computing truly would take off.

For an analogy, one might want to look a century earlier. Vaclav Smil has argued, in his two-volume history of technology of the late nineteenth and twentieth century (Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact and Transforming the Twentieth Century: Technical Innovations and Their Consequences ), that we should view the period 1867 – 1914 as a great technological saltation. Most of the significant inventions that underlay the technological achievements of the twentieth century – for example, electricity, the internal combustion engine, and powered flight – were made in this short period, with the rest of the twentieth century being dominated by the refinement and expansion of these inventions. Perhaps we will, in the future, look back on the period 1967 – 2014, in a similar way, as a huge spurt of invention in information and communication technology, followed by a long period in which the reach of these inventions continued to spread throughout the economy. Of course, this relatively benign scenario depends on our continued access to those things on which our industrial economy is truly existentially dependent – sources of cheap energy. Without that, we truly will see economic ruin.

The uses and abuses of speculative futurism

My post last week – “We will have the power of the gods”, about Michio Kaku’s upcoming TV series – generated a certain amount of heat amongst transhumanists and singularitarians unhappy about my criticism of radical futurism. There’s been a lot of heated discussion on the blog of Dale Carrico, the Berkeley rhetorician who coined the very useful phrase “superlative technology discourse” for this strand of thinking, and who has been subjecting its underpinning cultural assumptions to some sustained criticism, with some robust responses from the transhumanist camp.

Michael Anissimov, founder of the Immortality Institute, has made an extended reply to my post. Michael takes particular issue with my worry that these radical visions of the future are primarily championed by transhumanists who have a “strong, pre-existing attachment to a particular desired outcome”, stating that “transhumanism is not a preoccupation with a narrow range of specific technological outcomes. It looks at the entire picture of emerging technologies, including those already embraced by the mainstream.”

It’s good that Michael recognises the danger of the situation I identify, but some other comments on his blog suggest to me that what he is doing here is, in Carrico’s felicitous phrase, sanewashing the transhumanist and singularitarian movements with which he is associated. He urgently writes in the same post “If any transhumanists do have specific attachments to particular desired outcome, I suggest they drop them — now”, while an earlier post on his blog is entitled Emotional Investment. In that he asks the crucial question: “Should transhumanists be emotionally invested in particular technologies, such as molecular manufacturing, which could radically accelerate the transhumanist project? My answer: for fun, sure. When serious, no.” Michael is perceptive enough to realise the dangers here, but I’m not at all convinced that the same is true of many of his transhumanist fellow-travellers. The key point is that I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible (it’s worth quoting Don Eigler here again: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worse. There may be scientists who feel otherwise, I just haven’t run into them”). The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking.

Meanwhile, Roko, on his blog Transhuman Goodness, expands on comments made to Soft Machines in his post “Raaa! Imagination is banned you foolish transhumanist”. He thinks, not wholly accurately, that what I am arguing against is any kind of futurism: “But I take issue with both Dale and Richard when they want to stop people from letting their imaginations run wild, and instead focus attention only onto things which will happen for certain (or almost for certain) and which will happen soon…. Transhumanists look over the horizon and – probably making many errors – try to discern what might be coming…. If we say that we see something like AGI or Advanced Nanotechnology over that horizon, don’t take it as a certainty… But at least take the idea as a serious possibility….”

Dale Carrico responded at length to this. I want to stress here just one point; my problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.

Anne Corwin, who was interviewed for the Kaku program, makes some very balanced comments that get us closer to the heart of the matter: “most sensible people, I think, realize that utopia and apocalypse are equally unrealistic propositions — but projecting forward our present-day dreams, wishes, hopes, and deep anxieties can still be a useful (and, dare I say, enjoyable) exercise. Just remember that there’s a lot we can do now to help improve things in the world — even in the absence of benevolent nanobot swarms.”

There are two key points here. Firstly, there’s the crucial insight that futurism is not, in fact, about the future at all – it’s about the present and the hopes and fears that people have about the direction society seems to be taking now. This is precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits. The second is that futurism is (or should be) an exercise, or in other words, a thought experiment. Alfred Nordmann reminds us (in If and Then: A Critique of Speculative NanoEthics) that both physics and philosophy have a long history of using improbable scenarios to illuminate deep problems. “Think of Descartes conjuring an evil demon who deceives us about our sense perceptions, think more recently of Thomas Nagel’s infamous brain in a vat.” So, for example, interrogating the thought experiment of a nanofactory that could reduce all matter to the status of software might give us useful insights into the economics of a post-industrial world. But, as Nordmann says, “Philosophers take such scenarios seriously enough to generate insights from them and to discover values that might guide decisions regarding the future. But they do not take them seriously enough to believe them.”

Science journals take on poverty and human development

Science journals around the world are participating in a Global theme issue on poverty and human development; as part of this the Nature group journals are making all their contributions freely available on the web. Nature Nanotechnology is involved, and contributes three articles.

Nanotechnology and the challenge of clean water, by Thembela Hillie and Mbhuti Hlophe, gives a perspective from South Africa on this important theme. Also available is one of my own articles, this month’s opinion column, Thesis. I consider the arguments that are sometimes made that nanotechnology will lead to economic disruptions in developing countries that depend heavily on natural resources. Will, for example, the development of carbon nanotubes as electrical conductors impoverish countries like Zambia that depend on copper mining?

“We will have the power of the gods”

According to a story in the Daily Telegraph today, science has succeeded in its task of unlocking the secrets of matter, and now it’s simply a question of applying this knowledge to fulfill all our wants and dreams. The article is trailing a new BBC TV series fronted by Michio Kaku, who explains that “we are making the historic transition from the age of scientific discovery to the age of scientific mastery in which we will be able to manipulate and mould nature almost to our wishes.”

A series of quotes from “today’s pioneers” covers some painfully familiar ground: nanobot armies will punch holes in the blood vessels of enemy soldiers, leading Nick Bostrom to opine that “In my view, the advanced form of nanotechnology is arguably the greatest existential risk humanity is likely to confront in this century.” Ray Kurzweil tells us that within 10 to 15 years we will be able to “reprogram biology away from cancer, away from heart disease, to really overcome the major diseases that kill us.” Other headlines speak of “an end to aging”, “perfecting the human body” and taking “control over evolution”. At the end, though, it’s loss of control that we should worry about, having succeeded in creating superhuman artificial intelligence: Paul Saffo tells us “There’s a good chance that the machines will be smarter than us. There are two scenarios. The optimistic one is that these new superhuman machines are very gentle and they treat us like pets. The pessimistic scenario is they’re not very gentle and they treat us like food.”

This all offers a textbook example of what Dale Carrico, a rhetoric professor at Berkeley, calls a superlative technology discourse. It starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combatting the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening. This process has been forcefully identified by Alfred Nordmann, a philosopher of science from TU Darmstadt, in his article “If and then: a critique of speculative nanoethics” (PDF). “If we can’t be sure that something is impossible, this is sufficient reason to take its possibility seriously. Instead of seeking better information and instead of focusing on the programs and presuppositions of ongoing technical developments, we are asked to consider the ethical and societal consequences of something that remains incredible.”

What’s wrong with this way of talking about technological futures is that it presents a future which is already determined; people can talk about the consequences of artificial general intelligence with superhuman capabilities, or a universal nano-assembler, but the future existence of these technologies is taken as inevitable. Naturally, this renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups – such as transhumanists and singularitarians – with a strong, pre-existing attachment to a particular desired outcome – an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking.

The difficulty that this situation leaves us in is made clear in another article by Alfred Nordmann – “Ignorance at the heart of science? Incredible narratives on Brain-Machine interfaces”. “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor. And, as Nordmann says: “The views of nay-sayers are not particularly interesting and members of a silent majority don’t have an incentive to invest time and energy just to “set the record straight.” The experts in the limelight of public presentations or media coverage tend to be enthusiasts of some kind or another and there are few tools to distinguish between credible and incredible claims especially when these are mixed up in haphazard ways.”

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science – areas directly relevant to the technologies under discussion – in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.