Fantastic Voyage vs Das Boot

New Scientist magazine carries a nice article this week about the difficulties of propelling things on the micro- and nano-scales. The online version of the article, by Michelle Knott, is called Fantastic Voyage: travel in the nanoworld (subscription required); we’re asked to “prepare to dive into the nanoworld, where water turns to treacle and molecules the size of cannonballs hurtle past from every direction.”

The article refers to our work demonstrating self-motile colloid particles, which I described earlier this year here – Nanoscale swimmers. Also mentioned is the work from Tom Mallouk and Ayusman Sen at Penn State; very recently this team demonstrated an artificial system that shows chemotaxis; that is, it swims in the direction of increasing fuel concentration, just as some bacteria can swim towards food.
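To give a rough feel for why water seems to turn to treacle at these scales, here is a minimal back-of-the-envelope sketch of the Reynolds number for a micron-sized swimmer; all the numbers are order-of-magnitude values assumed purely for illustration, not figures taken from either group’s papers.

```python
# Rough Reynolds-number estimate for a micron-scale swimmer in water.
# All values are illustrative, order-of-magnitude assumptions.
rho = 1000.0     # density of water, kg/m^3
eta = 1.0e-3     # viscosity of water, Pa.s
size = 1.0e-6    # characteristic size of the swimmer (~1 micron), m
speed = 10e-6    # swimming speed (~10 microns per second), m/s

reynolds = rho * speed * size / eta
print(f"Re ~ {reynolds:.0e}")
# ~1e-5: inertia is utterly negligible, so to the swimmer water behaves
# much as treacle would to us.
```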

The web version of the story has a title that, inevitably, refers back to the classic film Fantastic Voyage, with its archetypal nanobot and magnificent period special effects, in which the nanoscale environment inside a blood vessel looks uncannily like the inside of a lava lamp. The title of the print version, though, Das (nano) Boot, references instead Wolfgang Petersen’s magnificently gloomy and claustrophobic film about a German submarine crew in the Second World War – as Knott concludes, riding in nanoscale submarines is going to be a bumpy business.

Home again

I’m back from my week in Ireland, regretting as always that there wasn’t more time to look around. After my visit to Galway, I spent Wednesday in Cork, visiting the Tyndall National Institute and the University, where I gave a talk in the Physics Department. Thursday I spent at the Intel Ireland site at Leixlip, near Dublin; this is the largest Intel manufacturing site outside the USA, but I didn’t see very much of it apart from getting an impression of its massive scale, as I spent the day talking about some rather detailed technical issues. On Friday I was in the Physics department of Trinity College, Dublin.

Ireland combines being one of the richest countries in the world (with a GDP per person higher than both the USA and the UK) with a recent sustained high rate of economic growth. Until relatively recently, though, it did not spend much on scientific research. That’s changed in the last few years; the Government agency Science Foundation Ireland has been investing heavily. This investment has been carried out in a very focused way, concentrating on biotechnology and information technology. The evidence for this investment was very obvious in the places I visited, both in terms of facilities and equipment and in people, with whole teams being brought in in important areas like photonics. The aim is clearly to emulate the success of the other small, rich countries of Europe, like Finland, Sweden, the Netherlands and Switzerland, whose contributions to science and technology are well out of proportion to their size.

Not that there’s a lack of scientific tradition in Ireland, though – the lecture theatre I spoke in at Trinity College was the same one in which Schrödinger delivered his famous series of lectures, What is Life?, and as a keepsake I was given a reprint of the lectures at Trinity given by Richard Helsham and published in 1739, which constitute one of the first textbook presentations of the new Newtonian natural philosophy. My thanks go to the Institute of Physics Ireland, and my local hosts Ray Butler, Sile Nic Chormaic and Cormac McGuinness.

Super-vision

I’m in Ireland for the week, at the invitation of the Institute of Physics Ireland, giving talks at a few universities here. My first stop was at the National University of Ireland, Galway. In addition to the pleasure of spending a bit of time in this very attractive country, it’s always interesting to get a chance to learn what people are doing in the departments one visits. The physics department at Galway is small, but it’s received a lot of investment recently; the Irish government has recently started spending some quite substantial sums on research, recognising the importance of technology to its currently booming economy.

One of the groups at Galway, run by Chris Dainty, does applied optics, and one of the projects I was shown was about using adaptive optics to correct the shortcomings of the human eye. Adaptive optics was originally developed for astronomy (and some defense applications as well) – the idea is to correct for a rapidly changing distortion of an image on the fly, using a mirror whose shape can be changed. Although these implementations of adaptive optics are very sophisticated and very expensive, we’re starting to see much cheaper implementations of the principle. For example, some DVD players now have an adaptive optics element to correct for DVDs that don’t quite meet specifications. One idea that has excited a number of people is the hope that one might be able to use adaptive optics to achieve better than perfect vision; after all, the eye, considered as an optical system, is very far from perfect, and even after one has corrected the simple failings of focus and astigmatism with glasses there are many higher order aberrations due to the eye’s lens being very far from the perfect shape. The Galway group does indeed have a system that can correct these aberrations, but the lesson from this work isn’t entirely what one might first expect.

What the work shows is that adaptive optics can indeed make a significant improvement to vision, but only in conditions in which the pupil is dilated. As photographers know, distortions due to imperfections in a lens are most apparent at large apertures, and stopping down the aperture always has the effect of forgiving the lens’s shortcomings. In the case of the eye, in normal daytime conditions the pupil is rather narrow, so it turns out that adaptive optics only helps if the pupil is dilated, as would happen under the influence of some drugs. Of course, at night the pupil is open wide to let in as much light as possible. So, does adaptive optics help you get super-vision in dark conditions? Actually, it turns out that it doesn’t – in the dark, you form the image with the more sensitive rod cells, rather than the cones that work in brighter light. The rods are more widely spaced, so effectively the sharpness of the image you see at night isn’t limited by the shortcomings of the lens, but by the effective pixel size of the detector. So it seems that super-vision through adaptive optics is likely to be somewhat less useful than it first appeared.
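To make the argument concrete, here is a minimal sketch comparing the eye’s diffraction limit with the angular spacing of the foveal cones; the numbers are rough, textbook-style values assumed for illustration, not data from the Galway group.

```python
# Rough comparison of the eye's diffraction limit with the angular spacing of
# foveal cones. All numbers are approximate, assumed values for illustration.
wavelength = 550e-9       # green light, m
nodal_distance = 17e-3    # approximate posterior nodal distance of the eye, m
cone_spacing = 2.5e-6     # approximate foveal cone spacing, m
ARCSEC_PER_RAD = 206265

def rayleigh_limit(pupil_diameter):
    """Angular resolution (radians) set by diffraction at the pupil."""
    return 1.22 * wavelength / pupil_diameter

cone_angle = cone_spacing / nodal_distance   # angular spacing of the "pixels"

for d in (2e-3, 6e-3):    # daylight pupil vs dilated pupil
    print(f"pupil {d*1e3:.0f} mm: diffraction limit "
          f"{rayleigh_limit(d)*ARCSEC_PER_RAD:.0f} arcsec, "
          f"cone spacing {cone_angle*ARCSEC_PER_RAD:.0f} arcsec")
```

On these rough numbers, a 2 mm daytime pupil is diffraction-limited at around 70 arcseconds, coarser than the roughly 30 arcsecond cone mosaic, so there is little for aberration correction to win back; only with a dilated, 6 mm pupil does the optics have the headroom to out-resolve the cones, which is where adaptive optics pays off. At night the relevant “pixels” are the still more widely spaced rods.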

Nanotechnology and the developing world

On Wednesday, I spent the day in London, at the headquarters of the think-tank Demos, who were running a workshop on applications of nanotechnology in the developing world. Present were other nano-scientists, people from development NGOs like Practical Action and WaterAid, and industry representatives. I was the last speaker, so I was able to reflect some of the comments from the day’s discussion in my own talk. This, more or less, is what I said:

When people talk about nanotechnology and the developing world, what we generally hear is one of two contrasting views – “nanotechnology can save the developing world” or “nanotechnology will make the rich/poor gap worse”. We need to move beyond this crude counterpoint.

The areas in which nanotechnology has the potential to help the developing world are now fairly well rehearsed. Here’s a typical list –
• Cheap solar power
• Solutions for clean water
• Inexpensive diagnostics
• Drug release
• Active ingredient release – pesticides for control of disease vectors

What these have in common is that in each case one can see in principle that they might make a difference, but it isn’t obvious that they will. Not the least of the reasons for this uncertainty is that many existing technological solutions to obvious and pressing problems, many of them much simpler and more widely available than these promised nanotechnology solutions, haven’t been implemented yet. This is not to say that we don’t need new technology – clearly, on a global scale, we very much do. Throughout the world we are existentially dependent on technology, but the technology we have is not sustainable and must be superseded. Arguably, though, this is more a problem for rich countries.

Amongst the obvious barriers, there is profound ignorance in the scientific/technical communities of the real problems of the developing world, and of the practical realities that can make it hard to implement technological solutions. This was very eloquently expressed by Mark Welland, the director of the Cambridge Nanoscience Centre, who has recently been spending a lot of time working with communities and scientists in Egypt and other Middle Eastern countries. There are also fundamental difficulties in implementing solutions in a market-driven environment. Currently we rely on the market – perhaps with some intervention by governments, NGOs or foundations, of greater or lesser efficacy – to take developments from the lab into useful products. To put it bluntly, there is a problem in designing a business model for a product whose market consists of people who haven’t got much money, and one of the industry representatives described a technically excellent product whose implementation has been stranded for just this reason.

Ways of getting round this problem include the kind of subsidies and direct market interventions now being tried for the distribution of the new (and expensive) artemisinin-based combination therapies for malaria (see this article in the Economist). The alternative is to put one’s trust in the process of trickle-down innovation, as Jeremy Baumberg called it; this is the hope that technologies developed for rich-country problems might find applications in the developing world. For example, controlled pesticide release technologies marketed to protect Florida homes from termites might find applications in controlling mosquitoes, or water purification technology developed for the US military might be transferred to poor communities in arid areas.

Another challenge is the level of locally available knowledge and the capacity to exploit technology in developing countries. One must ensure that technology is robust, scalable and can be maintained with local resources. Mark Welland reminds us that generating local solutions with local manpower, aside from its other benefits, helps build educational capacity in those countries.

On the negative side of the ledger, people point to problems like:
• The further lock-down of innovation through aggressive intellectual property regimes
• The possibility of environmental degradation due to the dumping of toxic nanoparticles
• Problems for developing countries that depend on commodity exports, if new technologies lead to commodity substitution

These are all issues worth considering, but they aren’t really specific to nanotechnology; they are more general consequences of the way new technology is developed and applied. It’s worth making a few more general comments about the cultures of science and technology.

It needs to be stressed first that science is a global enterprise, and it is a trans-national culture that is not very susceptible to central steering. We’re in an interesting time now, with the growth of new science powers: China and India have received the most headlines, but we shouldn’t neglect other countries like Brazil and South Africa that are consciously emphasising nanotechnology as they develop their science base. Will these countries focus their science efforts on the needs of industrialisation and their own growing middle classes, or does their experience put them in a better position to propose realistic solutions to development problems? Meanwhile, in more developed countries like the UK, it is hard to overstate the emphasis the current political climate puts on getting science to market. The old idea of pure science leading naturally to applied science that then feeds into wealth-creating technology – the “linear model” – is out of favour both politically and intellectually, and we see an environment in which the idea of “goal-oriented” science is exalted. In the UK this has been construed in a very market-focused way – how can we generate wealth by generating new products? “Users” of research – primarily industry, with some representation from government departments, particularly those in the health and defense sectors – have an increasingly influential voice in setting science policy. One could ask: who represents the potential “users” of research in the developing world?

One positive message is that there is a lot of idealism amongst scientists, young and old, and this idealism is often a major driving force for people taking up a scientific career. The current climate, in which the role of science in underpinning wealth creation is emphasised above all else, isn’t necessarily very compatible with idealism. There is a case for more emphasis on the technology that delivers what people need, as well as what the market wants. In practical terms, many scientists might wish to spend time on work that benefits the developing world, but career pressures and institutional structures make this difficult. So how can we harness the idealism that motivates many scientists, while tempering it with realism about the institutional structures that they live in and understanding the special characteristics that make scientists good at their job?

Less than Moore?

Some years ago, the once-admired BBC science documentary slot Horizon ran a program on nanotechnology. This was preposterous in many ways, but one sequence stands out in my mind. Michio Kaku appeared in front of scenes of rioting and mayhem, opining that “the end of Moore’s Law is perhaps the single greatest economic threat to modern society, and unless we deal with it we could be facing economic ruin.” Moore’s law, of course, is the observation, or rather the self-fulfilling prophecy, that the number of transistors on an integrated circuit doubles about every two years, with corresponding exponential growth in computing power.
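As a reminder of what that compounding looks like, here is a toy sketch of the doubling rule; the 2007 baseline transistor count is my own illustrative assumption rather than an actual chip specification.

```python
# Toy illustration of Moore's law: transistor counts doubling roughly every
# two years. The 2007 baseline of ~4e8 transistors per chip is an assumed,
# purely illustrative figure.
def transistors(year, base_year=2007, base_count=4e8, doubling_time=2.0):
    """Projected transistors per chip, assuming pure exponential growth."""
    return base_count * 2 ** ((year - base_year) / doubling_time)

for year in (2007, 2012, 2017, 2022):
    print(f"{year}: ~{transistors(year):.1e} transistors per chip")
# Each extra two years doubles the count; a decade multiplies it by ~32.
```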

Gordon Moore himself observes, in a presentation linked from the Intel site (2 MB PDF), that “No Exponential is Forever … but We can Delay Forever”. Many people have prematurely written off the semiconductor industry’s ability to maintain, over forty years, a record of delivering a nearly constant year-on-year percentage shrinkage of circuit dimensions and increase in computing power. Nonetheless, there will be limits to how far the current CMOS-based technology can be pushed. These limits could arise from fundamental constraints of physics or materials science, or from engineering problems like the difficulties of managing the increasingly problematic heat output of densely packed components, or simply from the economic difficulties of finding business models that can make money in the face of the exponentially increasing cost of plant. The question, then, is not if Moore’s law, for conventional CMOS devices, will run out, but when.

What has underpinned Moore’s law is the International Technology Roadmap for Semiconductors, a document which effectively choreographs the research and development required to deliver the continual incremental improvements on our current technology that are needed to keep Moore’s law on track. It’s a document that outlines the requirements for an increasingly demanding series of linked technological breakthroughs as time marches on; somewhere between 2015 and 2020 a crunch comes, with many problems for which solutions look very elusive. Beyond this time, then, there are three possible outcomes. It could be that these problems, intractable though they look now, will indeed be solved, and Moore’s law will continue through further incremental developments. The history of the semiconductor industry tells us that this possibility should not be lightly dismissed; Moore’s law has already been written off a number of times, only for the creativity and ingenuity of engineers and scientists to overcome what seemed like insuperable problems. The second possibility is that a fundamentally new architecture, quite different from CMOS, will be developed, giving Moore’s law a new lease of life, or even permitting a new jump in computer power. This, of course, is the motivation for a number of fields of nanotechnology. Perhaps spintronics, quantum computing, molecular electronics, or new carbon-based electronics using graphene or nanotubes will be developed to the point of commercialisation in time to save Moore’s law. For the first time, the most recent version of the semiconductor roadmap did raise this possibility, so it deserves to be taken seriously. There is much interesting physics coming out of laboratories around the world in this area. But none of these developments is very close to making it out of the lab into a process or a product, so we need to at least consider the third possibility: that nothing will arrive in time to save Moore’s law. What happens if, for the sake of argument, Moore’s law peters out in about ten years’ time, leaving us with computers perhaps one hundred times more powerful than the ones we have now, which then take more than a few years to become obsolete? Will our economies collapse and our streets fill with rioters?

It seems unlikely. Undoubtedly, innovation is a major driver of economic growth, and the relentless pace of innovation in the semiconductor industry has contributed greatly to the growth we’ve seen in the last twenty years. But it’s a mistake to suppose that innovation is synonymous with invention; new ways of using existing inventions can be as great a source of innovation as new inventions themselves. We shouldn’t expect that a period of relatively slow innovation in hardware would mean that there would be no developments in software; on the contrary, as raw computing power gets less superabundant we’d expect ingenuity in making the most of available power to be greatly rewarded. The economics of the industry would change dramatically, of course. As the development cycle lengthened, the time needed to amortise the huge capital cost of plant would stretch out and the business would become increasingly commoditised. Even as the performance of chips plateaued, their cost would drop, possibly quite precipitously; these would be the circumstances in which ubiquitous computing truly would take off.

For an analogy, one might want to look a century earlier. Vaclav Smil has argued, in his two-volume history of technology of the late nineteenth and twentieth centuries (Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact and Transforming the Twentieth Century: Technical Innovations and Their Consequences), that we should view the period 1867 – 1914 as a great technological saltation. Most of the significant inventions that underlay the technological achievements of the twentieth century – for example, electricity, the internal combustion engine, and powered flight – were made in this short period, with the rest of the twentieth century being dominated by the refinement and expansion of these inventions. Perhaps we will, in the future, look back on the period 1967 – 2014 in a similar way, as a huge spurt of invention in information and communication technology, followed by a long period in which the reach of these inventions continued to spread throughout the economy. Of course, this relatively benign scenario depends on our continued access to those things on which our industrial economy is truly existentially dependent – sources of cheap energy. Without that, we truly will see economic ruin.

The uses and abuses of speculative futurism

My post last week – “We will have the power of the gods”, about Michio Kaku’s upcoming TV series – generated a certain amount of heat amongst transhumanists and singularitarians unhappy about my criticism of radical futurism. There’s been a lot of heated discussion on the blog of Dale Carrico, the Berkeley rhetorician who coined the very useful phrase “superlative technology discourse” for this strand of thinking, and who has been subjecting its underpinning cultural assumptions to some sustained criticism, with some robust responses from the transhumanist camp.

Michael Anissimov, founder of the Immortality Institute, has made an extended reply to my post. Michael takes particular issue with my worry that these radical visions of the future are primarily championed by transhumanists who have a “strong, pre-existing attachment to a particular desired outcome”, stating that “transhumanism is not a preoccupation with a narrow range of specific technological outcomes. It looks at the entire picture of emerging technologies, including those already embraced by the mainstream.”

It’s good that Michael recognises the danger of the situation I identify, but some other comments on his blog suggest to me that what he is doing here is, in Carrico’s felicitous phrase, sanewashing the transhumanist and singularitarian movements with which he is associated. He urgently writes in the same post “If any transhumanists do have specific attachments to particular desired outcome, I suggest they drop them — now”, while an earlier post on his blog is entitled Emotional Investment. In that he asks the crucial question: “Should transhumanists be emotionally invested in particular technologies, such as molecular manufacturing, which could radically accelerate the transhumanist project? My answer: for fun, sure. When serious, no.” Michael is perceptive enough to realise the dangers here, but I’m not at all convinced that the same is true of many of his transhumanist fellow-travellers. The key point is that I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible (it’s worth quoting Don Eigler here again: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worse. There may be scientists who feel otherwise, I just haven’t run into them”). The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking.

Meanwhile, Roko, on his blog Transhuman Goodness, expands on comments made to Soft Machines in his post “Raaa! Imagination is banned you foolish transhumanist”. He thinks, not wholly accurately, that what I am arguing against is any kind of futurism: “But I take issue with both Dale and Richard when they want to stop people from letting their imaginations run wild, and instead focus attention only onto things which will happen for certain (or almost for certain) and which will happen soon…. Transhumanists look over the horizon and – probably making many errors – try to discern what might be coming…. If we say that we see something like AGI or Advanced Nanotechnology over that horizon, don’t take it as a certainty… But at least take the idea as a serious possibility….”

Dale Carrico responded at length to this. I want to stress here just one point; my problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.

Anne Corwin, who was interviewed for the Kaku program, makes some very balanced comments that get us closer to the heart of the matter: “most sensible people, I think, realize that utopia and apocalypse are equally unrealistic propositions — but projecting forward our present-day dreams, wishes, hopes, and deep anxieties can still be a useful (and, dare I say, enjoyable) exercise. Just remember that there’s a lot we can do now to help improve things in the world — even in the absence of benevolent nanobot swarms.”

There are two key points here. Firstly, there’s the crucial insight that futurism is not, in fact, about the future at all – it’s about the present and the hopes and fears that people have about the direction society seems to be taking now. This is precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits. The second is that futurism is (or should be) an exercise, or in other words, a thought experiment. Alfred Nordmann reminds us (in If and Then: A Critique of Speculative NanoEthics) that both physics and philosophy have a long history of using improbable scenarios to illuminate deep problems. “Think of Descartes conjuring an evil demon who deceives us about our sense perceptions, think more recently of Thomas Nagel’s infamous brain in a vat.” So, for example, interrogating the thought experiment of a nanofactory that could reduce all matter to the status of software might give us useful insights into the economics of a post-industrial world. But, as Nordmann says, “Philosophers take such scenarios seriously enough to generate insights from them and to discover values that might guide decisions regarding the future. But they do not take them seriously enough to believe them.”

Science journals take on poverty and human development

Science journals around the world are participating in a Global theme issue on poverty and human development; as part of this the Nature group journals are making all their contributions freely available on the web. Nature Nanotechnology is involved, and contributes three articles.

Nanotechnology and the challenge of clean water, by Thembela Hillie and Mbhuti Hlophe, gives a perspective from South Africa on this important theme. Also available is one of my own articles, this month’s opinion column, Thesis. I consider the arguments that are sometimes made that nanotechnology will lead to economic disruptions in developing countries that depend heavily on natural resources. Will, for example, the development of carbon nanotubes as electrical conductors impoverish countries like Zambia that depend on copper mining?

“We will have the power of the gods”

According to a story in the Daily Telegraph today, science has succeeded in its task of unlocking the secrets of matter, and now it’s simply a question of applying this knowledge to fulfill all our wants and dreams. The article is trailing a new BBC TV series fronted by Michio Kaku, who explains that “we are making the historic transition from the age of scientific discovery to the age of scientific mastery in which we will be able to manipulate and mould nature almost to our wishes.”

A series of quotes from “today’s pioneers” covers some painfully familiar ground: nanobot armies will punch holes in the blood vessels of enemy soldiers, leading Nick Bostrom to opine that “In my view, the advanced form of nanotechnology is arguably the greatest existential risk humanity is likely to confront in this century.” Ray Kurzweil tells us that within 10 to 15 years we will be able to “reprogram biology away from cancer, away from heart disease, to really overcome the major diseases that kill us.” Other headlines speak of “an end to aging”, “perfecting the human body” and taking “control over evolution”. At the end, though, it’s loss of control that we should worry about, having succeeded in creating superhuman artificial intelligence: Paul Saffo tells us, “There’s a good chance that the machines will be smarter than us. There are two scenarios. The optimistic one is that these new superhuman machines are very gentle and they treat us like pets. The pessimistic scenario is they’re not very gentle and they treat us like food.”

This all offers a textbook example of what Dale Carrico, a rhetoric professor at Berkeley, calls a superlative technology discourse. It starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combatting the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening. This process has been forcefully identified by Alfred Nordmann, a philosopher of science from TU Darmstadt, in his article “If and then: a critique of speculative nanoethics” (PDF). “If we can’t be sure that something is impossible, this is sufficient reason to take its possibility seriously. Instead of seeking better information and instead of focusing on the programs and presuppositions of ongoing technical developments, we are asked to consider the ethical and societal consequences of something that remains incredible.”

What’s wrong with this way of talking about technological futures is that it presents a future which is already determined; people can talk about the consequences of artificial general intelligence with superhuman capabilities, or a universal nano-assembler, but the future existence of these technologies is taken as inevitable. Naturally, this renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups – such as transhumanists and singularitarians – with a strong, pre-existing attachment to a particular desired outcome – an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking.

The difficulty that this situation leaves us in is made clear in another article by Alfred Nordmann – “Ignorance at the heart of science? Incredible narratives on Brain-Machine interfaces”. “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor. And, as Nordmann says: “The views of nay-sayers are not particularly interesting and members of a silent majority don’t have an incentive to invest time and energy just to “set the record straight.” The experts in the limelight of public presentations or media coverage tend to be enthusiasts of some kind or another and there are few tools to distinguish between credible and incredible claims especially when these are mixed up in haphazard ways.”

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science – areas directly relevant to the technologies under discussion – in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.

Graphene and the foundations of physics

Graphite, familiar from pencil leads, is a form of carbon consisting of stacks of sheets, each of which consists of a hexagonal mesh of atoms. The sheets are held together only weakly; this is why graphite is such a good lubricant, and when you run a pencil across a piece of paper the mark is made from rubbed-off sheets. In 2004, Andre Geim, from the University of Manchester, made the astonishing discovery that you could obtain large, near-perfect sheets of graphite only one atom thick, simply by rubbing graphite against a single crystal silicon substrate – these sheets are called graphene. What was even more amazing were the electronic properties of these sheets – they conduct electricity, and the electrons move through the material at great speed and with very few collisions. There’s been a gold-rush of experiments since 2004, uncovering the remarkable physics of this material. All this has been reviewed in a recent article by Geim and Novoselov (Nature Materials, 6 p 183, 2007) – The rise of graphene. (It’s worth taking a look at Geim’s group website, which contains many downloadable papers and articles – Geim is a remarkably creative, original and versatile scientist; besides his discoveries in the graphene field, he’s done very significant work on optical metamaterials and gecko-like nanostructured adhesives, not to mention his notorious frog-levitation exploits.) From the technological point of view, the very high electron mobility of graphene and the possibility of shrinking the dimensions of graphene-based devices right down to atomic dimensions make it very attractive as a candidate for electronics when the further miniaturisation of silicon-based devices stalls.

At the root of much of the strange physics of graphene is the fact that electrons behave in it like highly relativistic, massless particles. This arises from the way the electrons interact with the regular, 2-dimensional lattice of carbon atoms. Normally, when an electron (which we need to think of as a wave, according to quantum mechanics) moves through a lattice of ions, the effect of the way the wave is scattered from the ions, and the scattered waves interfere with each other, is that the electron behaves as if it has a different mass to its real, free-space value. But in graphene the effective mass is zero (the energy is simply proportional to the wave-vector, like a photon, rather than being proportional to the wave-vector squared, as would be the case for a normal non-relativistic particle with mass).
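In symbols – and this is standard textbook physics rather than anything specific to the review discussed above – the contrast is between graphene’s linear dispersion near the Dirac points and the usual quadratic dispersion of an electron with an effective mass:

```latex
E(\mathbf{k}) \approx \hbar v_{F} |\mathbf{k}|
  \quad \text{(graphene, with } v_{F} \sim 10^{6}\ \mathrm{m\,s^{-1}} \text{)}
\qquad \text{versus} \qquad
E(\mathbf{k}) = \frac{\hbar^{2} k^{2}}{2 m^{*}}
  \quad \text{(conventional carrier with effective mass } m^{*} \text{)}.
```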

The weird way in which electrons in graphene mimic ultra-relativistic particles allows one to test predictions of quantum field theory that would be inaccessible to experiments using fundamental particles. Geim writes about this in this week’s Nature, under the provocative title Could string theory be testable? (subscription needed). Graphene is an example where, from the complexity of the interactions between electrons and a 2-d lattice of ions, simple behaviour emerges, that seems to be well described by the theories of fundamental high energy physics. Geim asks “could we design condensed-matter systems to test the supposedly non-testable predictions of string theory too?” The other question to ask, though, is whether what we think of as the fundamental laws of physics, such as quantum field theory, themselves emerge from some complex inner structure that remains inaccessible to us.

Quaint folk notions of nanotechnologists

Most of us get through our lives with the help of folk theories – generalisations about the world that may have some grounding in experience, but which are not systematically checked in the way that scientific theories might be. These theories can be widely shared amongst a group with common interests, and they serve both as lenses through which to view and interpret the world, and as guides to action. Nanotechnologists aren’t exempt from the grip of such folk theories, and Arie Rip, from the University of Twente, one of the leading lights in European science studies, has recently published an analysis of these – Folk theories of nanotechnologists (PDF), (Science as Culture 15 p349 (2006)).

He identifies three clusters of folk theories. The first is the idea that new technologies inevitably follow a “wow-to-yuck” trajectory, in which initial public enthusiasm for the technology is followed by a backlash. The exemplar of this phenomenon is the reaction to genetically modified organisms, which, it is suggested, followed exactly this pattern, with widespread acceptance in the ’70s, then a backlash in the ’80s and ’90s. Rip suggests that this doesn’t at all represent the real story of GMOs, and questions the fundamental characterisation of the public as essentially fickle.

Another folk theory of nanotechnology implies a similar narrative of initial enthusiasm followed by subsequent disillusionment; this is the “cycle of hype” idea popularised by the Gartner group. The idea is that all new technologies are initially accompanied by a flurry of publicity and unrealistic expectations, leading to a “peak of inflated expectations”. This is inevitably followed by disappointment and loss of public interest; the technology then falls into a “trough of disillusionment”. Only then does the technology start to deliver, with a “slope of enlightenment” leading to a “plateau of productivity”, in which the technology does deliver real benefits, albeit less dramatic ones than those initially promised in the first stage of the cycle. Rip regards this as a plausible storyline masquerading as an empirical finding. But the key issue he identifies at its core is the degree to which it is regarded as acceptable – or even necessary – to exaggerate claims about the impact of a technology. In Rip’s view, we have seen a divergence in strategies between the USA and Europe, with advocates of nanotechnology in Europe making much more modest claims (and thus perhaps positioning themselves better for the aftermath of a bubble bursting).

Rip’s final folk theory concerns how nanotechnologists view the public. In his view, nanotechnologists are excessively concerned about public concern, projecting onto the public a fear of the technology out of proportion to what empirical studies actually find. Of course, this is connected to the folk theory about GMOs implicit in the “wow-to-yuck” theory. The most telling example Rip offers is the widespread fear amongst nanotechnology insiders that a film of Michael Crichton’s thriller “Prey” would lead to a major backlash. Rip diagnoses a widespread outbreak of nanophobia-phobia.