The Nottingham Nanotechnology and Nanoscience Centre

Today saw the official opening of the Nottingham Nanotechnology and Nanoscience Centre, which brings together some existing strong research areas across the University. I’ve made the short journey down the motorway from Sheffield to listen to a very high quality programme of talks, with Sir Harry Kroto, co-discoverer of buckminsterfullerene, topping the bill. Also speaking were Don Eigler from IBM (the originator of perhaps the most iconic image in all nanotechnology, the IBM logo made from individual atoms), Colin Humphreys from the University of Cambridge, and Sir Fraser Stoddart from UCLA.

There were some common themes in the first two talks (common, also, with Wade Adams’s talk in Norway, described below). Both talked about the great problems of the world, and looked to nanotechnology to solve them. For Colin Humphreys, the solutions to the problems of sustainable energy and clean water are to be found in the material gallium nitride, or more precisely in the alloys of aluminium, indium and gallium nitride, which allow one to make not just blue light emitting diodes, but LEDs that can emit light of any wavelength between the infra-red and the deep ultra-violet. Gallium nitride based blue LEDs were invented as recently as 1996 by Shuji Nakamura, but this is already a $4 billion market, and everyone will be familiar with the torches and bicycle lights that use them.

How can this help the problem of access to clean drinking water? We should remind ourselves that 10% of world child mortality is directly related to poor water quality, and half the hospital beds in the world are occupied by people with water-related diseases. One solution would be to use deep ultraviolet light to sterilise contaminated water. Deep UV works well for sterilisation because biological organisms never developed a tolerance to these wavelengths, which don’t penetrate the atmosphere. UV at a wavelength of 270 nm does the job well, but existing lamps are not practical, because they need high voltages, are inefficient, and in some cases contain mercury. AlGaN LEDs work well, and in principle they could be powered by solar cells at 4 V, which might allow every household to sterilise its water supply easily and cheaply. The problem is that the efficiency is still too low to treat flowing water. At blue wavelengths (400 nm) efficiency is very good, at 70%, but it drops precipitously at shorter wavelengths, and this is not yet understood theoretically.
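
To put rough numbers on this (my own back-of-envelope sketch, not part of the talk): an LED’s emission wavelength is set by its bandgap, so moving from blue to germicidal deep-UV means moving to wider-gap, higher-aluminium AlGaN alloys.

```python
# Back-of-envelope photon energies (my sketch, not figures from the talk).
# An LED emits photons with energy roughly equal to its bandgap: E = h*c/lambda.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

for label, wavelength_nm in [("blue LED", 400), ("germicidal deep-UV", 270)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{label}: {wavelength_nm} nm -> photon energy {energy_eV:.2f} eV")
# blue LED: 400 nm -> photon energy 3.10 eV
# germicidal deep-UV: 270 nm -> photon energy 4.59 eV
```

A 270 nm photon carries about 4.6 eV, so a forward voltage of around 4–5 V is the right order of magnitude, consistent with the idea of running such an LED from a small solar-cell supply.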

The contribution of solid state lighting to the energy crisis arises from the efficiency of LEDs compared to tungsten light bulbs. People often underestimate the amount of energy used in lighting domestic and commercial buildings. Globally, it accounts for 1,900 megatonnes of CO2; this is 70% of the total emissions from cars, and three times the amount due to aviation. In the UK, lighting amounts to 20% of the electricity generated, and in Thailand, for example, it is even more, at 40%. But tungsten light bulbs, which account for 79% of sales, have an efficiency of only 5%. There is much talk now of banning tungsten light bulbs, but the replacement, the fluorescent light, is not perfect either. Compact fluorescents have an efficiency of 15%, which is an improvement, but what is less well appreciated is that each bulb contains 4 mg of mercury. This would lead to tonnes of mercury ending up in landfills if tungsten bulbs were replaced by compact fluorescents.
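
The mercury claim is easy to sanity-check with some illustrative numbers (the bulb count below is my assumption, not a figure from the talk):

```python
# Illustrative mercury arithmetic (bulb count is an assumed, illustrative figure).
mercury_per_bulb_g = 4e-3      # 4 mg per compact fluorescent, as quoted
bulbs_replaced = 200e6         # assumption: ~200 million bulbs, a UK-ish scale

total_tonnes = mercury_per_bulb_g * bulbs_replaced / 1e6   # grams -> tonnes
print(f"{total_tonnes:.1f} tonnes of mercury")             # ~0.8 tonnes
```

Scale that to the billions of bulbs sold worldwide, over repeated replacement cycles, and tonnes of mercury in landfill is indeed the right order of magnitude.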

Could solid-state lighting do the job? Currently what you can buy are blue LEDs (made from InGaN) which excite a yellow phosphor. The colour balance of these leaves something to be desired; soon we will see blue or UV LEDs exciting red/green/blue phosphors, which will have a much better colour balance (one could also use a combination of red, green and blue LEDs, but currently green efficiencies are too low). The best efficiency in a commercial white LED is 30% (from Seoul Semiconductor), but the best in the lab (Nichia) is currently 50%. The target is an efficiency of 50-80% at high drive currents, which would make them more efficient than the current most efficient light source, the sodium lamp, whose familiar orange glow converts electricity at 45% efficiency. Meeting this target would make white LEDs 10 times more efficient than filament bulbs and 3 times more efficient than compact fluorescents, with no mercury. In the US, replacing 50% of filament bulbs would save 41 GW; in the UK, 100% replacement would save 8 GW of power station capacity. The problem at the moment is cost, but the rapidity of progress in this area makes Humphreys confident that within a few years costs will fall dramatically.
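
The claimed multiples follow directly from the quoted efficiencies:

```python
# Checking the stated efficiency ratios (efficiencies as quoted in the talk).
target_led = 0.50    # lower end of the 50-80% target
tungsten = 0.05
cfl = 0.15
sodium = 0.45

print(f"vs tungsten filament: {target_led / tungsten:.0f}x")   # 10x
print(f"vs compact fluorescent: {target_led / cfl:.1f}x")      # 3.3x
print(f"vs sodium lamp: {target_led / sodium:.2f}x")           # 1.11x
```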

Don Eigler also talked about societal challenges, but with a somewhat different emphasis. His talk was entitled “Nanotechnology: the challenge of a new frontier”. The questions he asked were: what challenges do we face as a society in dealing with this new frontier of nanotechnology, and how should we as a society make decisions about a new technology like nanotechnology?

There are three types of nanotechnology, he said: evolutionary nanotechnology (historically larger technologies that have been shrunk to nanoscale dimensions), revolutionary nanotechnology (entirely new nanometre-scale technologies) and natural nanotechnology (cell biology, offering inspiration for our own technologies). Evolutionary nanotechnologies include semiconductor devices and nanoparticles in cosmetics. Revolutionary nanotechnologies include carbon nanotubes, for potential new logic structures that might supplant silicon, and the IBM millipede data storage system. Natural nanotechnologies include bacterial flagellar motors.

Nanohysteria comes in different varieties too. Type 1 nanohysteria is represented by greed-driven “irrational exuberance”, based on the idea that nanotechnology will change everything very soon, as touted by investment tipsters and consultants who want to take people’s money off them. What’s wrong with this is the absence of critical thought. Type 2 nanohysteria is the opposite – fear-driven irrational paranoia, exemplified by the grey goo scenario of out-of-control self-replicating molecular assemblers or nanobots. What’s wrong with this is, again, the absence of critical thought. Prediction is difficult, but Eigler thinks that self-replicating nanobots are not going to happen any time soon, if ever.

What else do people fear about nanotechnology? Eigler recently met a young person with strong views: nanotech is scary, it will harm the biosphere, it will create new weapons, it is being driven by greedy individuals and corporations – in summary, it is not just wrong, it is evil. Where did these ideas come from? If you look on the web, you see talk of superweapons made from molecular assemblers. What you don’t find on the web are statements like “My grandmother is still alive today because nanotechnology saved her life”. Why is this? Nanotechnology has not yet provided a tangible benefit to grandmothers!

Some candidates are on the way, though. One is gold nanoshell cancer therapy, as developed by Naomi Halas at Rice; this particular therapy may not work in humans, but something similar will. Another example is the work of Sam Stupp at Northwestern, making nanofibres that cause neural progenitor cells to turn into new neurons, not scar tissue, holding out the hope of regenerative medicine to repair spinal cord damage.

As an example of how easy it is to come to wrong conclusions, consider the smallest logic circuit, 12 nm by 17 nm, which Eigler made from carbon monoxide molecules. Carbon monoxide is a deadly poison – shouldn’t we worry about this? Let’s do the sum: 18 CO molecules are needed for one transistor, while a person breathes in some 2 billion trillion CO molecules a day – enough, every day, to make 160 million computers.
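
Running Eigler’s sum explicitly (my reconstruction, using the figures as quoted):

```python
# Reconstructing Eigler's carbon monoxide arithmetic (inputs as quoted).
co_per_day = 2e21        # "2 billion trillion" CO molecules breathed per day
per_transistor = 18      # CO molecules per transistor
computers = 160e6        # Eigler's figure: 160 million computers per day

transistors = co_per_day / per_transistor
print(f"{transistors:.1e} transistors' worth of CO per day")       # ~1.1e20
print(f"{transistors / computers:.1e} transistors per computer")   # ~7e11
```

The implied machine is supercomputer-sized rather than a desktop PC, but the conclusion is insensitive to that choice: the CO tied up in atomic-scale circuitry is utterly negligible beside what we breathe anyway.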

What could the green side of nanotechnology be? We could have better materials that are lighter, stronger and more easily recyclable, and this would reduce energy consumption. Perhaps we can use nanotechnology to reduce the consumption of natural resources and to help recycling. We can’t yet prove that these benefits will follow, but Eigler believes they are likely.

There are real risks from nanotechnology if it is used without evaluating the consequences; the widespread introduction of nanoparticulates into the environment would be an example of this. So how do we know if something is safe? We need to think it through, but we can’t guarantee that anything is absolutely safe. The principles should be that we eliminate fantasies, understand the different motivations that people have, and honestly assess risk and benefit. We need informed discussion that is critical, creative, inclusive and respectful. We need to speak with knowledge and respect, and listen with zeal. Scientists have not always been good at this and we need to get much better. Our best weapons are our traditions of rigorous honesty and our tolerance for diverse beliefs.

A new strategy for UK nanotechnology

It was announced this morning that the Engineering and Physical Sciences Research Council, the lead government agency for funding nanotechnology in the UK, has appointed a new Senior Strategic Advisor for Nanotechnology. This forms part of a new strategy, published (in a distinctly low-key way) earlier this year. The strategy announces some relatively modest increases in funding from the current level, which amounts to around £92 million per year, much of which will be focused on some large-scale “Grand Challenge” projects addressing areas of major societal need.

An editorial (subscription required) in February’s issue of Nature Nanotechnology lays out the challenges that will face the new appointee. By a number of measures, the UK is underperforming in nanotechnology relative to its position in world science as a whole. Given the relatively small sums on offer, focusing on areas of existing UK strength – both academically and in existing industry – is going to be essential, and it’s clear that the pharmaceutical and health-care sectors are strong candidates. Nature Nanotechnology’s advice is clear: “Indeed, getting the biomedical community— including companies — to buy into a national strategy for nanotechnology and health care should be a top priority for the nano champion.”

Optimism and pessimism in Norway

I’m in Bergen, Norway, at a conference, Nanomat 2007, run by the Norwegian Research Council. The opening pair of talks – from Wade Adams, of Rice University, and Jürgen Altmann, from Bochum – presented an interesting contrast of nano-optimism and nano-pessimism. Here are my notes on the two talks, hopefully more or less reflecting what was said without too much editorial alteration.

The first talk was from Wade Adams, the director of Rice University’s Richard E. Smalley Institute, presenting the late Richard Smalley’s message “Nanotechnology and Energy: Be a scientist and save the world”. Adams gave the historical background to Smalley’s interest in energy, which began with a talk from a Texan oilman explaining how rapidly oil and gas were likely to run out. Thinking positively, if one has cheap, clean energy, most of the problems of the world – lack of clean water, food supply, the environment, even poverty and war – are soluble. This was the motivation for Smalley’s focus on clean energy as the top priority for a technological solution. It’s interesting that climate change and greenhouse gases were not a primary motivation for him; on the other hand, he was strongly influenced by Hubbert (see http://www.princeton.edu/hubbert) and his theory of peak oil. Of course, the peak oil theory is controversial (see a recent article in Nature – That’s oil, folks, subscription needed – for an overview of the arguments), but whether oil production has already peaked, as the doomsters suggest, or the peak is postponed to 2030, it’s a problem we will face at some time or other. On the pessimistic side, Adams cited another writer, Matt Simmons, who maintains that oil production in Saudi Arabia – usually considered the reserve of last resort – has already peaked.

Meanwhile, on the demand side, we are looking at increasing pressure. Currently 2 billion people have no electricity, 2 billion people rely on biomass for heating and cooking, the world’s population is still increasing, and large countries such as India and China are industrialising fast. One should also remember that oil has more valuable uses than simply being burnt – it’s the vital feedstock for plastics and all kinds of other petrochemicals.

Summarising the figures, the world (in 2003) consumed energy at a rate of 14 terawatts, the majority in the form of oil. By 2050, we’ll need between 30 and 60 terawatts. This can only happen if there is a dramatic change – for example renewable energy stepping up to deliver serious (i.e. measured in terawatts) amounts of power. How can this happen?

The first place to look is probably efficiencies. In the United States, about 60% of energy is currently simply wasted, so simple measures such as using low energy light bulbs and having more fuel-efficient cars can take us a long way.

On the supply side, we need to be hard-headed about evaluating the claims of various technologies in the light of the quantities needed. Wind is probably good for a couple of terawatts at most, and capacity constraints limit the contribution nuclear can make. To get 10 terawatts of nuclear by 2050 we need roughly 10,000 new plants – that’s one built every two days for the next 40 years, which in view of the recent record of nuclear build seems implausible. The reactors would in any case need to be breeders to avoid the consequent uranium shortage. The current emphasis on the hydrogen economy is a red herring, as hydrogen is not a primary fuel but an energy carrier.
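
The nuclear build-rate arithmetic checks out, if one assumes gigawatt-scale plants (my assumption; the talk didn’t specify):

```python
# Nuclear build-rate check (assumes ~1 GW per plant).
target_tw = 10
plant_gw = 1
years = 40

plants = target_tw * 1000 / plant_gw            # 10,000 plants
days_per_plant = years * 365 / plants
print(f"{plants:.0f} plants, one every {days_per_plant:.1f} days")  # ~1.5 days
```

If anything, “one every two days” is generous: at 1 GW a plant it is closer to one every day and a half.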

The only remaining solution is solar power: 165,000 TW of sunlight hits the earth. The problem is that the sunlight doesn’t arrive in the right places. Smalley’s solution was a new energy grid system, in which energy is transmitted through wires rather than in tankers. To realise this you need better electrical conductors (either carbon nanotubes or superconductors) and electrical energy storage devices. Of course, Rice University is keen on the nanotube solution. The need is to synthesise large amounts of carbon nanotubes which are all of the same structure – the structure that has metallic properties, rather than semiconducting ones. Rice had been awarded $16 million from NASA to develop the scale-up of its process for growing metallic nanotubes by seeded growth, but this grant was cancelled amidst the recent redirection of NASA’s priorities.
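
The headline solar number can be checked from the solar constant and the Earth’s cross-sectional area (standard values; my check, not Adams’s):

```python
# Rough check of the incident-sunlight figure.
import math

solar_constant = 1.37e3   # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6    # m

intercepted_tw = solar_constant * math.pi * earth_radius**2 / 1e12
print(f"{intercepted_tw:,.0f} TW intercepted")   # ~175,000 TW
```

About 175,000 TW arrives at the top of the atmosphere, so once some reflection and absorption is allowed for, the quoted 165,000 TW is the right order – four orders of magnitude more than the 14 TW we currently use.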

Ultimately, Adams was optimistic. In his view, technology will find a solution and it’s more important now to do the politics, get the infrastructure right, and above all to enthuse young people with a sense of mission to become scientists and save the world. His slides can be downloaded here (8.4 MB PDF file).

The second, much more pessimistic, talk was from Jürgen Altmann, a disarmament specialist from Ruhr-Universität Bochum. His title was “Nanotechnology and (International) Society: how to handle the new powerful technologies?” Altmann is a physicist by original training, and is the author of a book, Military nanotechnology: new technology and arms control.

Altmann outlined the ultimate goal of nanotechnology as the full control of the 3-d position of each atom. The role model is the living cell, but the goal goes well beyond this, moving from systems optimised for aqueous environments to those that work in vacuum, at high pressure, in space and so on, limited only by the laws of nature. Altmann alluded to the controversy surrounding Drexler’s vision of nanotechnology, but insisted that no peer-reviewed publication had succeeded in refuting it.

He mentioned the extrapolations of Moore’s law due to Kurzweil, with the prediction that we will have a computer with a human being’s processing power by 2035. He discussed new nanomaterials, such as ultra-strong carbon nanotubes making the space elevator conceivable, before turning to the Drexler vision of mechanosynthesis, leading to a universal molecular assembler, and discussing consequences like space colonies and brain downloading. He highlighted the contrasting utopian and dystopian visions of the outcome – on the one hand, infinitely long life, wealth without work and a clean environment; on the other, the consumption of all organic life by proliferating nanorobots (grey goo).

He connected these visions to transhumanism – the idea that we could and should accelerate human evolution by design – and to the perhaps better accepted notion of converging technologies – NanoBioInfoCogno – which has taken on somewhat different connotations on either side of the Atlantic (Altmann was on the working group which produced the EU document on converging technologies). He foresaw the benefits arising on a 20-year timescale, notably direct broad-band interfaces between brain and machines.

What, then, of the risks? There is the much discussed issue of nanoparticle toxicity. How might nanotechnology affect developing countries – will the advertised benefits really arise? We have seen a mapping of nanotechnology benefits onto the Millennium Development Goals carried out by the Meridian Institute. But this has been criticised, for example by N. Invernizzi (Nanotechnology Law and Business Journal 2, 101 (2005)): high productivity will mean less demand for labour, there might be a tendency to neglect non-technological solutions, and there might be a lack of qualified personnel. He asked what will happen if India and China succeed with nano: will that simply increase internal rich-poor divisions within those countries? The overall conclusion is that socio-economic factors are just as important as the technology.

With respect to military nanotechnology, there are many potential applications, including smaller and faster electronics and sensors, lighter armour and faster armoured vehicles, and miniature satellites, including offensive ones. Many robots will be developed, including nano-robots and biotechnical hybrids – electrode-controlled rats and insects. Medical nanobiotechnology will have military applications: capsules for the controlled release of biological and chemical agents, and mechanisms for targeting agents to specific organs – but also, perhaps, to specific gene patterns or proteins, allowing chemical or biological warfare to be targeted against specific populations.

Military R&D in nanotechnology is mostly done in the USA, where it accounts for between a quarter and a third of federal nanotechnology funding. At the moment the USA spends 4-10 times as much as the rest of the world, but perhaps we can shortly expect other countries with the necessary capacity, like China and Russia, to begin to catch up.

The problem with military nanotechnology, from an arms control point of view, is that limitation and verification are very difficult – much more difficult than the control of nuclear technology. Nano is cheap and widespread, much more like biotechnology, with many non-military uses, and small countries and non-state actors can use high technology. Controlling it would need very intrusive inspection and monitoring – anytime, anyplace. Is this compatible with military interest in secrecy and the fear of industrial espionage?

So, Altmann asks, is the current international system up to this threat? Probably not, he concludes, so we have two alternatives: increasing military and terrorist threats and marked instability, or the organisation of global security in another way, involving some kind of democratic superstate, in which existing states voluntarily accept reduced sovereignty in return for greater security.

Coherent “atoms” in (fairly) warm solids

In 2001, Eric Cornell, Wolfgang Ketterle and Carl Wieman won the Nobel prize for physics for demonstrating the phenomenon of Bose-Einstein condensation in a system of trapped ultra-cold atoms. Bose-Einstein condensation is a remarkable quantum phenomenon in which a collection of particles all occupy the same quantum state. In this condition they are identical and indistinguishable – in effect, the individual atoms have lost their identities and coalesced into a single coherent quantum blob. Now researchers have demonstrated the same phenomenon in a different type of particle – polaritons, confined in a semiconductor nanostructure – at a temperature of 4.2 K. This is not exactly ambient, but it is much more convenient than the temperatures of around 20 nanokelvin needed for the atom experiments.

The experiments, reported in this article in Science (abstract; subscription required for the full article), were done by grad students Ryan Balili and Vincent Hartwell in David Snoke’s group at the University of Pittsburgh, in collaboration with Loren Pfeiffer and Kenneth West from Bell Labs. The basic structure consisted of a semiconductor quantum well sandwiched between a pair of reflectors, each made up of alternating dielectric layers, rather like the one shown in the picture in this earlier post. If a laser is shone into the structure, pairs of electrons and holes are generated; these pairs of charges are bound together by the electrostatic interaction and behave like particles called excitons. Meanwhile, light bounces back and forth between the two mirrors, forming standing wave modes. Energy passes back and forth between these standing-wave photons and the excitons, and the combination forms a quasi-particle called a polariton.

How on earth can one compare an entity that is composed of a complicated set of interactions between light and matter with something simple and elementary like an atom? The answer is rather interesting, and relies on a principle of solid state physics that is fundamental to the subject but little known outside the field. Simple theory tells us how to understand systems composed of entities that don’t interact with each other very much; the first theory of electrons in solids one gets taught simply assumes that the electrons don’t interact with each other at all, which on the face of it is absurd, because they are charged objects which strongly repel each other. It turns out that you can often lump the basic entity together with all its associated interactions into a “quasi-particle”, which behaves just like a simple, quantum mechanical particle. The particle is characterised by an “effective mass” which, in the case of these polaritons, is very much smaller than that of a real atom. It is this very small mass which allows them to form a Bose-Einstein condensate at (relatively) high temperatures.
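
To see why a small mass helps, recall the textbook result for the condensation temperature of an ideal Bose gas (a standard formula, not something specific to this paper):

```latex
% Condensation temperature of an ideal Bose gas of particles of mass m
% at number density n:
\[
  T_c \;=\; \frac{2\pi\hbar^2}{m\,k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}
\]
```

At fixed density, T_c scales as 1/m, so a quasi-particle whose effective mass is many orders of magnitude below that of an atom condenses at a correspondingly higher temperature – which is, roughly, the difference between 20 nanokelvin and 4.2 K.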

This is another great example of how being able to make precisely specified semiconductor nanostructures allows one to tune the interaction between light and matter to produce remarkable new effects. What use could this have in the future? Peter Littlewood, from the Cavendish Laboratory in Cambridge, writes in a commentary in Science (subscription required):

“These objects are, on the one hand, a new kind of low-threshold laser, but the fact that they consist of coherent quantum objects (unlike a regular laser) puts them potentially in the class of quantum devices. A rash speculation is that a small polariton condensate could become the basis for an elementary quantum computer, but the easy coupling to light might simplify the wiring issues that many quantum information technologies find challenging.”

Everyware

This week’s Economist has a very interesting survey of the future of wireless technology, which assesses progress towards ubiquitous computing and “the internet of things” – the idea that in the near future pretty well every artefact will carry its own computing power, able to sense its environment and communicate wirelessly with other artefacts and computer systems. The introductory article and the (rather useful) list of sources and links (including the book by Adam Greenfield – Everyware: The Dawning Age of Ubiquitous Computing – whose title I’ve appropriated for my post) are freely available; for the other seven articles you need a subscription (or you could just buy a copy from the newsstand).

Evolutionary nanotechnology is likely to contribute to these developments in at least two ways; by making possible a wide range of sensors able to detect, for example, very small concentrations of specific chemicals in the environment, and, through technologies like plastic electronics, by making possible the mass-production of rudimentary computing devices at tiny cost. Even with current technology, these developments are sure to raise privacy and security issues, but equally may make possible unimagined benefits in areas such as health and energy efficiency. The Economist’s survey finishes on an uncharacteristically humble note: “There is no saying how it will be used, other than it will surprise us.”

Where should I go to study nanotechnology?

The following is a message from my sponsor… or at least, the institution that pays my salary…

What advice should one give to young people who wish to make a career in nanotechnology? It’s a very technical subject, so you won’t generally get very far without a good degree-level grounding in the basic, underlying science and technology. There are some places where one can study for a first degree in nanotechnology, but in my opinion it’s better to obtain a good first degree in one of the basic disciplines – whether a pure science, like physics or chemistry, or an engineering specialism, like electronic engineering or materials science. Then one can broaden one’s education at the postgraduate level, to get the essential interdisciplinary skills that are vital to make progress in nanotechnology. Finally, of course, one usually needs the hands-on experience of research that most people obtain through the apprenticeship of a PhD.

In the UK, the first comprehensive, Masters-level course in Nanoscale Science and Technology was developed jointly by the Universities of Leeds and Sheffield (I was one of the founders of the course). As the subject has developed and the course has flourished, it has been expanded to offer a range of different options – the Nanotechnology Education Portfolio – nanofolio. Currently, we offer MSc courses in Nanoscale Science and Technology (the original, covering the whole gamut of nanotechnology from the soft to the hard), Nanoelectronics and nanomechanics, Nanomaterials for nanoengineering and Bionanotechnology.

The course website also has a general section of resources that we hope will be useful to anybody interested in nanotechnology, beginning with the all-important question “What is nanotechnology?” Many more resources, including images and videos, will be added to the site over the coming months.

Integrating nanosensors and microelectronics

One of the most talked-about near-term applications of nanotechnology is in nanosensors – devices which can detect the presence of specific molecules at very low concentrations. There are some obvious applications in medicine; one can imagine tiny sensors implanted in one’s body, continuously monitoring the concentration of critical biochemicals, or the presence of toxins and pathogens, allowing immediate corrective action to be taken. A paper in this week’s edition of Nature (editor’s summary here, subscription required for the full article) reports an important step forward – a nanosensor made using a process that is compatible with the standard methods for making integrated circuits (CMOS). This makes it much easier to imagine putting these nanosensors into production and incorporating them in reliable, easy to use systems.

The paper comes from Mark Reed’s group at Yale. The fundamental principle is not new: one applies a voltage across a very thin semiconductor nanowire, and if molecules adsorb at the interface between the nanowire and the solution, there is a change in the electrical charge at the interface. This creates an electric field which changes the electrical conductivity of the nanowire; the amount of current flowing through the wire then tells you how many molecules have stuck to the surface. By coating the surface with molecules that specifically bind the chemical one wants to look for, one can make the sensor specific for that chemical. Clearly, the thinner the wire, the greater the effect of the surface in proportion, hence the need to use nanowires to make very sensitive sensors.
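
A crude toy model (my illustration, not the model used in the paper) shows the scaling: compare the charge adsorbed on the wire’s exposed surface with the carrier charge contained in its cross-section.

```python
# Toy estimate of nanowire-sensor sensitivity (illustrative assumptions throughout).
# Adsorbed molecules contribute an areal charge density sigma on the exposed
# perimeter P; the wire itself carries charge q*n over cross-section A.
# The relative conductance change is roughly induced charge / native charge.
q = 1.602e-19       # elementary charge, C
n = 1e24            # carrier density, m^-3 (assumed doping level)
sigma = q * 1e14    # assumed: one charge per molecule, one molecule per (100 nm)^2

for label, width, thickness in [("nanowire", 50e-9, 25e-9),
                                ("microwire", 1e-6, 100e-9)]:
    P = 2 * (width + thickness)   # exposed perimeter (crudely, all four sides)
    A = width * thickness         # cross-sectional area
    print(f"{label}: dG/G ~ {sigma * P / (q * n * A):.1%}")
# nanowire: dG/G ~ 1.2%
# microwire: dG/G ~ 0.2%
```

The same adsorbed layer gives a signal several times larger on the nanoscale wire, which is the whole motivation for shrinking the cross-section.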

In the past, though, such nanowire sensors have been made by chemical processes, and then painstakingly wired up to the necessary micro-circuit. What the Reed group has done is devise a way of making the nanowire in situ, on the same silicon wafer that is used to make the rest of the circuitry, using the standard techniques that are used to make microprocessors. This makes it possible to envisage scaling up production of these sensors to something like a commercial scale, and integrating them into a complete electronic system.

How sensitive are these devices? In a test case, using a very well known protein-receptor interaction, they were able to detect a specific protein at a concentration of 10 fM – that translates to 6 billion molecules per litre. As expected, small sensors are more sensitive than large ones; a typical small sensor had a nanowire 50 nm wide and 25 nm thick. From the published micrograph, the total size of the sensor is about 20 microns by 45 microns.
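
The concentration conversion is a one-liner:

```python
# 10 femtomolar expressed as molecules per litre.
avogadro = 6.022e23            # molecules per mole
molecules_per_litre = 10e-15 * avogadro
print(f"{molecules_per_litre:.1e} per litre")   # ~6.0e9, i.e. 6 billion
```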

Nature Nanotechnology

I’ve been meaning to write for a while about the new journal from the Nature stable – Nature Nanotechnology (there’s complete free web access to this first edition). I’ve written before about the importance of scientific journals in helping relatively unformed scientific fields to crystallise, and the fact that this journal comes with the imprint of the very significant “Nature” brand means that the editorial policy of this new journal will have a big impact on the way the field unfolds over the next few years.

Nature is, of course, one of the two rivals for the position of the most important and influential science publication in the world; its US rival is Science. While Science is published by the non-profit American Association for the Advancement of Science, Nature, for all its long history, is a ruthlessly commercial operation, run by the British publishing company Macmillan. As such, it has recently been expanding its franchise to include a number of single-subject journals, starting with biological titles like Nature Cell Biology, moving into the physical sciences with Nature Materials and Nature Physics, and now adding Nature Nanotechnology. Given that just about everybody is predicting the end of printed scientific journals in the face of web-based preprint servers and open access models, how, one might ask, do they expect to make money out of this? The answer is an interesting one: it is to emphasise some old-fashioned publishing values, like the importance of a strong editorial hand, the value of selectivity, and the role of design and variety. These journals are nice physical objects, printed on paper of good enough quality to read in the bath, and they have a thick front section, with general interest articles and short reviews, in addition to the highly selective set of research papers at the back of the journal. What the subscriber pays for (and the marketing is heavily aimed at individual subscribers rather than research libraries) is the judgement of the editors in selecting the handful of outstanding papers in their field each month. It seems that the formula has, in the past, been successful, at least to the extent that the Nature journals have consistently climbed to the top of their subject league tables in the impact of the papers they publish.

So how is Nature Nanotechnology going about defining its field? This is an interesting question, in that at first sight there looks to be considerable overlap with existing Nature group journals. Nature Materials, in particular, has already emerged as a leading journal in areas like nanostructured materials and polymer electronics, which are often included in wider definitions of nanotechnology. It’s perhaps too early to be making strong judgements about editorial policy, but the first issue seems to have a strong emphasis on truly nanoscale devices, with a review article on molecular machines, and a lead article describing a SQUID (superconducting quantum interference device) based on a single nanotube. The front material makes a clear statement about the importance of wider societal and environmental issues, with an article from Chris Toumey about the importance of public engagement, and a commentary from Vicki Stone and Ken Donaldson about the relationship between nanoparticle toxicity and oxidative stress.

I should declare an interest, in that I have signed up to write a regular column for Nature Nanotechnology, with my first piece to appear in the November edition. The editor is clearly conscious enough of the importance of new media to give me a contract explicitly stating that my columns shouldn’t also appear on my blog.

Software control of matter at the atomic and molecular scale

The UK’s physical sciences research council, the EPSRC, has just issued a call for an “ideas factory” with the theme “Software control of matter at the atomic and molecular scale”, a topic proposed by Nottingham University nanophysicist Philip Moriarty. The way these programs work is that 20-30 participants, selected from many different disciplines, spend a week trying to think through new and innovative approaches to a very challenging problem. At the end of the process, it is hoped that some definite research proposals will emerge, and £1.5 million (i.e. not far short of US$ 3 million) has been set aside to fund these. The challenge, as defined by the call, is as follows:

“Can we design and construct a device or scheme that can arrange atoms or molecules according to an arbitrary, user-defined blueprint? This is at the heart of the idea of the software control of matter – the creation, perhaps, of a “matter compiler” which will interpret software instructions to output a macroscopic product in which every atom is precisely placed. Even partial progress towards this goal would significantly open up the range of available functional materials, permitting meta-materials with interesting electronic, optoelectronic, optical and magnetic properties.

One route to this goal might be to take inspiration from 3-d rapid prototyping devices, and conceive of some kind of pick-and-place mechanism operating at the atomic or molecular level, perhaps based on scanning probe techniques. On the other hand, the field of DNA nanotechnology gives us examples of complex structures built by self-assembly, in which the program to guide the construction is implicit within the structure of the building blocks themselves. This problem, then, goes beyond surface chemistry and the physics of self-assembly to some fundamental questions in computer science.

This ideas factory should attract surface physicists and chemists, including specialists in scanning probe and nanorobotic techniques, and those with an interest in self-assembling systems. Theoretical chemists, developmental biologists, and computer scientists, for example those interested in agent-based and evolutionary computing methods and emergent behaviour, will also be able to contribute.”

I’d encourage anyone who is eligible to receive EPSRC research funding (i.e. scientists working in UK universities and research institutes, broadly speaking) who is interested in taking part in this event to apply using the form on the EPSRC website. One person who won’t be getting any funding from this is me, because I’ve accepted the post of director of the activity.

A brief update

My frequency of posting has gone down in the last couple of weeks due to a combination of excessive busy-ness and a not wholly successful attempt to catch up with stuff before going on holiday. Here’s a brief overview of some of the things I would have written about if I’d had more time.

The Nanotechnology Engagement Group (which I chair) met last week to sketch out some of the directions of its second policy report, informed in part by an excellent workshop – Terms of Engagement – held in London a few weeks ago. The workshop brought together policy-makers, practitioners of public engagement, members of the public who had been involved in public engagement events about nanotechnology, and scientists, to explore the different expectations and aspirations these different actors have, and the tensions that arise when these expectations aren’t compatible.

The UK government’s funding body for the physical sciences, EPSRC, held a town meeting last week to discuss its new draft nanotechnology strategy. About 50 of the UK’s leading nanoscientists attended. To summarise the mood of the meeting: people were pleased that EPSRC was drawing up a strategy, but they thought that the tentative plan was not nearly ambitious enough. EPSRC and its Strategic Working Group on Nanotechnology (of which I am a member) will be revising the draft strategy in line with these comments, and the result should be presented to EPSRC Council for approval in October.

The last two issues of Nature have much to interest the nanotechnologist. Nanotubes unwrapped introduces the idea of using exfoliated graphite as a reinforcing material in composites; this should produce many of the advantages that people hope for in nanotube composites (but which have so far not fully materialised) at much lower cost. Spintronics at the atomic level describes a very elegant experiment in which a single manganese atom is introduced as a substitutional dopant on a gallium arsenide surface using a scanning tunnelling microscope, to probe its magnetic interactions with its surroundings. This week’s issue also includes a very interesting set of review articles about microfluidics, including pieces by George Whitesides and Harold Craighead, to which there is free access.

Rob Freitas has put together a website for his Nanofactory Collaboration. Having complained on this blog before that my own critique of MNT proposals has been ignored by MNT proponents, I should in fairness recognise that this site has a section about technical challenges which explicitly acknowledges such critiques, with these positive words:
“This list, which is almost certainly incomplete, parallels and incorporates the written concerns expressed in thoughtful commentaries by Philip Moriarty in 2005 and Richard Jones in 2006. We welcome these critiques and would encourage additional constructive commentary – and suggestions for additional technical challenges that we may have overlooked – along similar lines by others.”

Finally, in a not totally unrelated development, the UK’s funding council, EPSRC, will be running an Ideas Factory on the subject of Matter compilation via molecular manufacturing: reconstructing the wheel. The way this program works is that participants spend a week generating new ideas and collaborations, and at the end of it £1.45 million of funding is guaranteed for the best proposals. I’ve been asked to act as the director of this activity, which should take place early in the New Year.