It’s all about metamaterials

A couple of journalists have recently asked me some questions about the EPSRC Ideas Factory on software control of matter that I am directing in January. The obvious question is whether software control of matter – which was defined as “a device or scheme that can arrange atoms or molecules according to an arbitrary, user-defined blueprint” – will be possible. I don’t know the answer to this – in some very limited sense (for example, the self-assembly of nanostructures based on DNA molecules with specified sequences) it is possible now, but whether these very tentative steps can be fully generalised is not yet clear (and if it were clear, there would be no point in having the Ideas Factory). More interesting, perhaps, is the question of what one would do with such a technology if one had it. Would it lead, for example, to the full MNT vision of Drexler, with personal nanofactories based on the principles of mechanical engineering executed with truly atomic precision?

I don’t think so. I’ve written before of the difficulties that this project would face, and I don’t want to repeat that argument here. Instead, I want to argue that this mechanically focused vision of nanotechnology actually misses the biggest opportunity that this level of control over matter would offer – the possibility of precisely controlling the interactions between electrons and light within matter. The key idea here is that of the “metamaterial”, but the potential goes much further than simply designing materials: instead, the prize is the complete erosion of the distinction we have now between a “material” and a “device”.

A “metamaterial” is the name given to a nanoscale arrangement of atoms that gives rise to new electronic, magnetic or optical properties that would not be obtainable in a single, homogeneous material. It’s been known for some time, for example, that structures of alternating layers of different semiconductors can behave, as far as an electron is concerned, as a new material with entirely new semiconducting properties. The confinement of electrons in “quantum dots” – nanoscale particles of semiconductors – profoundly changes the quantum states allowed to an electron, and clever combinations of quantum dots and layered structures yield novel lasers now, and the promise of quantum information processing devices in the future. For light, the natural gemstone opal – formed by the self-assembly of spherical particles in ordered arrays – offers a prototype for metamaterials that interact with light in interesting and useful ways. This field has recently been energised by the theoretical work of John Pendry, at Imperial College, who has demonstrated that in principle arrays of patterned dielectrics and conductors can behave as materials with a negative refractive index.

This notion of optical metamaterials has achieved media notoriety as a route to making “invisibility cloaks” (see this review in Science for a more sober assessment). But the importance of these materials is much more general than that – in principle, if one can arrange the components of the metamaterial with nanoscale precision to some pattern that one calculates, one can guide light to go pretty much anywhere. If you combine this with the ability from semiconductor nanotechnology to manipulate electronic states, and from magnetic nanotechnology to manipulate electron spin, one has the potential for an integrated information technology of huge power. This will probably use not just the charge of the electron, as is done now, but its spin (spintronics) and/or its quantum state (quantum computing). There are, of course, some big ifs here, and I’m far from confident that the required degree of generality, precision and control is possible. But I am sure that if something like a “matter compiler” is possible, its products will be used for manipulating photons and electrons rather than for carrying out fundamentally mechanical operations.

Five challenges for nano-safety

This week’s Nature has a Commentary piece (editor’s summary here, subscription required for full article) from the great and good of nanoparticle toxicology, outlining what they believe needs to be done, in terms of research, to ensure that nanotechnology is developed safely. As they say, “fears over the possible dangers of some nanotechnologies may be exaggerated, but they are not necessarily unfounded,” and without targeted and strategic risk research, public confidence could be lost and innovation held up through fear of litigation.

Their list of challenges is intended to form a framework for research over the next fifteen years; the wishlist is as follows:

  • Develop instruments to assess exposure to engineered nanomaterials in air and water, within the next 3–10 years.
  • Develop and validate methods to evaluate the toxicity of engineered nanomaterials, within the next 5–15 years.
  • Develop models for predicting the potential impact of engineered nanomaterials on the environment and human health, within the next 10 years.
  • Develop robust systems for evaluating the health and environmental impact of engineered nanomaterials over their entire life, within the next 5 years.
  • Develop strategic programmes that enable relevant risk-focused research, within the next 12 months.
Some might think it slightly odd that what amounts to a research proposal is being published in Nature. The authors give a positive reason for stressing this programme now: “Nanotechnology comes at an opportune time in the history of risk research. We have cautionary examples from genetically modified organisms and asbestos industries that motivate a real interest, from all stakeholders, to prevent, manage and reduce risk proactively.” Some indication of the potential downside of failing to be seen to move on this comes from the recent findings of a citizens’ jury on nanotechnology in Germany, reported today here (my thanks to Niels Boeing for bringing this to my attention). These findings seem notably more sceptical than those of similar processes in the UK.

Biological computing on the radio

I’m doing a live interview for the BBC Radio 4 science programme The Material World in a couple of hours, at 4.30 pm UK time. The subject of the segment is biocomputing, and the other guest is the computer scientist and author Martyn Amos, whose blog you can read here; he has just published a nice book on the subject, Genesis Machines. You can listen to the broadcast over the internet, either live or for up to a week afterwards, here.

I’m also doing a Café Scientifique in Mumbai and Kolkata tomorrow, by video link, sponsored by the British Council.

On nanotechnology and biology

The second issue of Nature Nanotechnology is now available online (see here for my comments on the first issue). I think this issue is also free to view, but from next month a subscription will be required.

Among the articles are an overview of nanoelectronics, based on a report from a recent conference, and a nice letter from a Belgian group describing the placement and reaction of individual macromolecules at surfaces using an AFM. The regular opinion column this month is contributed by me, and concerns one of my favourite themes: is it possible to use modern science and engineering techniques to improve on nature, or has evolution already found the best solutions?

Silicon and steel

Two of the most important materials underpinning our industrial society are silicon and steel. Without silicon, the material from which microprocessors and memory chips are made, there would be no cheap computers, and telecommunications would be hugely less powerful and more expensive. Steel is at the heart of most building and civil engineering, making possible both cars and trucks and the roads they run on. So I was struck, while reading Vaclav Smil’s latest book, Transforming the Twentieth Century (about which I may write more later), by some contrasting statistics for the two materials.

In the year 2000, around 846 million tonnes of steel were produced in the world, dwarfing the 20,000 tonne production of pure silicon. In terms of value, the comparison is a little closer – at around $600 a tonne, the annual production of steel was worth about $500 billion, compared to the $1 billion value of silicon. Smil quotes a couple of other statistical nuggets, which may have some valuable lessons for us when we’re considering the possible economic impacts of nanotechnology.
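As a quick sanity check on these figures, here is a minimal sketch using only the numbers quoted above (the ~$50/kg price of polycrystalline silicon is the one given later in this post; all values are approximate):

```python
# Rough check on Smil's year-2000 production figures (numbers from the text).
steel_tonnes = 846e6           # world steel production, tonnes
steel_price_per_tonne = 600    # US dollars per tonne
silicon_tonnes = 20_000        # world production of pure silicon, tonnes
silicon_price_per_kg = 50      # polycrystalline silicon, US dollars per kg

steel_value = steel_tonnes * steel_price_per_tonne            # ~$500 billion
silicon_value = silicon_tonnes * 1000 * silicon_price_per_kg  # ~$1 billion

print(f"steel: ${steel_value / 1e9:.0f} billion; silicon: ${silicon_value / 1e9:.0f} billion")
print(f"mass ratio {steel_tonnes / silicon_tonnes:,.0f}:1, "
      f"value ratio {steel_value / silicon_value:.0f}:1")
```

The point the numbers make: silicon is outproduced by steel by a factor of tens of thousands by mass, but only by a factor of about five hundred by value.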

Steel, of course, has been around a long time as a material, but it’s easy to overlook how significant technological progress in steel-making has been. In 1920, it took the equivalent of 3 hours of labour to make 1 tonne of steel, but by 1999, this figure had fallen to about 11 seconds – a one thousand-fold increase in labour productivity. When people suggest that advanced nanotechnologies may cause social dislocation, by throwing workers in manufacturing and primary industries out of work, they’re fighting yesterday’s battle – this change has already happened.

As for silicon, what’s remarkable about it is how costly it is given the fact that it’s made from sand. One can trace the addition of value through the production chain. Pure quartz costs around 1.7 cents a kilogram; after reduction to metallurgical grade silicon the value has risen to $1.10 a kilo. This is transformed into trichlorosilane, at $3 a kilo, and then after many purification processes one has pure polycrystalline silicon at around $50 a kilo. Single crystal silicon is then grown from this, leading to monocrystalline silicon rod worth more than $500 a kilo, which is then cut up into wafers. One of the predictions one sometimes hears about advanced nanotechnology is that it will be particularly economically disruptive, because it will allow anything to be made from abundant and cheap elements like carbon. But this example shows the extent to which the value of products doesn’t necessarily reflect the cost of the raw ingredients at all. In fact, in cases like this, involving complicated transformations carried out with high-tech equipment, it’s the capital cost of the plant that is most important in determining the cost of the product.
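The value chain can be summarised with a short calculation, a sketch using only the per-kilogram prices quoted above:

```python
# Value added along the silicon production chain (all prices per kg, from the text).
chain = [
    ("pure quartz",             0.017),
    ("metallurgical-grade Si",  1.10),
    ("trichlorosilane",         3.00),
    ("polycrystalline Si",      50.0),
    ("monocrystalline rod",     500.0),
]

# Price multiplier at each processing step.
for (name, price), (_, prev) in zip(chain[1:], chain):
    print(f"{name:<25} ${price:>7.2f}/kg  ({price / prev:.1f}x previous step)")

overall = chain[-1][1] / chain[0][1]
print(f"overall: roughly {overall:,.0f}x the price of the raw quartz")
```

Each step multiplies the price by somewhere between about 3x and 65x, and the cumulative effect is a material worth tens of thousands of times its raw ingredient.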

Nature Nanotechnology

I’ve been meaning to write for a while about the new journal from the Nature stable – Nature Nanotechnology (there’s complete free web access to this first edition). I’ve written before about the importance of scientific journals in helping relatively unformed scientific fields to crystallise, and the fact that this journal comes with the imprint of the very significant “Nature” brand means that its editorial policy will have a big impact on the way the field unfolds over the next few years.

Nature is, of course, one of the two rivals for the position of most important and influential science publication in the world. Its US rival is Science. While Science is published by the non-profit American Association for the Advancement of Science, Nature, for all its long history, is a ruthlessly commercial operation, run by the British publishing company Macmillan. As such, it has recently been expanding its franchise to include a number of single-subject journals, starting with biological titles like Nature Cell Biology, moving into the physical sciences with Nature Materials and Nature Physics, and now adding Nature Nanotechnology. Given that just about everybody is predicting the end of printed scientific journals in the face of web-based preprint servers and open-access models, how, one might ask, do they expect to make money out of this? The answer is an interesting one: it is to emphasise some old-fashioned publishing values, like the importance of a strong editorial hand, the value of selectivity, and the role of design and variety. These journals are nice physical objects, printed on paper of good enough quality to read in the bath, and they have a thick front section, with general-interest articles and short reviews, in addition to the highly selective set of research papers at the back of the journal. What the subscriber pays for (and the marketing is heavily aimed at individual subscribers rather than research libraries) is the judgement of the editors in selecting the handful of outstanding papers in their field each month. It seems that the formula has, in the past, been successful, at least to the extent that the Nature journals have consistently climbed to the top of their subject league tables in the impact of the papers they publish.

So how is Nature Nanotechnology going about defining its field? This is an interesting question, in that at first sight there looks to be considerable overlap with existing Nature group journals. Nature Materials, in particular, has already emerged as a leading journal in areas like nanostructured materials and polymer electronics, which are often included in wider definitions of nanotechnology. It’s perhaps too early to be making strong judgements about editorial policies, but the first issue seems to have a strong emphasis on truly nanoscale devices, with a review article on molecular machines, and the lead article describing a SQUID (superconducting quantum interference device) based on a single nanotube. The front material makes a clear statement about the importance of wider societal and environmental issues, with an article from Chris Toumey about the importance of public engagement, and a commentary from Vicki Stone and Ken Donaldson about the relationship between nanoparticle toxicity and oxidative stress.

I should declare an interest: I have signed up to write a regular column for Nature Nanotechnology, with my first piece to appear in the November edition. The editor is clearly conscious enough of the importance of new media to give me a contract explicitly stating that my columns shouldn’t also appear on my blog.