A culture of improvement

If one wants to comment on the future of technology, it’s a good idea to have some understanding of its history. A new book by Robert Friedel,
A Culture of Improvement: Technology and the Western Millennium, takes on the ambitious task of telling the story of the development of technology in Europe and North America over the last thousand years.

The book is largely a very readable narrative history of technology, with some rather understated broader arguments. One theme is suggested by the title; in Friedel’s view the advance of technology has been driven not so much by the spectacular advances of the great inventors as by a mindset that continually seeks incremental improvements to existing technologies. The famous inventors, the James Watts and Alexander Graham Bells of history, certainly get due space, but there’s also an emphasis on placing the best-known inventions in the context of the less well-known precursor technologies from which they sprang, and on the way engineers and workers continuously improved those technologies once they were introduced. Another theme is the way in which the culture of improvement was locked into place, as it were, by the institutions that promoted technical and scientific education, and by the media that brought new scientific and technical ideas to a wide audience.

This provokes some revision of commonly held ideas about the relationship between science and engineering. In Friedel’s picture, the role of science has been less to provide fundamental discoveries that engineers can convert into practical devices, and more to provide the mental framework that permits the process of incremental improvement. Those who wish to de-emphasise the importance of science for innovation often point to the example of the development of the steam engine – “thermodynamics owes much more to the steam engine than the steam engine owes to thermodynamics”, the saying goes. This of course is true as far as it goes – the academic subject of thermodynamics was founded on Sadi Carnot’s analysis of the steam engines that were already in widespread use, and which had been extensively developed without the benefit of much theoretical knowledge. But it neglects the degree to which an understanding of formal thermodynamics underlay the development of the more sophisticated types of engine that are still in use today. Rudolf Diesel’s efforts to develop the engine that bears his name, and which is now so important, were based on an explicit project to use the thermodynamics he had learned from his professor, Carl von Linde (who also made huge contributions to the technology of refrigeration), to design the most efficient possible internal combustion engine.

Some aspects of the book are open to question. The focus on Europe, and the European offshoots in North America, is justified by the premise that there was something special in this culture that led to the “culture of improvement”; one could argue, though, that the period of unquestioned European technological advantage was a relatively short fraction of the millennium under study (it’s arguable, for example, that China’s medieval technological lead over Europe persisted well into the 18th century). And many will wonder whether technological advances always lead to “improvement”. A chapter on “the corruption of improvement” discusses the application of technology to weapons of mass destruction, but one feels that Friedel’s greatest revulsion is prompted by the outcome of the project to apply the culture of improvement to the human race itself. It’s useful to be reminded that the outcome of this earlier project for “human enhancement” was, particularly in the USA and Scandinavia, a programme of forced sterilisation of those deemed unfit to reproduce that persisted well into living memory. In Germany, of course, this “human enhancement” project moved beyond sterilisation to industrial-scale systematic murder of the disabled and those who were believed to be threats to “racial purity”.

Another UK government statement on nanotechnology

As I mentioned on Wednesday, the UK government took the opportunity of Thursday’s nano-summit organised by the consumer advocate group Which? to release a statement about nanotechnology. The Science Minister’s speech didn’t announce anything new or dramatic – the minister did “confirm our commitment to keep nanotechnology as a Government priority”, though as the event’s chair, Nick Ross, observed, the Government has a great many priorities. The full statement (1.3 MB PDF) is at least a handy summary of what otherwise would be a rather disjointed set of measures and activities.

The other news from the Which? event was the release of the report from their Citizens’ Panel. Some summaries, as well as a complete report, are available from the Which? website. Some flavour of the results can be seen in this summary: “Panellists were generally excited about the potential that nanotechnologies offer and were keen to move ahead with developing them. However, they also recognised the need to balance this with the potential risks. Panellists identified many opportunities for nanotechnologies. They appreciated the range of possible applications and certain specific applications, particularly for health and medicine. The potential to increase consumer choice and to help the environment were also highlighted, along with the opportunity to ‘start again’ by designing new materials with more useful properties. Other opportunities they highlighted were potential economic developments for the UK (and the jobs this might create) and the potential to help developing countries (with food or cheaper energy).” Balanced against this generally positive attitude were concerns about safety, regulation, information, questions about the accessibility of the technology to the poor and the developing world, and worries about possible long-term environmental impacts.

The subject of nanotechnology was introduced at the meeting with this short film.

Which nanotechnology?

It seems likely that nanotechnology will move a little higher up the UK news agenda towards the end of this week – tomorrow sees the launch event for the results of a citizens’ panel run by the consumer group Which?. This will be quite a high profile event, with a keynote speech by the Science Minister, Ian Pearson, outlining current UK nanotechnology policy. It will be the first full statement on nanotechnology at Ministerial level for some time. I’m on the panel responding to the findings, which I will describe tomorrow.

Drew Endy on Engineering Biology

Martyn Amos draws our attention to a revealing interview with MIT’s Drew Endy about the future of synthetic biology. While Craig Venter has up to now monopolised the headlines about synthetic biology, Endy has an original and thought-provoking take on the subject.

Endy is quite clear about his goals: “The underlying goal of synthetic biology is to make biology easy to engineer.” In pursuing this, he looks to the history of engineering, recognising the importance of things like interchangeable parts and standard screw gauges, and seeks a similar library of modular components for biological systems. Of course, this approach must take for granted that when components are put together they behave in predictable ways: “Engineers hate complexity. I hate emergent properties. I like simplicity. I don’t want the plane I take tomorrow to have some emergent property while it’s flying.” Quite right, of course, but since many suspect that life itself is an emergent property one could wonder how much of biology will be left after you’ve taken the emergence out.

Many people will have misgivings about the synthetic biology enterprise, but Endy is an eloquent proponent of the benefits of applying hacker culture to biology: “Programming DNA is more cool, it’s more appealing, it’s more powerful than silicon. You have an actual living, reproducing machine; it’s nanotechnology that works. It’s not some Drexlarian (Eric Drexler) fantasy. And we get to program it. And it’s actually a pretty cheap technology. You don’t need a FAB Lab like you need for silicon wafers. You grow some stuff up in sugar water with a little bit of nutrients. My read on the world is that there is tremendous pressure that’s just started to be revealed around what heretofore has been extraordinarily limited access to biotechnology.”

His answer to societal worries about the technology, then, is a confidence in the power of open-source ideals, common ownership of the intellectual property rather than corporate monopoly, and an assurance that an open technology will automatically be applied to solving pressing societal problems.

There are legitimate questions about this vision of synthetic biology, both as to whether it is possible and whether it is wise. But to get some impression of the strength of the driving forces pushing this way, take a look at this recent summary of trends in DNA synthesis and sequencing. “Productivity of DNA synthesis technologies has increased approximately 7,000-fold over the past 15 years, doubling every 14 months. Costs of gene synthesis per base pair have fallen 50-fold, halving every 32 months.” Whether this leads to synthetic biology in the form anticipated by Drew Endy, the breakthrough into the mainstream of DNA nanotechnology, or something quite unexpected, it’s difficult to imagine this rapid technological development not having far-reaching consequences.
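
As a rough consistency check on those quoted figures, here is a back-of-envelope sketch in Python; the 7,000-fold increase, the 15-year period and the 14-month doubling time are taken from the summary above, and everything else is simple arithmetic:

```python
import math

# Figures quoted in the DNA synthesis and sequencing summary above.
fold_increase = 7_000     # overall productivity increase
period_years = 15         # over this many years
quoted_doubling = 14      # quoted doubling time, in months

# Doubling time implied by a 7,000-fold increase over 15 years.
doublings = math.log2(fold_increase)                    # about 12.8 doublings
implied_doubling = period_years * 12 / doublings
print(f"Implied doubling time: {implied_doubling:.1f} months")   # ~14.1 months

# The same growth expressed as a steady annual improvement.
annual_growth = fold_increase ** (1 / period_years) - 1
print(f"Equivalent annual improvement: {annual_growth:.0%}")      # ~80% per year
```

Reassuringly, the quoted doubling time and the quoted overall increase are consistent with one another.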

Carbon nanotubes as engineering fibres

Carbon nanotubes have become iconic symbols of nanotechnology, promising dramatic new breakthroughs in molecular electronics and holding out the possibility of transformational applications like the space elevator. Another perspective on these materials places them, not as a transformational new technology, but as the continuation of incremental progress in the field of high performance engineering fibres. This perhaps is a less dramatic way of positioning this emerging technology, but it may be more likely to bring economic returns in the short term and thus keep the field moving. A perspective article in the current issue of Science magazine – Making strong fibres (subscription required), by Han Gi Chae and Satish Kumar from Georgia Tech, nicely sets current achievements in developing carbon nanotube based fibres in the context of presently available high strength, high stiffness fibres such as Kevlar, Dyneema, and carbon fibres.

The basic idea underlying all these fibres is the same, and is easy to understand. Carbon-carbon covalent bonds are very strong, so if, in a fibre made from a long-chain molecule, you can arrange for all the molecules to be aligned along the axis of the fibre, then pulling on the fibre means pulling directly on those very strong carbon-carbon bonds. Kevlar is spun from a liquid crystal precursor, in which its long, rather rigid molecules spontaneously line up like pencils in a case, while Dyneema is made from very long polyethylene molecules that are physically pulled out straight during the spinning process. Carbon fibres are typically made by forming a highly aligned fibre from a polymer like polyacrylonitrile, which is then charred to leave graphitic carbon in the form of bundles of sheets, like a rolled-up newspaper. If you could make a perfect bundle of carbon nanotubes, all aligned along the direction of the fibre, it would be almost identical to a carbon fibre chemically, but with a much greater degree of structural perfection. This idea of structural perfection is crucial. The stiffness of a material pretty much directly reflects the strength of the covalent bonds that make it up, but strength is a lot more complicated. In fact, what one needs to explain about most materials is not why they are as strong as they are, but why they are so weak. It is all the defects in materials – and the weak spots they lead to – that mean they rarely get even close to their ideal theoretical strengths. Carbon nanotubes are no different, so the projections of ultra-high strength that underlie ideas like the space elevator are still a long way from being realised in practical fibres.

But maybe we shouldn’t be disappointed by the failure of nanotubes (so far) to live up to these very high expectations, and should instead compare them to existing strong fibres. This has been the approach of Cambridge’s Alan Windle, whose group is probably as far ahead as anyone in developing a practical process for making useful nanotube fibres. Their experimental rig (see this recent BBC news report for a nice description, with videos) draws a fibre out from a chemical vapour deposition furnace, essentially pulling out smoke. The resulting nanotubes are far from being the perfect tubes of the typical computer visualisation, typically looking more like dog-bones than perfect cylinders (see picture below). Their strength is a long way below the ideal values – but it is still 2.5 times greater than that of the strongest currently available fibres. They are very tough as well, suggesting that early applications might be in things like bulletproof vests and flak jackets, for which, sadly, there seems to be growing demand. Another interesting early application of nanotubes highlighted by the Science article is as processing aids for conventional carbon fibres, where it seems that the addition of only 1% of carbon nanotubes to the precursor fibre can increase the strength of the resulting carbon fibre by 64%.

Nanotubes from the Windle group
“Dogbone” carbon nanotubes produced by drawing from a CVD furnace. Transmission electron micrograph by Marcelo Motta, from the Cambridge research group of Alan Windle. First published in M. Motta et al. “High Performance Fibres from ‘Dog-Bone’ Carbon Nanotubes”. Advanced Materials, 19, 3721-3726, 2007.

Scooby Doo, nano too

Howard Lovy returns to his coverage of nanotechnology in popular culture with news of a forthcoming film, Nano Dogs the Movie, in which some lovable family pets acquire super abilities after scoffing some carelessly abandoned nanobots. Not to be outdone, I’ve been conducting my own in-depth cultural research, which has revealed that no less an icon of Saturday morning children’s TV than Scooby Doo has fully entered the nanotechnology age.

In the current retooling of this venerable cartoon, Shaggy and Scooby Doo Get a Clue, the traditional plot standbys (it was the janitor, back-projecting the ghostly figures onto the clouds, and he’d have got away with it if it hadn’t been for those meddling kids) have been swept away, to be replaced by an evil, nanobot-wielding scientist. But the nanobots aren’t all bad; Scooby Doo’s traditionally energising Scooby snacks have themselves been fortified with nanobots, giving him a number of super-dog powers.

I wasn’t able to follow all the plot twists on Sunday morning, as I had to cook the children’s porridge, but it seems that the imprudent nano-scientist had attempted to mis-use his nanobots in order to make his appearance (formerly plump, ageing, balding and with a bad haircut, as you’d expect) more, well, Californian. Naturally, this all ended badly. I’ve seen some less incisive commentaries on the human (or, indeed, canine) enhancement debate.

The rain it raineth on the just

I’m optimistic in general about the prospects of solar energy; as should be well known, the total amount of energy arriving at the earth from the sun is orders of magnitude more than is needed to supply all our energy needs. The problem currently is reducing the price and hugely scaling up the production areas of photovoltaics. But, as I live in the not notoriously sunny country of Britain, someone will always want to make some sarcastic comment about how we’d be better off trying to harvest energy from rain rather than sun here. So I was pleased to see, in the midst of a commentary from Philip Ball on the general concept of scavenging energy from the environment, a reference to generating energy from falling raindrops.

The research, described in a physorg.com article, Rain power: harvesting energy from the sky, was done by Thomas Jager, Romain Guigon, Jean-Jacques Chaillout, and Ghislain Despesse, from Minatec in Grenoble. The original work is described in two articles in the journal Smart Materials and Structures, Harvesting raindrop energy: theory and Harvesting raindrop energy: experimental study (subscription required for full article). The basic idea is very simple; it uses a piezoelectric material, which generates a voltage across its faces when it is deformed, to convert the energy delivered by the impact of a raindrop on a surface into a pulse of electrical current. The material chosen is a polymer called poly(vinylidene fluoride), which is already extensively exploited for its piezoelectric properties in applications such as microphones and loudspeakers.

So, should we abandon plans to coat our roofs with solar cells and instead invest in rain-energy harvesting panels? It’s worth doing a back-of-the-envelope sum. The article claims that a typical raindrop’s velocity is about 3 m/s. Taking Sheffield’s average annual rainfall of about 80 cm, we can estimate the total kinetic energy of the rain landing on a square meter in a year as 3600 J, corresponding to an average power per unit area of little more than a tenth of a milliwatt. This isn’t very impressive; even at these dismal northern latitudes the sun supplies about 100 W per square meter, averaged over the year. So, even accounting for the fact that PVDF is likely to be a lot cheaper than any photovoltaic material in prospect, and that energy conversion efficiencies might be higher, it’s difficult to see any circumstances in which it would make sense to try to collect raindrop energy rather than sunlight.
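
The sum is short enough to set out explicitly. Here it is as a few lines of Python, using only the figures quoted above (a raindrop speed of 3 m/s and 80 cm of rain per year); the density of water and the length of a year are the only other inputs:

```python
# Back-of-the-envelope estimate of the kinetic energy delivered by rain,
# using the figures quoted in the text. Purely illustrative.

rainfall_depth = 0.80          # annual rainfall, m
drop_speed = 3.0               # typical raindrop impact speed, m/s
water_density = 1000.0         # kg per cubic metre
seconds_per_year = 365.25 * 24 * 3600

mass_per_m2 = rainfall_depth * water_density            # 800 kg of rain per m^2 per year
kinetic_energy = 0.5 * mass_per_m2 * drop_speed ** 2    # ~3600 J per m^2 per year
average_power = kinetic_energy / seconds_per_year       # ~1.1e-4 W per m^2

print(f"Kinetic energy per square meter per year: {kinetic_energy:.0f} J")
print(f"Average power per square meter: {average_power * 1000:.2f} mW")
```

Set against the roughly 100 W per square meter of year-averaged sunlight mentioned above, rain comes up short by a factor of nearly a million.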

Mobility at the surface of polymer glasses

Hard, transparent plastics like plexiglass, polycarbonate and polystyrene resemble glasses, and technically that’s what they are – a state of matter that has a liquid-like lack of regular order at the molecular scale, but which still displays the rigidity and lack of ability to flow that we expect from a solid. In the glassy state the polymer molecules are locked into position, unable to slide past one another. If we heat these materials up, they have a relatively sharp transition into a (rather sticky and viscous) liquid state; for both plexiglass and polystyrene this happens around 100 °C, as you can test for yourself by putting a plastic ruler or a (polystyrene) yoghourt pot or plastic cup into a hot oven. But, things are different at the surface, as shown by a paper in this week’s Science (abstract, subscription needed for full paper; see also commentary by John Dutcher and Mark Ediger). The paper, by grad student Zahra Fakhraai and Jamie Forrest, from the University of Waterloo in Canada, demonstrates that nanoscale indentations in the surface of a glassy polymer smooth themselves out at a rate that shows that the molecules near the surface can move around much more easily than those in the bulk.

This is a question that I’ve been interested in for a long time – in 1994 I was the co-author (with Rachel Cory and Joe Keddie) of a paper that suggested that this was the case – Size dependent depression of the glass transition temperature in polymer films (Europhysics Letters, 27 p 59). It was actually a rather practical question that prompted me to think along these lines; at the time I was a relatively new lecturer at Cambridge University, and I had a certain amount of support from the chemical company ICI. One of their scientists, Peter Mills, was talking to me about problems they had making films of PET (whose trade names include Melinex and Mylar) – this is a glassy polymer at room temperature, but sometimes the sheet would stick to itself when it was rolled up after manufacturing. This is very hard to understand if one assumes that the molecules in a glassy polymer aren’t free to move, because to get significant adhesion between polymers one generally needs the string-like molecules to mix across the interface enough to become entangled. Could it be that the chains at the surface had more freedom to move?

We didn’t know how to measure chain mobility directly near a surface, but I did think we could measure the glass transition temperature of a very thin film of polymer. When you heat up a polymer glass, it expands, and at the transition point where it turns into a liquid there’s a jump in the value of the expansion coefficient. So if you heated up a very thin film and measured its thickness, you’d see the transition as a change in slope of the plot of thickness against temperature. We had available to us a very sensitive thickness-measuring technique called ellipsometry, so I thought it was worth a try to do the measurement – if the chains were more free to move at the surface than in the bulk, then we’d expect the transition temperature to decrease as we looked at very thin films, where the surface has a disproportionate effect.
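
To make the idea concrete, here is a minimal sketch of how such a kink can be located in a thickness-versus-temperature plot: fit straight lines on either side of a trial breakpoint and choose the breakpoint that fits best. The numbers below are invented for illustration (they are not our data), and this is not a description of our actual analysis, just the principle of locating a change of slope:

```python
import numpy as np

# Synthetic thickness-vs-temperature data: linear expansion in the glass,
# steeper linear expansion in the liquid, with a kink at Tg = 100 C.
rng = np.random.default_rng(0)
T = np.linspace(30.0, 160.0, 60)                   # temperature, deg C
true_tg = 100.0
thickness = 50.0 + 0.005 * T                       # glassy expansion, nm
thickness += np.where(T > true_tg, 0.02 * (T - true_tg), 0.0)   # extra liquid expansion
thickness += rng.normal(0.0, 0.01, T.size)         # measurement noise

def misfit(t_break):
    """Total squared residual from fitting straight lines below and above t_break."""
    total = 0.0
    for mask in (T < t_break, T >= t_break):
        coeffs = np.polyfit(T[mask], thickness[mask], 1)
        total += np.sum((thickness[mask] - np.polyval(coeffs, T[mask])) ** 2)
    return total

# Try each interior temperature as the breakpoint and keep the best one.
candidates = T[5:-5]
tg_estimate = min(candidates, key=misfit)
print(f"Estimated glass transition temperature: {tg_estimate:.0f} C")
```

Any method that reliably picks out a change of slope would do; the two-segment fit is just the simplest.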

I proposed the idea as a final year project for the physics undergraduates, and a student called Rachel Cory chose it. Rachel was a very able experimentalist, and when she’d got the hang of the equipment she was able to make the successive thickness measurements with a resolution of a fraction of an Ångstrom, as would be needed to see the effect. But early in the new year of 1993 she came to see me to say that the leukemia from which she had been in remission had returned, that no further treatment was possible, but that she was determined to carry on with her studies. She continued to come into the lab to do experiments, obviously getting much sicker and weaker every day, but nonetheless it was a terrible shock when her mother came into the lab on the last day of term to say that Rachel’s fight was over, but that she’d been anxious for me to see the results of her experiments.

Looking through the lab book Rachel’s mother brought in, it was clear that she’d succeeded in making five or six good experimental runs, with films substantially thinner than 100 nm showing clear transitions, and that for the very thinnest films the transition temperatures did indeed seem to be significantly reduced. Joe Keddie, a very gifted young American scientist then working with me as a postdoc, (he’s now a Reader at the University of Surrey) had been helping Rachel with the measurements and followed up these early results with a large-scale set of experiments that showed the effect, to my mind, beyond doubt.

Despite our view that the results were unequivocal, they attracted quite a lot of controversy. A US group made measurements that seemed to contradict ours, and in the absence of any theoretical explanation of them there were many doubters. But by the year 2000, many other groups had repeated our work, and the weight of evidence was overwhelming that the influence of free surfaces led to a decrease in the temperature at which the material changed from being a glass to being a liquid in films less than 10 nm or so in thickness.

But this still wasn’t direct evidence that the chains near the surface were more free to move than they were in the bulk, and this direct evidence proved difficult to obtain. In the last few years a number of groups have produced stronger and stronger evidence that this is the case; Jamie and Zahra’s paper I think nails the final uncertainties, proving that polymer chains in the top few nanometers of a polymer glass really are free to move. Among the consequences of this are that we can’t necessarily predict the behaviour of polymer nanostructures on the basis of their bulk properties; this is going to become more relevant as people try and make smaller and smaller features in polymer resists, for example. What we don’t have now is a complete theoretical understanding of why this should be the case.

Decelerating change?

Everyone knows the first words spoken by a man on the moon, but what were the last words? This isn’t just a good pub quiz question, it’s also an affront to the notion that technological progress moves inexorably forward. To critics of the idea that technology is relentlessly accelerating, the fact that space travel now constitutes a technology that the world has essentially relinquished is a prime argument against the idea of inevitable technological progress. The latest of such critics is David Edgerton, whose book The Shock of the Old is now out in paperback.

Edgerton’s book has many good arguments, and serves as a useful corrective to the technological determinism that characterises quite a lot of discussion about technology. His aim is to give a history of innovation which de-emphasises the importance of invention, and to this end he helpfully draws attention to the importance of those innovations which occur during the use and adaptation of technologies, often quite old ones. One very important thing this emphasis on innovation in use does is bring into focus neglected innovations of the developing world, like the auto-rickshaw of India and Bangladesh and the long-tailed boat of Thailand. This said, I couldn’t help finding the book frequently rather annoying. Its standard rhetorical starting point is to present, generally without any reference, a “standard view” of the history of technology that wouldn’t be shared by anyone who knows anything about the subject: a series of straw men, in other words. This isn’t to say that there aren’t a lot of naive views about technology in wide circulation, but to suggest, for example, that it is the “conventional story” that the atomic bomb was the product of academic science, rather than of the gigantic military-industrial engineering activity of the Manhattan Project, seems particularly far-fetched.

The style of the book is essentially polemical and anecdotal, the statistics that buttress the argument tending to be of the factoid kind (such as the striking assertion that the UK is home to 3.8 million unused fondue sets). In this and many other respects I found it a much less satisfying book than Vaclav Smil’s excellent two-volume history of modern technology, Transforming the Twentieth Century: Technical Innovations and Their Consequences and Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact. These books reach similar conclusions, though Smil’s arguments are supported by substantially more data and carry greater weight for being less self-consciously contrarian.

Smil’s view – and I suspect that Edgerton would share it, though I don’t think he states it so explicitly – is that the period of history that saw the greatest leap forward in technology wasn’t the present, but the thirty or forty years of the late 19th and early 20th century that brought the telephone, the automobile, the aeroplane, electric power, mass production and, most important of all, the Haber-Bosch process. What then of that symbol of what many people think of as the current period of accelerating change – Moore’s law? Moore’s law is an observation about the exponential growth of computer power with time, and one should start with an obvious point about exponential growth – it doesn’t come from accelerating change, but from constant fractional change. If you are able to improve a process by x% a year, you get exponential growth. Moore’s law simply tells us that the semiconductor industry has been immensely successful at implementing incremental improvements to their technology, albeit at a rapid rate. Stated this way, Moore’s law doesn’t seem so out of place in Edgerton’s narrative of technology as being dominated, not by dramatic new inventions, but by many continuous small improvements in technologies old and new. This story, though, also makes clear how difficult it is to predict, before several generations of this kind of incremental improvement, which technologies are destined to have a major and lasting impact and which ones will peter out and disappoint their proponents. For me, therefore, the lesson to take away is not that new developments in science and technology might not have major and lasting impacts on society; it is simply that some humility is needed when one tries to identify in advance what will have a lasting impact and what those impacts will end up being.
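
To put a number on that point about constant fractional change: a fixed percentage improvement each year compounds into exponential growth, with a doubling time set entirely by that percentage. Here is a tiny sketch, with a purely illustrative 41% annual improvement, chosen because it gives roughly the two-year doubling time popularly associated with Moore’s law:

```python
import math

# Constant fractional improvement is exponential growth in disguise.
# The 41% figure is illustrative only, not a measured industry number.
annual_improvement = 0.41

doubling_time = math.log(2) / math.log(1 + annual_improvement)
print(f"Doubling time: {doubling_time:.1f} years")            # ~2.0 years

growth_over_decade = (1 + annual_improvement) ** 10
print(f"Growth over ten years: {growth_over_decade:.0f}x")    # ~31x
```

Nothing accelerates in that calculation; the same fractional improvement is simply applied year after year.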

On December 17th, 1972, Eugene A. Cernan said the last words by a man on the moon: “OK Jack, let’s get this mutha outta here.”

Invisibility cloaks and perfect lenses – the promise of optical metamaterials

The idea of an invisibility cloak – a material which would divert light undetectably around an object – captured the imagination of the media a couple of years ago. For visible light, the possibility of an invisibility cloak remains a prediction, but it graphically illustrates the potential power of a line of research initiated a few years ago by the theoretical physicist Sir John Pendry of Imperial College, London. Pendry realised that building structures with carefully designed internal arrangements of conductors and dielectrics would allow one to make what are, in effect, new materials with very unusual optical properties. The most spectacular of these new metamaterials would have a negative refractive index. In addition to making an invisibility cloak possible, one could in principle use negative refractive index metamaterials to make a perfect lens, allowing one to use ordinary light to image structures much smaller than the limit of a few hundred nanometers currently set by the wavelength of light for ordinary optical microscopy. Metamaterials have been made which operate in the microwave range of the electromagnetic spectrum, but to make an optical metamaterial one needs to be able to fabricate rather intricate structures at the nanoscale. A recent paper in Nature Materials (abstract, subscription needed for full article) describes exciting and significant progress towards this goal. The paper, whose lead author is Na Liu, a student in the group of Harald Giessen at the University of Stuttgart, describes the fabrication of an optical metamaterial, consisting of a regular, three-dimensional array of horseshoe-shaped, sub-micron pieces of gold embedded in a transparent polymer – see the electron micrograph below. This metamaterial doesn’t yet have a negative refractive index, but it shows that a similar structure could have this remarkable property.

An optical metamaterial
An optical metamaterial consisting of split rings of gold in a polymer matrix. Electron micrograph from Harald Giessen’s group at 4. Physikalisches Institut, Universität Stuttgart.

To get a feel for how these things work, it’s worth recalling what happens when light goes through an ordinary material. Light, of course, consists of electromagnetic waves, so as a light wave passes a point in space there’s a rapidly alternating electric field there, and any charged particle at that point will feel a force from this alternating field. This leads to something of a paradox – when light passes through a transparent material, like glass or a clear crystal, it seems at first that the light isn’t interacting very much with the material. But since the material is full of electrons and positive nuclei, this can’t be right – all the charged particles in the material must be being wiggled around, and as they are wiggled around they in turn must behave like little aerials and emit electromagnetic radiation themselves. The solution to the paradox comes when one realises that all these waves emitted by the wiggled electrons interfere with each other, and it turns out that the net effect is a wave propagating forward in the same direction as the light that’s travelling through the material, only with a somewhat different velocity. It’s the ratio of this effective velocity in the material to the velocity the wave would have in free space that defines the refractive index. Now, in a structure like the one in the picture, we have sub-micron shapes of a metal, which is an electrical conductor. When such a shape sees the oscillating electric field of an incident light wave, the free electrons in the metal slosh around in a collective oscillation called a plasmon mode. These plasmons generate both electric and magnetic fields, whose behaviour depends very sensitively on the size and shape of the object in which the electrons are sloshing around (to be strictly accurate, the plasmons are restricted to the region near the surface of the object; it’s the geometry of the surface that matters). If you design the geometry right, you can find a frequency at which both the magnetic and the electric fields generated by the motion of the electrons are out of phase with the fields in the light wave that excites the plasmons – this is the condition for the negative refractive index which is needed for perfect lenses and the other exciting possibilities.
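
To see how that “out of phase” condition translates into a negative refractive index, here is a purely illustrative sketch. It assumes simple Lorentz-oscillator forms for the effective electric and magnetic responses of a hypothetical metamaterial (the resonance frequencies, strengths and damping below are invented numbers, not parameters of the Stuttgart structure), and looks for the frequency band in which the real parts of both responses go negative:

```python
import numpy as np

def lorentz(omega, omega0, strength, damping):
    """Generic resonant response of a driven oscillator (Lorentz model)."""
    return 1 + strength * omega0**2 / (omega0**2 - omega**2 - 1j * damping * omega)

omega = np.linspace(0.5, 2.0, 2000)        # frequency, arbitrary units

# Invented electric and magnetic resonances for an imaginary metamaterial.
eps = lorentz(omega, omega0=1.00, strength=0.8, damping=0.02)   # permittivity
mu = lorentz(omega, omega0=1.05, strength=0.6, damping=0.02)    # permeability

# The "double negative" band: both responses out of phase with the drive.
double_negative = (eps.real < 0) & (mu.real < 0)
band = omega[double_negative]
print(f"Re(eps) and Re(mu) both negative from omega = {band.min():.2f} to {band.max():.2f}")

# In the idealised lossless limit the refractive index in that band is
# n = -sqrt(eps * mu): negative, which is what a perfect lens relies on.
n_band = -np.sqrt(eps[double_negative].real * mu[double_negative].real)
print(f"Refractive index across the band: {n_band.min():.2f} to {n_band.max():.2f}")
```

Outside that band the index is perfectly ordinary; the unusual behaviour appears only where the electric and magnetic responses are simultaneously out of phase with the driving field.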

The metamaterial shown in the diagram has a perfectly periodic pattern, and this is what’s needed if you want a uniform plane wave arriving at the material to excite another uniform plane wave. But, in principle, you should be able to design a metamaterial that isn’t periodic, to direct and concentrate light any way you like on length scales well below the wavelength of light. Some of the possibilities this might lead to were discussed in an article in Science last year, Circuits with Light at Nanoscales: Optical Nanocircuits Inspired by Metamaterials (abstract, subscription required for full article), by Nader Engheta at the University of Pennsylvania. If we can learn how to make precisely specified, non-periodic arrays of metallic, dielectric and semiconducting shaped elements, we should be able to direct light waves where we want them to go on the nanoscale – well below light’s wavelength. This might allow us to store information, to process information in all-optical computers, to interact with electrons in structures like quantum dots for quantum computing applications, to image structures using light down to the molecular level, and to detect individual molecules with great sensitivity. I’ve said this before, but I’m more and more convinced that this is a potential killer application for advanced nanotechnology – if one really could place atoms in arbitrary, pre-prescribed positions with nanoscale accuracy, this is what one could do with the resulting materials.