When buckyballs go quantum

It’s widely believed that, whereas the macroscopic world is governed by the intuitive and predictable rules of classical mechanics, the nanoscale world operates in an anarchy of quantum weirdness. I explained here why this view isn’t right; many changes in material behaviour at small scales have their origin in completely classical physics. But there’s another way of approaching this question, which is to ask what you would have to do to be able to see a nanoscale particle behaving in a quantum mechanical way. In fact, this needn’t be a thought experiment; Anton Zeilinger at the University of Vienna specialises in experiments on the foundations of quantum mechanics, and one of the themes of his research is finding out how large an object he can persuade to behave quantum mechanically. In this context, the products of nanotechnology are large, not small, and among the biggest things he’s looked at are fullerene molecules – buckyballs. The results are described in this paper on the interference of C70 molecules.

What Zeilinger is looking for, as the signature of quantum mechanical behaviour, is interference. Quantum interference is the phenomenon which arises when the final position of a particle depends, not on the path it’s taken, but on all the paths it could have taken. Before the position of the particle is measured, the particle doesn’t exist at a single place and time; instead it exists in a quantum state which expresses all the places at which it could potentially be. But it isn’t just measurement which forces the particle (to anthropomorphise) to make up its mind where it is; if it collides with another particle, or otherwise interacts with its environment, then this leads to the phenomenon known as decoherence, by which the quantum weirdness is lost and the particle behaves like a classical object. To avoid decoherence, and see quantum behaviour, Zeilinger’s group had to use diffuse beams of particles in a high vacuum environment. How good a vacuum do they need? By adding gas back into the vacuum chamber, they can systematically observe the quantum interference effect being washed out by collisions. The pressures at which the quantum effects vanish are around one billionth of atmospheric pressure. Now we can see why nanoscale objects like buckyballs normally behave like classical objects, not quantum mechanical ones. The constant collisions with surrounding molecules completely wash out the quantum effects.
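The way collisions destroy the interference can be caricatured with a toy model: if the rate at which a molecule in the beam suffers collisions scales linearly with the background gas pressure, the fringe visibility decays roughly exponentially with pressure. The sketch below is purely illustrative; the characteristic pressure p0 is an assumed parameter, not a value from the Vienna experiments.

```python
# Toy model (not the Vienna group's analysis): interference contrast
# decaying exponentially with background pressure, since the collision
# rate of a molecule in flight is proportional to pressure.
import math

def fringe_visibility(pressure_mbar, p0_mbar=1e-6):
    """Relative interference contrast at a given background pressure.

    p0_mbar is an assumed characteristic pressure at which collisions
    suppress the contrast by a factor 1/e; the real value depends on
    the gas, the molecule, and the flight time through the apparatus.
    """
    return math.exp(-pressure_mbar / p0_mbar)

# One billionth of atmospheric pressure is ~1e-6 mbar:
for p in (1e-8, 1e-7, 1e-6, 1e-5):
    print(f"{p:.0e} mbar: relative visibility {fringe_visibility(p):.3f}")
```

The point of the exercise is only the scaling: each extra decade of pressure multiplies the collision rate tenfold, so the contrast collapses very quickly once the pressure approaches p0.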

What, then, of nanoscale objects like quantum dots, whose special properties do result from quantum size effects? What’s quantum mechanical about a quantum dot isn’t the dot itself, it’s the electrons inside it. Actually, electrons always behave in a quantum mechanical way (explaining why this is so is a major part of solid state physics), but the size of the quantum dot affects the quantum mechanical states that the electrons can take up. The nanoscale particle that is the quantum dot itself, in spite of its name, remains resolutely classical in its behaviour.
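The textbook way to see why the dot's size matters for the electrons inside it is the particle-in-a-box estimate, E_n = n²h²/(8mL²): shrinking the box pushes the allowed energy levels apart as 1/L². The sketch below uses the free electron mass for simplicity; a real quantum dot calculation would use the semiconductor's effective mass and full three-dimensional confinement.

```python
# Particle-in-a-box estimate of electron energy levels in a confined
# region of width L: E_n = n^2 h^2 / (8 m L^2). Illustrative only; real
# quantum dots require the material's effective mass and 3D confinement.
H = 6.626e-34         # Planck constant, J s
M_E = 9.109e-31       # free electron mass, kg
J_PER_EV = 1.602e-19  # joules per electron-volt

def box_level_eV(L_nm, n=1, m=M_E):
    """Energy of level n for an electron in a 1D box of width L_nm."""
    L = L_nm * 1e-9
    return n**2 * H**2 / (8 * m * L**2) / J_PER_EV

for L_nm in (2, 5, 10):
    print(f"{L_nm:2d} nm box: ground state ~ {box_level_eV(L_nm):.3f} eV")
```

The 1/L² dependence is the whole story of the "quantum size effect": the dot itself sits still like any classical particle, but its width sets the spectrum of states available to the electrons.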

The quantum bridge of asses

A good way of assessing whether a writer knows what they are talking about when it comes to nanotechnology is to look at what they say about quantum mechanics. There’s a very widespread view that what makes the nanoscale different to the macroscale is that, whereas the macroscale is ruled by classical mechanics, the nanoscale is ruled by quantum mechanics. The reality, as usual, is more complicated than this. It’s true that there are some very interesting quantum size effects that can be exploited in things like quantum dots and semiconductor heterostructures. But then lots of interesting materials and devices on the nanoscale aren’t ruled by quantum mechanics at all; for anything to do with mechanical properties, for example, nanoscale size effects have quite classical origins, and with the exception of photosynthesis almost nothing in bionanotechnology has anything to do with quantum mechanics. Conversely, there are some very common macroscopic phenomena that simply can’t be explained except in terms of quantum mechanics – the behaviour of electrical conductors and semiconductors, and the origins of magnetic materials, come immediately to mind.

Here’s a fairly typical example of misleading writing about quantum effects: The “novel properties and functions” are derived from “quantum physics” effects that sometimes occur at the nanoscale, that are very different from the physical forces and properties we experience in our daily lives, and they are what make nanotechnology different from other really small stuff like proteins and other molecules. This is from NanoSavvy Journalism, an article by Nathan Tinker. Despite its seal of approval from David Berube, this is very misleading, as we can see if we look at his list of applications of nanotechnology and ask which depend on size-dependent quantum effects.

  • Nanotechnology is used in a wide array of electronics, magnetics and optoelectronics…
  • …right so far; the use of things like semiconductor heterostructures to make quantum wells certainly does depend on exploiting qm…

  • biomedical devices and pharmaceuticals
  • …here we are talking about systems with high surface-area-to-volume ratios, self-assembled structures and the tailoring of interactions with biological macromolecules like proteins, none of which has anything at all to do with qm…

  • cosmetics…
  • …if we are talking liposomes, again we’re looking at self-assembly. To explain the transparency of nanoscale titania for sunscreen, we need the rather difficult, but entirely classical, theory of Mie scattering.

    The list goes on, but I think the point is made. All sorts of interesting and potentially useful things happen at the nanoscale, only a fraction of which depend on quantum mechanics.

    On the opposition side, the argument about the importance of quantum mechanical effects is pressed into service as a reason for anxiety; since everyone knows that quantum mechanics is mysterious and unpredictable, it must also be dangerous. I’ve commented before on the misguided use of this argument by ETC; here’s the Green Party member of the European Parliament, Caroline Lucas, writing in the Guardian: The commercial value of nanotech stems from the simple fact that the laws of physics don’t apply at the molecular level. Quantum physics kicks in, meaning the properties of materials change. This idea of the nanoscale as a lawless frontier in which anything can happen is rather attractive, but unfortunately quite untrue.

    Of course, the great attraction of quantum mechanics is all the fascinating, and usually entirely irrelevant, metaphysics that surrounds it. This provides a trap for otherwise well-informed business people to fall into, exposing themselves to the serious danger of ridicule from TNTlog (whose author, besides being a businessman, has had the unfair advantage of having a good physics education).

    I know that it’s scientists who are to blame for this mess. Macroscopic=classical, nanoscale=quantum is such a simple and clear formula that it’s tempting for scientists communicating with the media and the public to use it even when they know it is not strictly true. But I think it’s now time to be a bit more accurate about the realities of nanoscale physics, even if this brings in a bit more complexity.

    Politics in the UK

    Some readers may have noticed that we are in the middle of an election campaign here in the UK. Unsurprisingly, science and technology have barely been mentioned at all by any of the parties, and I don’t suppose many people will be basing their voting decisions on science policy. It’s nonetheless worth commenting on the parties’ plans for science and technology.

    I discussed the Labour Party’s plans for science for the next three years here – this foresees significant real-terms increases in science funding. The Conservative Party has promised to “at least match the current administration’s spending on science, innovation and R&D”. However, the Conservatives’ spending plans are predicated on finding £35 billion in “efficiency savings”, of which £500 million is to come from reforming the Department of Trade and Industry’s business support programmes. I believe the £200 million support for nanotechnology discussed here falls under this heading, so I think the status of these programmes in a Conservative administration would be far from assured. The Liberal Democrats take a simpler view of the DTI – they just plan to abolish it, and move science to the Department for Education.

    So, on fundamental science support, there seems to be a remarkable degree of consensus, with no-one seeking to roll back the substantial increases in science spending that the Labour Party has delivered. The arguments really are on the margins, about the role of government in promoting applied and near-market research in collaboration with industry. I have many very serious misgivings about the way in which the DTI has handled its support for micro- and nano- technology. In principle, though, I do think it is essential that the UK government does provide such support to businesses, if only because all other governments around the world (including, indeed perhaps especially, the USA) practise exactly this sort of interventionist policy.

    Disentangling thin polymer films

    Many of the most characteristic properties of polymer materials like plastics come from the fact that their long chain molecules get tangled up. Entanglements between different polymer chains behave like knots, which make a polymer liquid behave like a solid over quite perceptible time scales, just like silly putty. The results of a new experiment show that when you make the polymer film very thin – thinner than an individual polymer molecule – the chains become less entangled with each other, with significant effects on their mechanical properties.
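To get a feel for what "thinner than an individual polymer molecule" means, one can compare the film thickness with the size of an undisturbed polymer coil, estimated by the ideal-chain radius of gyration Rg = (N/6)^(1/2) b for N Kuhn segments of length b. The numbers below are representative placeholders, not parameters from the paper.

```python
# Back-of-envelope estimate of coil size for an ideal (Gaussian) chain:
# Rg = sqrt(N/6) * b. Films much thinner than Rg squash the coils,
# which is the regime in which entanglement can be reduced.
import math

def radius_of_gyration_nm(n_segments, kuhn_length_nm):
    """Ideal-chain radius of gyration for n_segments of given Kuhn length."""
    return math.sqrt(n_segments / 6.0) * kuhn_length_nm

# Illustrative chain: 1000 Kuhn segments, each 1 nm long.
rg = radius_of_gyration_nm(1000, 1.0)
print(f"Rg ~ {rg:.1f} nm; films thinner than this confine the coil")
```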

    The experiments are published in this week’s Physical Review Letters; I’m a co-author but the main credit lies with my colleagues Lun Si, Mike Massa and Kari Dalnoki-Veress at McMaster University, Canada. The abstract is here, and you can download the full paper as a PDF (this paper is copyright the American Physical Society and is available here under the author rights policy of the APS).

    This is the latest in a whole series of discoveries of ways in which the properties of polymer films dramatically change when their thicknesses fall towards 10 nm and below. Another example is the discovery that the glass transition temperature of polymer films – the temperature at which a polymer like polystyrene changes from a glassy solid to a gooey liquid – dramatically decreases in thin films. So a material that would in the bulk be a rigid solid may, in a thin enough film, turn into a much less rigid, liquid-like layer (see this technical presentation for more details). Why does this matter? Well, one reason is that, as feature sizes in the microelectronics industry fall below 100 nm, the sharpness with which one can define a line in a thin film of a polymer resist could limit the perfection of the features one is making. So the fact that the mechanical properties of the polymer themselves change, purely as a function of size, could lead to problems.
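An empirical form often used to describe this sort of Tg depression is Tg(h) = Tg(bulk)[1 - (A/h)^δ], where h is the film thickness. The sketch below uses placeholder values of A and δ chosen purely for illustration, not fitted parameters from any particular data set.

```python
# Empirical thin-film glass transition formula Tg(h) = Tg_bulk*(1-(A/h)**delta).
# A_nm and delta here are illustrative placeholders, not fitted values.
def tg_thin_film_K(h_nm, tg_bulk_K=373.0, A_nm=1.3, delta=1.8):
    """Glass transition temperature (K) of a film of thickness h_nm."""
    return tg_bulk_K * (1.0 - (A_nm / h_nm) ** delta)

for h in (100, 30, 10):
    print(f"h = {h:3d} nm: Tg ~ {tg_thin_film_K(h):.0f} K")
```

Whatever the parameter values, the qualitative behaviour is the one described above: the depression is negligible for thick films and grows rapidly once h falls towards 10 nm.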

    Nanoscience, small science, and big science

    Quite apart from the obvious pun, it’s tempting to think of nanoscience as typical small science. Most of the big advances are made by small groups working in universities on research programs devised, not by vast committees, but by the individual professors who write the grant applications. Equipment is often quite cheap, by scientific standards – a state of the art atomic force microscope might cost $200,000, and doesn’t need a great deal of expensive infrastructure to house it. If you have the expertise and the manpower, but not the money, you could build one yourself for perhaps a tenth of this price or less. This is an attractive option for scientists in developing countries, and this is one reason why nanoscience has become such a popular field in countries like India and China. It’s all very different from the huge and expensive multinational collaborations that are necessary for progress in particle physics, where a single experiment may involve hundreds of scientists and hundreds of millions of dollars – the archetype of big science.

    Big science does impact on the nanoworld, though. Techniques that use the highly intense beams of x-rays obtained from synchrotron sources like the ESRF at Grenoble, France, and the APS, on the outskirts of Chicago, USA, have been vital in determining the structure, at the atomic level, of the complex and efficient nanomachines of cell biology. Neutron beams, too, are unique probes of the structure and dynamics of nanoscale objects like macromolecules. To get a beam of neutrons intense enough for this kind of study, you either need a research reactor, like the one at the Institut Laue-Langevin, in Grenoble (at which I am writing this), or a spallation source, such as ISIS, near Oxford in the UK. The latter consists of a high energy synchrotron, of the kind developed for particle physics, which smashes pulses of protons into a heavy metal target, producing showers of neutrons.

    Synchrotron and neutron sources are run on a time-sharing basis; individual groups apply for time on a particular instrument, and the best applications are allocated a few days of (rather frenetic) experimentation. So in this sense, even these techniques have the character of small science. But the facilities themselves are expensive – the world’s most advanced spallation source for neutrons, the SNS currently being built in Oak Ridge, TN, USA, will cost more than $1.4 billion, and the Japanese source J-PARC, a few years behind SNS, has a budget of $1.8 billion. With this big money comes real politics. How do you set the priorities for the science that is going to be done, not next year, but in ten years’ time? Do you emphasise the incremental research that you are certain will produce results, or do you gamble on untested ideas that just might produce a spectacular pay-off? This is the sort of rather difficult and uncomfortable discussion I’ve been involved in for the last couple of days – I’m on the Scientific Council of ILL, which has just been having one of its twice-yearly meetings.

    New book on Nanoscale Science and Technology

    Nanoscale Science and Technology is a new, graduate level interdisciplinary textbook which has just been published by Wiley. It’s based on the Masters Course in Nanoscale Science and Technology that we run jointly between the Universities of Leeds and Sheffield.

    Nanoscale Science and Technology Book Cover

    The book covers most aspects of modern nanoscale science and technology. It ranges from “hard” nanotechnologies, like the semiconductor nanotechnologies that underlie applications like quantum dot lasers, and applications of nanomagnetism like giant magnetoresistance read-heads, via semiconducting polymers and molecular electronics, through to “soft” nanotechnologies such as self-assembling systems and bio-nanotechnology. I co-wrote a couple of chapters, but the heaviest work was done by my colleagues Mark Geoghegan, at Sheffield, and Ian Hamley and Rob Kelsall at Leeds, who, as editors, have done a great job of knitting together the contributions of a number of authors with different backgrounds to make a coherent whole.

    Another £200 million for nanotechnology in the UK

    The UK government announced yesterday £200 million (US$380 million) of funding for nanotechnology over the next three years. The announcement came rather buried in yesterday’s press release accompanying the details of the breakdown of the science allocations from the 2004–2008 Comprehensive Spending Review.

    There are a couple of caveats to be borne in mind when interpreting this figure. Firstly, as the precise wording is “Raising total DTI investment in nanotechnology research to £200 million” we should probably assume that the £200m isn’t in addition to the £90m or so already announced – the new money is thus in the region of £110m. Secondly, this is only the spend on nanotechnology directly controlled by the Department of Trade and Industry. Most academic nanoscience is still supported by the research councils, particularly EPSRC (whose roughly £0.5 billion annual budget sees healthy rises over the next few years, though these probably won’t be translated into a lot of new science).

    I can’t say I look at this story without mixed feelings. It isn’t clear to me that the DTI has got its act together about its nanotechnology program; the money spent so far seems to be on very short term, rather niche, applications. The definition they give in the press release doesn’t inspire confidence that they have much of a long term vision: “Nanotechnology is the science of minute particles. Nanotechnology manipulates and controls these particles to create structures with unique properties, and promises advances in manufacturing, medicine and computing. Potential applications include medical dressing that kill off microbes, stain-free fabrics that repel liquids and self-cleaning windows.”

    Anyone seen my plutonium?

    The news that the UK nuclear reprocessing plant at Sellafield has ‘lost’ 29.6 kg of plutonium has been accompanied by much emphasis that this doesn’t mean that the stuff has physically gone missing. It’s simply an accounting shortfall, we are reassured, and a leader in the Times on the subject is notable for being probably the most scientifically literate editorial I’ve seen in a major newspaper for some time. Nonetheless, there is a real issue here, though it’s not related to fears of nuclear terrorism. The British Nuclear Group spokesperson is reported as saying “There is no suggestion that any material has left the site. When you have got a complicated chemical procedure, quite often material remains in the plant.” In other words, in all the complex and messy operations that are involved in nuclear reprocessing, some of the plutonium is not recovered, and remains in dilute solution in waste solvent. And in that form it’s potentially another small addition to the vast tanks of radioactive soup that form such a noxious legacy of the cold war nuclear programs in the UK, USA and the former Soviet Union.

    Can nanotechnology help? The idea of a fleet of nanoscale submarines making their way through the sludge pools, picking out the radioactive isotopes and concentrating them into small volumes of high level waste which could then be safely managed, is an attractive one. Even more attractive is the idea that you could pay for the whole operation by recovering the highly valuable precious metals whose presence in nuclear waste is so tantalising. Is this notion ridiculously far-fetched? I’m not so sure that it is.

    A very interesting technology that gives us a flavour of what is possible has been developed at Pacific Northwest National Laboratory. Nanoporous materials, with a very high specific surface area, are made using self-assembled surfactant nanostructures as templates. This huge internal surface area is then coated with a layer just a single molecule thick; functional groups on the end of each of these molecules are designed to selectively bind a heavy metal ion. Such SAMMS – self-assembled monolayers on mesoporous supports – have been designed to selectively bind toxic heavy metals, like lead and mercury, precious metals like gold and platinum, and radioactive actinides like neptunium and plutonium, and they seem to work very effectively. Applications in areas like environmental clean-up and mining are obvious, in addition to nuclear processing and clean-up.
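Uptake of metal ions by a sorbent with a fixed number of surface binding sites is conventionally described by a Langmuir isotherm. The sketch below is a generic illustration of that idea, with made-up capacity and affinity constants; it is not a model of the PNNL SAMMS materials specifically.

```python
# Generic Langmuir isotherm: q = q_max * K*c / (1 + K*c), where q is the
# equilibrium uptake and c the solution concentration. The capacity
# q_max and affinity K below are invented, illustrative numbers.
def langmuir_uptake(c_mg_per_L, q_max_mg_per_g=100.0, K_L_per_mg=0.5):
    """Equilibrium uptake (mg metal per g sorbent) at concentration c."""
    return q_max_mg_per_g * K_L_per_mg * c_mg_per_L / (1.0 + K_L_per_mg * c_mg_per_L)

for c in (0.1, 1.0, 10.0, 100.0):
    print(f"c = {c:6.1f} mg/L -> q = {langmuir_uptake(c):6.1f} mg/g")
```

The practically important feature is the low-concentration limit: uptake is linear in c when the sites are mostly empty, which is why a high-affinity monolayer can scavenge ions even from very dilute waste streams.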

    The irresistible rise of nano-gizmology

    What would happen if nanotechnology suddenly went out of fashion in the academic world, all the big nano-funding initiatives dried up, and putting the nano word in grant applications doomed them to certain failure? Would all the scientists who currently label themselves nanoscientists just go back to being chemists and physicists as before? This interesting question was posed to me on Monday during a fascinating afternoon seminar and discussion with social scientists from the Institute for Environment, Philosophy and Public Policy at Lancaster University.

    My first reaction was to say that nothing would change. Scientists can be a cynical bunch when it comes to funding, and it’s tempting to assume that they would just relabel their work yet again to conform with whatever the new fashion was, and carry on just as before. But on reflection my answer was that the rise of nanoscience and nanotechnology as a label in academic science has been accompanied by two real and lasting cultural changes. The first is so well-rehearsed that it’s a cliché to say it, but it is nonetheless true – nanoscientists really have got used to interdisciplinary working in a way that was very rare in academia twenty years ago (of course, it has always been the rule in industry). The second change is less obvious, though I think I first noticed it as a marked change six or seven years ago. This was a shift in emphasis away from testing theories and characterising materials towards making widgets or gizmos – things that, although usually still far away from being a real, viable product, did something or produced some functional effect. More than any use of the label “nano”, this seems to me to be a lasting change in the way scientists judge the value of their own and other people’s work; it’s certainly very much reflected in the editorial policies of the glamour journals like Nature and Science. Some will mourn the eclipse of the values of pure science, while others will anticipate a more direct economic return on our societies’ investments in science as a result, but it remains to be seen what the overall outcome of this shift will be.

    Renewable energy and incremental nanotechnology

    Over the next fifty years, mankind is going to have to find large-scale primary energy sources that aren’t based on fossil fuels. Even if stocks of oil and gas don’t start to run out, the effects of man-made global warming are likely to become so pressing that the most die-hard climate-change sceptics will begin to change their tune. Meanwhile, the inhabitants of the rapidly developing countries of Asia will demand western-style standards of living, which in turn will demand western levels of energy use. Can nanotechnology help deliver the energy needed for all the world to have a decent standard of living on a sustainable basis?

    Although wind and hydroelectric energy can make significant dents in total energy requirements, it seems that only two non-fossil primary energy sources really have the potential to replace fossil fuels completely. These are nuclear fission and photovoltaics (solar cells). Nuclear power has well known problems, though there have been recent signs of a change of heart by some environmentalists, notably James Lovelock, about this. Solar power is viable, in the sense that enough sunlight falls on the earth to meet all our needs, but the capital expense of current solar cell technology is too great for it to be economically viable, except in areas remote from the electricity grid.

    To make a dent in the world’s total power needs we’re talking about bringing in many gigawatts (GW) of capacity per year (total electricity generating capacity in the UK was around 70 GW in 2002, in the USA it was 905 GW). Roughly speaking 65 million square meters (i.e. 65 square kilometers) of a moderately efficient photovoltaic gives you a GW of power. Here we see the problem of conventional silicon solar cells: a silicon wafer production plant with a 30 cm wafer process produces only 88,000 square meters a year; the cost is high and so is the energy intensity of the process, to the extent that it takes about 4 years to pay back the energy used in manufacture. We need to be able to make solar cells on a continuous basis, using a roll-to-roll process, more like a high volume printing press. A typical printing press takes just a few hours to process the same area of material as a silicon plant does in a year; at this rate we’re approaching the possibility of being able to make a GW’s worth of solar cells (roughly comparable to the output of a nuclear power station) from a year’s output from one production line. Several new technologies based on incremental nanotechnology promise to give us solar cells made by just this sort of cheap, large scale, low energy manufacturing process.
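As a sanity check, the arithmetic in the paragraph above can be reproduced directly. The areas and rates below are the figures quoted in the text; the five-hour printing-press run time is an assumed round number consistent with "just a few hours".

```python
# Reproducing the back-of-envelope comparison in the text: PV area needed
# per GW versus the annual area output of a silicon wafer plant and of a
# hypothetical roll-to-roll line running like a printing press.
AREA_PER_GW_M2 = 65e6              # ~65 km^2 of moderately efficient PV per GW
WAFER_PLANT_M2_PER_YEAR = 88_000   # annual output of a 30 cm wafer plant

years_for_1GW = AREA_PER_GW_M2 / WAFER_PLANT_M2_PER_YEAR
print(f"Wafer plant: ~{years_for_1GW:.0f} years to make 1 GW of cells")

# Assume a roll-to-roll line covers a wafer plant's annual area in 5 hours:
HOURS_PER_YEAR = 365 * 24
speedup = HOURS_PER_YEAR / 5
print(f"Roll-to-roll at that rate: ~{years_for_1GW / speedup:.2f} years per GW")
```

The contrast is stark: at wafer-plant rates a gigawatt of cells takes centuries, while a printing-press-like process brings it within reach of a single production line in about a year.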

    The most famous, and probably best developed technology is the Graetzel cell, invented by Michael Graetzel of the EPFL, Lausanne. This relies on nanostructured titanium dioxide whose surfaces are coated by a dye; the nanoparticles are then embedded in a polymer electrolyte to make a thin film which can be coated onto a plastic sheet. This process is being commercialised by a number of companies, including Konarka and Sustainable Technologies International. Other technologies use nanostructured forms of different kinds of semiconductors; companies involved include Nanosys, Nanosolar, and Solaris. A third class of non-conventional photovoltaics uses semiconducting polymers of the kind used in polymer light emitting diode displays, sometimes in conjunction with fullerenes. These technologies still need to make improvements to their efficiencies and lifetimes to be fully viable, but progress is rapid, and all offer the crucial benefit of low energy, large scale manufacturability.

    It’s not at all clear which of these technologies will be the first to deliver the promised benefits. We shouldn’t forget that more conventional technologies, like thin film amorphous silicon, are also advancing fast – Unisolar has a commercial roll-to-roll process for producing this type of solar cell in quantity, with a projected annual production of 30 MW (i.e. 3% of a nuclear power station) coming soon. But it does seem as though this is one area where incremental nanotechnology could have a transformational and positive effect on the economy and the environment.

    This discussion draws on two recent articles: “Manufacturing and commercialization issues in organic electronics”, by J.R. Sheats, Journal of Materials Research 19 1974 (2004), and “Organic photovoltaics: technology and market”, by C.J. Brabec, Solar Energy Materials and Solar Cells, 83 273 (2004).