Mobility at the surface of polymer glasses

Hard, transparent plastics like plexiglass, polycarbonate and polystyrene resemble glasses, and technically that’s what they are – a state of matter that has a liquid-like lack of regular order at the molecular scale, but which still displays the rigidity and inability to flow that we expect from a solid. In the glassy state the polymer molecules are locked into position, unable to slide past one another. If we heat these materials up, they have a relatively sharp transition into a (rather sticky and viscous) liquid state; for both plexiglass and polystyrene this happens around 100 °C, as you can test for yourself by putting a plastic ruler or a (polystyrene) yoghourt pot or plastic cup into a hot oven. But things are different at the surface, as shown by a paper in this week’s Science (abstract, subscription needed for full paper; see also commentary by John Dutcher and Mark Ediger). The paper, by grad student Zahra Fakhraai and Jamie Forrest, from the University of Waterloo in Canada, demonstrates that nanoscale indentations in the surface of a glassy polymer smooth themselves out at a rate showing that the molecules near the surface can move around much more easily than those in the bulk.

This is a question that I’ve been interested in for a long time – in 1994 I was the co-author (with Rachel Cory and Joe Keddie) of a paper that suggested that this was the case – Size dependent depression of the glass transition temperature in polymer films (Europhysics Letters, 27 p 59). It was actually a rather practical question that prompted me to think along these lines; at the time I was a relatively new lecturer at Cambridge University, and I had a certain amount of support from the chemical company ICI. One of their scientists, Peter Mills, was talking to me about problems they had making films of PET (whose tradenames include Melinex and Mylar) – this is a glassy polymer at room temperature, but sometimes the sheet would stick to itself when it was rolled up after manufacture. This is very hard to understand if one assumes that the molecules in a glassy polymer aren’t free to move, as to get significant adhesion between polymers one generally needs the string-like molecules to intermingle enough at the surface to become entangled. Could it be that the chains at the surface had more freedom to move?

We didn’t know how to measure chain mobility directly near a surface, but I did think we could measure the glass transition temperature of a very thin film of polymer. When you heat up a polymer glass, it expands, and at the transition point where it turns into a liquid there’s a jump in the value of the expansion coefficient. So if you heated up a very thin film and measured its thickness, you’d see the transition as a change in slope of the plot of thickness against temperature. We had available to us a very sensitive thickness-measuring technique called ellipsometry, so I thought it was worth a try to do the measurement – if the chains were freer to move at the surface than in the bulk, then we’d expect the transition temperature to decrease as we looked at very thin films, where the surface has a disproportionate effect.
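For anyone who wants to see how the analysis works, here is a minimal sketch in Python of the kink-finding procedure described above: fit straight lines to the glassy and liquid parts of a thickness–temperature plot and take their intersection as the apparent glass transition temperature. The numbers (film thickness, expansivities, noise level) are invented for illustration; they are not Rachel’s data.

```python
import numpy as np

# Invented, illustrative parameters for a thin glassy polymer film.
T_g_true = 370.0      # K, assumed glass transition temperature
h0 = 50.0             # nm, film thickness at 300 K
alpha_glass = 0.010   # nm per K, apparent expansivity below Tg (illustrative)
alpha_melt = 0.030    # nm per K, larger apparent expansivity above Tg (illustrative)

# Synthetic thickness-versus-temperature data with sub-Angstrom scatter.
T = np.linspace(300.0, 420.0, 61)
h = np.where(T < T_g_true,
             h0 + alpha_glass * (T - 300.0),
             h0 + alpha_glass * (T_g_true - 300.0) + alpha_melt * (T - T_g_true))
h = h + np.random.normal(scale=0.005, size=T.size)   # ~0.05 Angstrom noise

# Fit straight lines to the low- and high-temperature ends, and take their
# intersection as the apparent Tg, i.e. the kink in the expansion curve.
p_glass = np.polyfit(T[T < 340.0], h[T < 340.0], 1)
p_melt = np.polyfit(T[T > 390.0], h[T > 390.0], 1)
T_g_fit = (p_melt[1] - p_glass[1]) / (p_glass[0] - p_melt[0])
print(f"Apparent Tg from the change in slope: {T_g_fit:.1f} K")
```

On these illustrative numbers the slope of the thickness–temperature plot changes by only a couple of hundredths of a nanometre per kelvin at the transition, which is why the sub-Ångstrom resolution of ellipsometry matters.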

I proposed the idea as a final year project for the physics undergraduates, and a student called Rachel Cory chose it. Rachel was a very able experimentalist, and when she’d got the hang of the equipment she was able to make the successive thickness measurements with a resolution of a fraction of an Ångstrom, as would be needed to see the effect. But early in the new year of 1993 she came to see me to say that the leukemia from which she had been in remission had returned, that no further treatment was possible, but that she was determined to carry on with her studies. She continued to come into the lab to do experiments, obviously getting much sicker and weaker every day, but nonetheless it was a terrible shock when her mother came into the lab on the last day of term to say that Rachel’s fight was over, but that she’d been anxious for me to see the results of her experiments.

Looking through the lab book Rachel’s mother brought in, it was clear that she’d succeeded in making five or six good experimental runs, with films substantially thinner than 100 nm showing clear transitions, and that for the very thinnest films the transition temperatures did indeed seem to be significantly reduced. Joe Keddie, a very gifted young American scientist then working with me as a postdoc (he’s now a Reader at the University of Surrey), had been helping Rachel with the measurements and followed up these early results with a large-scale set of experiments that showed the effect, to my mind, beyond doubt.

Despite our view that the results were unequivocal, they attracted quite a lot of controversy. A US group made measurements that seemed to contradict ours, and in the absence of any theoretical explanation of them there were many doubters. But by the year 2000, many other groups had repeated our work, and the weight of evidence was overwhelming that, in films less than 10 nm or so in thickness, the influence of free surfaces led to a decrease in the temperature at which the material changed from being a glass to being a liquid.

But this still wasn’t direct evidence that the chains near the surface were more free to move than they were in the bulk, and this direct evidence proved difficult to obtain. In the last few years a number of groups have produced stronger and stronger evidence that this is the case; Jamie and Zahra’s paper, I think, nails the final uncertainties, proving that polymer chains in the top few nanometers of a polymer glass really are free to move. Among the consequences of this is that we can’t necessarily predict the behaviour of polymer nanostructures on the basis of their bulk properties; this is going to become more relevant as people try to make smaller and smaller features in polymer resists, for example. What we still don’t have is a complete theoretical understanding of why this should be the case.

Nobels, Nanoscience and Nanotechnology

It’s interesting to see how various newspapers have reported the story of yesterday’s award of the physics Nobel prize to the discoverers of giant magnetoresistance (GMR). Most have picked up on the phrase used in the press release of the Nobel Foundation, that this was “one of the first real applications of the promising field of nanotechnology”. Of course, this raises the question of what’s in all those things listed in the various databases of nanotechnology products, such as the famous sunscreens and stain-resistant fabrics.

References to iPods are compulsory, and this is entirely appropriate. It is quite clear that GMR is directly responsible for making possible the miniaturised hard disk drives on which entirely new product categories, such as hard disk MP3 players and digital video recorders, depend. The more informed papers (notably the Financial Times and the New York Times) have noticed that one name was missing from the award – Stuart Parkin – a physicist working for IBM in Almaden, in California, who was arguably the person who took the basic discovery of GMR and did the demanding science and technology needed to make a product out of it.

The Nobel Prize for Chemistry announced today also highlights the relationship between nanoscience and nanotechnology. It went to Gerhard Ertl, of the Fritz-Haber-Institut in Berlin, for his contributions to surface chemistry. In particular, using the powerful tools of nanoscale surface science, he was able to elucidate the fundamental mechanisms operating in catalysis. For example, he worked out the basic steps of the Haber-Bosch process. A large proportion of the world’s population quite literally depends for their lives on the Haber-Bosch process, which artificially fixes nitrogen from the atmosphere to make the fertilizer on which the high crop yields that feed the world depend.

The two prizes illustrate the complexity of the interaction between science and technology. In the case of GMR, the discovery was one that came out of fundamental solid state physics. This illustrates how what might seem to the scientists involved to be very far removed from applications can, if the effect turns out to be useful, be very quickly exploited in products (though the science and technology needed to make this transition will itself often be highly demanding, and is perhaps not always appreciated enough). The surface science rewarded in the chemistry prize, by contrast, represents a case in which science is used, not to discover new effects or processes, but to understand better a process that is already technologically hugely important. This knowledge, in turn, can then underpin improvements to the process or the development of new, but analogous, processes.

Three good reasons to do nanotechnology: 2. For healthcare and medical applications

Part 1 of this series of posts dealt with applications of nanotechnology for sustainable energy. Here I go on to describe why so many people are excited about the possibilities for applying nanotechnology in medicine and healthcare.

It should be no surprise that medical applications of nanotechnology are very prominent in many people’s research agendas. Despite near-universal agreement about the desirability of more medical research, though, there are some tensions in the different visions people have of future nanomedicine. To the general public the driving force is often the very personal experience most people have of illness in themselves or people close to them, and there’s a lot of public support for more work aimed at the well-known killers of the Western world, such as cardiovascular disease, cancer, and degenerative diseases like Alzheimer’s and Parkinson’s. Economic factors, though, are important for those responsible for supplying healthcare, whether that’s the government or a private sector insurer. Maybe it’s a slight exaggeration to say that the policy makers’ ideal would be for people to live in perfect health until they were 85 and then tidily drop dead, but it’s certainly true that the prospect of an ageing population demanding more and more expensive nursing care is one that is exercising policy-makers in a number of prosperous countries. In the developing world, there are many essentially political and economic issues which stand in the way of people being able to enjoy the levels of health we take for granted in Europe and the USA, and matters like the universal provision of clean water are very important. Important though the politics of public health is, the diseases that blight the developing world, such as AIDS, tuberculosis and malaria, still present major scientific challenges. Finally, back in the richest countries of the world, there’s a climate of higher expectations of medicine, in which people look to medicine to do more than fix obvious physical ailments, and to move into the realm of human enhancement and the prolonging of life beyond what might formerly have been regarded as a “natural” lifespan.

So how can nanotechnology help? There are three broad areas.

1. Therapeutic applications of nanotechnology. An important area of focus for medical applications of nanotechnology has been drug delivery. This begins from the observation that when a patient takes a conventionally delivered drug, an overwhelmingly large proportion of the administered drug molecules don’t end up acting on the biological systems that they are designed to affect. This is a serious problem if the drug has side effects; the larger the dose that has to be administered to be sure that some of the molecule actually gets to the place where it is needed, the worse these side-effects will be. This is particularly obvious, and harrowing, for the intrinsically toxic molecules used as drugs for cancer chemotherapy. Another important driving force for improving delivery mechanisms is the fact that, rather than the simple and relatively robust small molecules that have been the main active ingredients in drugs to date, we are turning increasingly to biological molecules like proteins (such as monoclonal antibodies) and nucleic acids (for example, DNA for gene therapy and small interfering RNAs). These allow very specific interventions into biological processes, but the molecules are delicate, and are easily recognised and destroyed in the body. To deliver a drug, current approaches include attaching it to a large water-soluble polymer molecule which is essentially invisible to the body, or wrapping it up in a self-assembled nanoscale bag – a liposome – formed from soap-like molecules such as phospholipids or block copolymers. Attaching the drug to a dendrimer – a nanoscale treelike structure which may have a cavity in its centre – is conceptually midway between these two approaches. The current examples of drug delivery devices that have made it into clinical use are fairly crude, but future generations of drug delivery vehicles can be expected to include “stealth” coatings to make them less visible to the body, mechanisms for targeting them to their destination tissue or organ, and mechanisms for releasing their payload when they get there. They may also incorporate systems for reporting their progress back to the outside world, even if this is only the passive device of containing some agent that shows up strongly in a medical scanner.
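To put the dose argument in numbers, here is a toy calculation; the efficiencies below are invented purely for illustration, since real delivery fractions vary enormously from drug to drug. The point is simply that the administered dose scales as the inverse of the fraction that reaches the target, and everything that misses the target is available to cause side effects.

```python
# Toy illustration of why delivery efficiency matters. All numbers are invented.

def administered_dose(dose_needed_at_target_mg, fraction_reaching_target):
    """Dose that must be given so that dose_needed_at_target_mg arrives at the target."""
    return dose_needed_at_target_mg / fraction_reaching_target

dose_at_target = 1.0   # mg required at the diseased tissue (hypothetical)

scenarios = [
    ("conventional delivery", 0.01),        # assume 1% reaches the target
    ("targeted nanoscale carrier", 0.10),   # assume 10% reaches the target
]
for label, fraction in scenarios:
    dose = administered_dose(dose_at_target, fraction)
    print(f"{label:27s}: give {dose:6.1f} mg, of which {dose - dose_at_target:6.1f} mg "
          f"ends up elsewhere in the body")
```

On these made-up numbers, a ten-fold improvement in targeting allows a ten-fold reduction in the administered dose for the same therapeutic effect, which is the case for delivery vehicles in a nutshell.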

Another area of therapeutics in which nanotechnology can make an impact is tissue engineering and regenerative medicine. Here it’s not so much a question of making artificial substitutes for tissues or organs; ideally, it is a question of providing the environment in which a patient’s own cells will develop in such a way as to generate new tissue. This means persuading those cells to differentiate so as to take up the specialised form of a particular organ. Our cells are social organisms, which respond to chemical and physical signals as they develop and differentiate to produce tissues and organs, and the role of nanotechnology here is to provide an environment (or scaffold) which gives the cells the right physical and chemical signals. Once again, self-assembly is one way forward here, providing soft gels which can be tagged with the right chemical signals to persuade the cells to do the right thing.

2. Diagnostics. Many disease states manifest themselves by the presence of specific molecules, so the ability to detect and identify these molecules quickly and reliably, even when they are present at very low concentrations, would be very helpful for the rapid diagnosis of many different conditions. The relevance of nanotechnology is that many of the most sensitive ways of detecting molecules rely on interactions between the molecule and a specially prepared surface; the much greater importance of the surface relative to the bulk for nanostructured materials makes it possible to make sensors of great sensitivity. Sensors for the levels of relatively simple chemicals, such as glucose or thyroxine, could be integrated with devices that release the chemicals needed to rectify any imbalances (these integrated devices go by the dreadful neologism of “theranostics”); recognising pathogens by recognising stretches of their DNA would give a powerful way of identifying infectious diseases without the need for time-consuming and expensive culturing steps. One obvious and much-pursued goal would be to find a way of reading a whole DNA sequence at the single-molecule level, making it possible to obtain an individual’s whole genome cheaply.

3. Innovation and biomedical research. A contrarian point of view, which I’ve heard frequently and forcibly expressed by a senior figure from the UK’s pharmaceutical industry, is that the emphasis in nanomedicine on drug delivery is misguided, because fundamentally what it represents is an attempt to rescue bad drug candidates. In this view the place to apply nanotechnology is the drug discovery process itself. It’s a cause for concern for the industry that it seems to be getting harder and more expensive to find new drug candidates, and the hopes that were pinned a few years ago on the use of large scale combinatorial methods don’t seem to be working out. In this view, there should be a move away from these brute force approaches to more rational methods, but this time informed by the very detailed insights into cell biology offered by the single molecule methods of bionanotechnology.

Nanotechnology and visions of the future (part 1)

Earlier this year I was asked to write an article explaining nanotechnology and the debates surrounding it for a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. Here is the unedited version, in installments. Regular readers of the blog will be familiar with most of the arguments already, but I hope they will find it interesting to see it all in one place.

Introduction

Few new technologies have been accompanied by such expansive promises of their potential to change the world as nanotechnology. For some, it will lead to a utopia, in which material want has been abolished and disease is a thing of the past, while others see apocalypse and even the extinction of the human race. Governments and multinationals round the world see nanotechnology as an engine of economic growth, while campaigning groups foresee environmental degradation and a widening of the gap between the rich and poor. But at the heart of these arguments lies a striking lack of consensus about what the technology is or will be, what it will make possible and what its dangers might be. Technologies don’t exist or develop in a vacuum, and nanotechnology is no exception; arguments about the likely, or indeed desirable, trajectory of the technology are as much about their protagonists’ broader aspirations for society as about nanotechnology itself.

Possibilities

Nanotechnology is not a single technology in the way that nuclear technology, agricultural biotechnology, or semiconductor technology are. There is, as yet, no distinctive class of artefacts that can be unambiguously labelled as the product of nanotechnology. It is still, by and large, an activity carried out in laboratories rather than factories, yet the distinctive output of nanotechnology is the production and characterisation of some kind of device, rather than the kind of furthering of fundamental understanding that we would expect from a classical discipline such as physics or chemistry.

What unites the rather disparate group of applied sciences that are referred to as nanotechnologies is simply the length-scale on which they operate. Nanotechnology concerns the creation and manipulation of objects whose size lies somewhere between a nanometer and a few hundred nanometers. To put these numbers in context, it’s worth remembering that as unaided humans, we operate over a range of length-scales that spans a factor of a thousand or so, which we could call the macroscale. Thus the largest objects we can manipulate unaided are about a meter or so in size, while the smallest objects we can manipulate comfortably are about one millimeter. With the aid of light microscopes and tools for micromanipulation, we can also operate on another set of smaller length-scales, which also spans a factor of a thousand. The upper end of the microscale is thus defined by a millimeter, while the lower end is defined by objects about a micron in size. This is roughly the size of a red blood cell or a typical bacterium, and is about the smallest object that can be easily discerned in a light microscope.

The nanoscale is smaller yet. A micron is one thousand nanometers, and one nanometer is about the size of a medium-sized molecule. So we can think of the lower limit of the nanoscale as being defined by the size of individual atoms and molecules, while the upper limit is defined by the resolution limit of light microscopes (this limit is somewhat more vague, and one sometimes sees apparently more exact definitions, such as 100 nm, but these in my view are entirely arbitrary).

A number of special features make operating at the nanoscale distinctive. Firstly, there is the question of the tools one needs to see nanoscale structures and to characterise them. Conventional light microscopes cannot resolve structures this small. Electron microscopes can achieve atomic resolution, but they are expensive, difficult to use and prone to artefacts. A new class of techniques – scanning probe microscopies such as scanning tunnelling microscopy and atomic force microscopy – has recently become available which can probe the nanoscale, and the uptake of these relatively cheap and accessible methods has been a big factor in creating the field of nanotechnology.

More fundamentally, the properties of matter itself often change in interesting and unexpected ways when its dimensions are shrunk to the nanoscale. As a particle becomes smaller, it becomes proportionally more influenced by its surface, which often leads to increases in chemical reactivity. These changes may be highly desirable, yielding, for example, better catalysts for more efficiently effecting chemical transformations, or undesirable, in that they can lead to increased toxicity. Quantum mechanical effects can become important, particularly in the way electrons and light interact, and this can lead to striking and useful effects such as size-dependent colour changes. (It’s worth stressing here that while quantum mechanics is counter-intuitive and somewhat mysterious to the uninitiated, it is very well understood and produces definite and quantitative predictions. One sometimes reads that “the laws of physics don’t apply at the nanoscale”. This of course is quite wrong; the laws apply just as they do on any other scale, but sometimes they have different consequences.) The continuous, restless activity of Brownian motion – the manifestation of heat energy at the nanoscale – becomes dominant. These differences in the way physics works at the nanoscale offer opportunities to achieve new effects, but also mean that our intuitions may not always be reliable.
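A crude geometrical estimate shows how quickly the surface takes over as a particle shrinks. Treat the particle as a sphere and count everything within roughly one atomic spacing of the outside as “surface”; the 0.3 nm shell thickness used below is an assumed round number, and the whole calculation is only meant to be indicative.

```python
# Rough estimate of the fraction of a spherical particle that lies within
# one atomic-scale shell of its surface. Purely illustrative geometry.
SHELL_NM = 0.3   # assumed thickness of the "surface" layer, in nanometres

def surface_fraction(radius_nm: float) -> float:
    """Fraction of the sphere's volume lying within SHELL_NM of the surface."""
    if radius_nm <= SHELL_NM:
        return 1.0
    return 1.0 - ((radius_nm - SHELL_NM) / radius_nm) ** 3

for radius in (1000.0, 100.0, 10.0, 3.0, 1.0):   # radii in nanometres
    print(f"radius {radius:7.1f} nm -> about {100.0 * surface_fraction(radius):5.1f}% "
          "of the material is at the surface")
```

For a micron-sized particle only a tenth of a percent or so of the material sits at the surface; for a particle a few nanometres across it is a third or more, which is why chemical reactivity (and, potentially, toxicity) can change so markedly at the nanoscale.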

One further feature of the nanoscale is that it is the length scale on which the basic machinery of biology operates. Modern molecular biology and biophysics have revealed a great deal about the sub-cellular apparatus of life, uncovering the structure and mode of operation of the astonishingly sophisticated molecular-scale machines that are the basis of all organisms. This is significant in a number of ways. Cell biology provides an existence proof that it is possible to make sophisticated machines on the nanoscale, and it provides a model for making such machines. It even provides a toolkit of components that can be isolated from living cells and reassembled in synthetic contexts – this is the enterprise of bionanotechnology. The correspondence of length scales also brings hope that nanotechnology will make it possible to make very specific and targeted interventions into biological systems, leading, it is hoped, to new and powerful methods for medical diagnostics and therapeutics.

Nanotechnology, then, is an eclectic mix of disciplines, including elements of chemistry, physics, materials science, electrical engineering, biology and biotechnology. The way this new discipline has emerged from many existing disciplines is itself very interesting, as it illustrates an evolution of the way science is organised and practised that has occurred largely in response to external events.

The founding myth of nanotechnology places its origin in a lecture given by the American physicist Richard Feynman in 1959, published in 1960 under the title “There’s plenty of room at the bottom”. This didn’t explicitly use the word nanotechnology, but it expressed in visionary and exciting terms the many technical possibilities that would open up if one was able to manipulate matter and make engineering devices on the nanoscale. This lecture is widely invoked by enthusiasts for nanotechnology of all types as laying down the fundamental challenges of the subject, its importance endorsed by the iconic status of Feynman as perhaps the greatest native-born American physicist. However, it seems that the identification of this lecture as a foundational document is retrospective, as there is not much evidence that it made a great deal of impact at the time. Feynman himself did not devote very much further work to these ideas, and the paper was rarely cited until the 1990s.

The word nanotechnology itself was coined by the Japanese scientist Norio Taniguchi in 1974 in the context of ultra-high precision machining. However, the writer who unquestionably propelled the word and the idea into the mainstream was K. Eric Drexler. Drexler wrote a popular and bestselling book “Engines of Creation”, published in 1986, which launched a futuristic and radical vision of a nanotechnology that transformed all aspects of society. In Drexler’s vision, which explicitly invoked Feynman’s lecture, tiny assemblers would be able to take apart and put together any type of matter atom by atom. It would be possible to make any kind of product or artefact from its component atoms at virtually no cost, leading to the end of scarcity, and possibly the end of the money economy. Medicine would be revolutionised; tiny robots would be able to repair the damage caused by illness or injury at the level of individual molecules and individual cells. This could lead to the effective abolition of ageing and death, while a seamless integration of physical and cognitive prostheses would lead to new kinds of enhanced humans. On the downside, free-living, self-replicating assemblers could escape into the wild, outcompete natural life-forms by virtue of their superior materials and design, and transform the earth’s ecosphere into “grey goo”. Thus, in the vision of Drexler, nanotechnology was introduced as a technology of such potential power that it could lead either to the transfiguration of humanity or to its extinction.

There are some interesting and significant themes underlying this radical, “Drexlerite” conception of nanotechnology. One of them is the idea of matter as software. Implicit in Drexler’s worldview is the idea that the nature of all matter can be reduced to a set of coordinates of its constituent atoms. Just as music can be coded in digital form on a CD or MP3 file, and moving images can be reduced to a string of bits, it’s possible to imagine any object, whether an everyday tool, a priceless artwork, or even a natural product, being coded as a string of atomic coordinates. Nanotechnology, in this view, provides an interface between the software world and the physical world; an “assembler” or “nanofactory” generates an object just as a digital printer reproduces an image from its digital, software representation. It is this analogy that seems to make the Drexlerian notion of nanotechnology so attractive to the information technology community.

Predictions of what these “nanofactories” might look like have a very mechanistic feel to them. “Engines of Creation” had little in the way of technical detail supporting it, and included some imagery that felt quite organic and biological. However, following the popular success of “Engines”, Drexler developed his ideas at a more detailed level, publishing another, much more technical book in 1992, called “Nanosystems”. This develops a conception of nanotechnology as mechanical engineering shrunk to atomic dimensions, and it is in this form that the idea of nanotechnology has entered the popular consciousness through science fiction, films and video games. Perhaps the best of all these cultural representations is the science fiction novel “The Diamond Age” by Neal Stephenson, whose conscious evocation of a future shaped by a return to Victorian values rather appropriately mirrors the highly mechanical feel of Drexler’s conception of nanotechnology.

The next major development in nanotechnology was arguably political rather than visionary or scientific. In 2000, President Clinton announced a National Nanotechnology Initiative, with funding of $497 million a year. This initiative survived, and even thrived on, the change of administration in the USA, receiving further support, and funding increases from President Bush. Following this very public initiative from the USA, other governments around the world, and the EU, have similarly announced major funding programs. Perhaps the most interesting aspect of this international enthusiasm for nanotechnology at government level is the degree to which it is shared by countries outside those parts of North America, Europe and the Pacific Rim that are traditionally associated with a high intensity of research and development. India, China, Brazil, Iran and South Africa have all designated nanotechnology as a priority area, and in the case of China at least there is some evidence that their performance and output in nanotechnology is beginning to approach or surpass that of some Western countries, including the UK.

Some of the rhetoric associated with the US National Nanotechnology Initiative in its early days was reminiscent of the vision of Drexler – notably, an early document was entitled “Nanotechnology: shaping the world atom by atom”. Perhaps it was useful that such a radical vision for the world changing potential of nanotechnology was present in the background; even if it was not often explicitly invoked, neither did scientists go out of their way to refute it.

This changed in September 2001, when a special issue of the American popular science magazine “Scientific American” contained a number of contributions that were stingingly critical of the Drexler vision of nanotechnology. The most significant of these were by the Harvard nano-chemist George Whitesides, and the Rice University chemist Richard Smalley. Both argued that the Drexler vision of nanoscale machines was simply impossible on technical grounds. Smalley’s contribution was perhaps the most resonant; he had won a Nobel prize for his discovery of a new form of nanoscale carbon, buckminsterfullerene[1], and so his contribution carried significant weight.

The dispute between Smalley and Drexler ran for a while longer, with a published exchange of letters, but its tone became increasingly vituperative. Nonetheless, the result has been that Drexler’s ideas have been largely discredited in both scientific and business circles. The attitude of many scientists is summed up by IBM’s Don Eigler, the first person to demonstrate the controlled manipulation of individual atoms: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worst. There may be scientists who feel otherwise, I just haven’t run into them.”[2]

Drexler has thus become a very polarising figure. My own view is that this is unfortunate. I believe that Drexler and his followers have greatly underestimated the technical obstacles in the way of his vision of shrunken mechanical engineering. Drexler does deserve credit, though, for pointing out that the remarkable nanoscale machinery of cell biology does provide an existence proof that a sophisticated nanotechnology is possible. However, I think he went on to draw the wrong conclusion from this. Drexler’s position is essentially that we will be able greatly to surpass the capabilities of biological nanotechnology by using rational engineering principles, rather than the vagaries of evolution, to design these machines, and by using stiff and strong materials such as diamond rather than the soft and floppy proteins and membranes of biology. I believe that this fails to recognise the fact that physics does look very different at the nanoscale, and that the design principles used in biology are optimised by evolution for this different environment[3]. From this, it follows that a radical nanotechnology might well be possible, but that it will look much more like biology than engineering.

Whether or in what form radical nanotechnology does turn out to be possible, much of what is currently on the market described as nanotechnology is very much more incremental in character. Products such as nano-enabled sunscreens, anti-stain fabric coatings, or “anti-ageing” creams certainly do not have anything to do with sophisticated nanoscale machines; instead they feature materials, coatings and structures which have some dimensions controlled on the nanoscale. These are useful and even potentially lucrative products, but they certainly do not represent any discontinuity with previous technology.

Between the mundane current applications of incremental nanotechnology, and the implausible speculations of the futurists, there are areas in which it is realistic to hope for substantial impacts from nanotechnology. Perhaps the biggest impacts will be seen in the three areas of energy, healthcare and information technology. It’s clear that there will be a huge emphasis in the coming years on finding new, more sustainable ways to obtain and transmit energy. Nanotechnology could make many contributions in areas like better batteries and fuel cells, but arguably its biggest impact could be in making solar energy economically viable on a large scale. The problem with conventional solar cells is not efficiency, but cost and manufacturing scalability. Plenty of solar energy lands on the earth, but the total area of conventional solar cells produced a year is orders of magnitude too small to make a significant dent in the world’s total energy budget. New types of solar cell using nanotechnology, and drawing inspiration from the natural process of photosynthesis, are in principle compatible with large-area, low-cost processing techniques like printing, and it’s not unrealistic to imagine this kind of solar cell being produced in huge plastic sheets at very low cost. In medicine, even if the vision of cell-by-cell surgery using nanosubmarines isn’t going to happen, the prospect of the effectiveness of drugs being increased and their side-effects greatly reduced through the use of nanoscale delivery devices is much more realistic. Much more accurate and faster diagnosis of diseases is also in prospect.

One area in which nanotechnology can already be said to be present in our lives is information technology. The continuous miniaturisation of computing devices has already reached the nanoscale, and this is reflected in the growing impact of information technology on all aspects of the life of most people in the West. It’s interesting that the economic driving force for the continued development of information technologies is no longer computing in its traditional sense, but largely entertainment, through digital music players and digital imaging and video. The continual shrinking of current technologies will probably continue through the dynamic of Moore’s law for ten or fifteen years, allowing at least another hundred-fold increase in computing power. But at this point a number of limits, both physical and economic, are likely to provide serious impediments to further miniaturisation. New nanotechnologies may alter this picture in two ways. It is possible, but by no means certain, that entirely new computing concepts such as quantum computing or molecular electronics may lead to new types of computer of unprecedented power, permitting the further continuation or even acceleration of Moore’s law. On the other hand, developments in plastic electronics may make it possible to make computers that are not especially powerful, but which are very cheap or even disposable. It is this kind of development that is likely to facilitate the idea of “ubiquitous computing” or “the internet of things”, in which it is envisaged that every artefact and product incorporates a computer able to sense its surroundings and to communicate wirelessly with its neighbours. One can see that as a natural, even inevitable, development of technologies like the radio frequency identification devices (RFID) already used as “smart barcodes” by shops like Walmart, but it is clear also that some of the scenarios envisaged could lead to serious concerns about loss of privacy and, potentially, civil liberties.
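As a rough sanity check on that “hundred-fold” figure, here is the arithmetic, assuming doubling times of between one and a half and two years; the doubling times are assumptions for illustration, not claims about any particular technology roadmap.

```python
# What Moore's-law-style scaling implies over ten to fifteen years,
# for assumed doubling times of 1.5 and 2 years.
for years in (10, 15):
    for doubling_time in (1.5, 2.0):
        factor = 2 ** (years / doubling_time)
        print(f"{years} years at one doubling every {doubling_time} years "
              f"-> roughly a {factor:,.0f}-fold increase")
```

Depending on the assumptions this gives anything from a few tens to about a thousand-fold, so “at least another hundred-fold” sits comfortably within that range.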

[1] Nobel Prize for chemistry, 1996, shared with his Rice colleague Robert Curl and the British chemist Sir Harold Kroto, from Sussex University.
[2] Quoted by Chris Toumey in “Reading Feynman Into Nanotech: Does Nanotechnology Descend From Richard Feynman’s 1959 Talk?” (to be published).
[3] This is essentially the argument of my own book “Soft Machines: Nanotechnology and life”, R.A.L. Jones, OUP (2004).

To be continued…

Where should I go to study nanotechnology?

The following is a message from my sponsor… or at least, the institution that pays my salary…

What advice should one give to young people who wish to make a career in nanotechnology? It’s a very technical subject, so you won’t generally get very far without a good degree level grounding in the basic, underlying science and technology. There are some places where one can study for a first degree in nanotechnology, but in my opinion it’s better to obtain a good first degree in one of the basic disciplines – whether a pure science, like physics or chemistry, or an engineering specialism, like electronic engineering or materials science. Then one can broaden one’s education at the postgraduate level, to get the essential interdisciplinary skills that are vital to make progress in nanotechnology. Finally, of course, one usually needs the hands-on experience of research that most people obtain through the apprenticeship of a PhD.

In the UK, the first comprehensive, Masters-level course in Nanoscale Science and Technology was developed jointly by the Universities of Leeds and Sheffield (I was one of the founders of the course). As the subject has developed and the course has flourished, it has been expanded to offer a range of different options – the Nanotechnology Education Portfolio – nanofolio. Currently, we offer MSc courses in Nanoscale Science and Technology (the original, covering the whole gamut of nanotechnology from the soft to the hard), Nanoelectronics and nanomechanics, Nanomaterials for nanoengineering and Bionanotechnology.

The course website also has a general section of resources that we hope will be useful to anybody interested in nanotechnology, beginning with the all-important question “What is nanotechnology?” Many more resources, including images and videos, will be added to the site over the coming months.

Five challenges for nano-safety

This week’s Nature has a Commentary piece (editor’s summary here, subscription required for full article) from the great and good of nanoparticle toxicology, outlining what they believe needs to be done, in terms of research, to ensure that nanotechnology is developed safely. As they say, “fears over the possible dangers of some nanotechnologies may be exaggerated, but they are not necessarily unfounded,” and without targeted and strategic risk research public confidence could be lost and innovation held up through fear of litigation.

Their list of challenges is intended to form a framework for research over the next fifteen years; the wishlist is as follows:

  • Develop instruments to assess exposure to engineered nanomaterials in air and water, within the next 3–10 years.
  • Develop and validate methods to evaluate the toxicity of engineered nanomaterials, within the next 5–15 years.
  • Develop models for predicting the potential impact of engineered nanomaterials on the environment and human health, within the next 10 years.
  • Develop robust systems for evaluating the health and environmental impact of engineered nanomaterials over their entire life, within the next 5 years.
  • Develop strategic programmes that enable relevant risk-focused research, within the next 12 months.
Some might think it slightly odd that what amounts to a research proposal is being published in Nature. The authors give a positive reason for stressing this programme now: “Nanotechnology comes at an opportune time in the history of risk research. We have cautionary examples from genetically modified organisms and asbestos industries that motivate a real interest, from all stakeholders, to prevent, manage and reduce risk proactively.” Some indication of the potential downside of failing to be seen to act on this comes from the recent results of a citizens’ jury on nanotechnology in Germany, reported today here (my thanks to Niels Boeing for bringing this to my attention). These findings seem notably more sceptical than the findings of similar processes in the UK.

Nanotechnology and the food industry

The use of nanotechnology in the food industry seems to be creeping up the media agenda at the moment. The Times on Saturday published an extended article by Vivienne Parry in its “Body and Soul” supplement, called Food fight on a tiny scale. As the title indicates, the piece is framed around the idea that we are about to see a rerun of the battles about genetic modification of food in the new context of nano-engineered foodstuffs. Another article appeared in the New York Times a few weeks ago: Risks of engineering a better ice cream.

Actually, apart from the rather overdone references to a potential consumer backlash, both articles are fairly well-informed. The body of Vivienne Parry’s piece, in particular, makes it clear why nanotechnology in food presents a confusingly indistinct and diffuse target. Applications in packaging, for example in improving the resistance of plastic bottles to gas permeation, are already with us and are relatively uncontroversial. Longer-range visions of “smart packaging” also offer potential consumer benefits, but may have downsides yet to be fully explored. More controversial, potentially, is the question of the addition of nanoscaled ingredients to food itself.

But this issue is hard to pin down, simply because so much of food is made up of components which are naturally nanoscaled, and much of traditional cooking and food processing consists of manipulating this nanoscale structure. To give just one example, the traditional process of making whey cheeses like ricotta consists of persuading whey proteins like beta-lactoglobulin to form nanoparticles each containing a small number of molecules, and then getting those nanoparticles to aggregate in an open gel structure, giving the cheese its characteristic mechanical properties. The first example in the NY Times article – controlling the fat particle size in ice cream to get richer-feeling low-fat ice cream – is best understood as simply an incremental development of conventional food science, which uses the instrumentation and methodology of nanoscience to better understand and control food nanostructure.

There is, perhaps, more apparent ground for concern with food additives that are prepared in a nanoscaled form and directly added to foods. The kinds of molecules we are talking about here are molecules which add colour, flavour and aroma, and increasingly molecules which seem to confer some kind of health benefit. One example of this kind of thing is the substance lycopene, which is available from the chemical firm BASF as a dispersion of particles which are a few hundred nanometers in size. Lycopene is the naturally occurring dye molecule that makes tomatoes red, for which there is increasing evidence of health benefits (hence the unlikely-sounding claim that tomato ketchup is good for you). Like many other food component molecules, it is not soluble in water, but it is soluble in fat (as anyone who has cooked a tomato sauce based on olive oil or butter will know). Hence, if one wants to add it to a water-based product, like a drink, one needs to disperse it very finely for it to be available to be digested.

One can expect, then, more products of this kind, in which a nanoscaled preparation is used to deliver a water- or oil-soluble ingredient, often of natural origin, which on being swallowed will be processed by the digestive system in the normal way. What about the engineered nanoparticles that are soluble in neither oil nor water, and that have raised toxicity concerns in other contexts? These are typically inorganic materials, like carbon in its fullerene forms, or titanium dioxide, as used in sunscreen, or silica. Some of these inorganic materials are used in the form of micron-scale particles as food additives. It is conceivable (though I don’t know of any examples) that nanoscaled versions might be used in food, and that these might fall within a regulatory gap in the current legal framework. I talked about the regulatory implications of this, in the UK, a few months ago in the context of a consultation document issued by the UK’s Food Standards Agency. The most recent research report from the UK government’s Nanotechnology Research Coordination Group reveals that the FSA has commissioned a couple of pieces of research about this, but the FSA informs me that it’s too early to say much about what these projects have found.

I’m guessing that the media interest in this area has arisen largely from some promotional activity from the nanobusiness end of things. The consultancy Cientifica recently released a report, Nanotechnologies in the food industry, and there’s a conference in Amsterdam this week on Nano and Microtechnologies in the Food and Healthfood Industries.

I’m on my way to London right now, to take part in a press briefing on Nanotechnology in Food at the Science Media Centre. My family seems to be interacting a lot with the press at the moment, but I don’t suppose I’ll do as well as my wife, whose activities last week provoked this classic local newspaper headline in the Derbyshire Times: School Axe Threat Fury. And people complain about scientific writing being too fond of stacked nouns.

A brief update

My frequency of posting has gone down in the last couple of weeks due to a combination of excessive busy-ness and a not wholly successful attempt to catch up with stuff before going on holiday. Here’s a brief overview of some of the things I would have written about if I’d had more time.

The Nanotechnology Engagement Group (which I chair) met last week to sketch out some of the directions of its second policy report, informed in part by an excellent workshop – Terms of Engagement – held in London a few weeks ago. The workshop brought together policy-makers, practitioners of public engagement, members of the public who had been involved in public engagement events about nanotechnology, and scientists, to explore the different expectations and aspirations these different actors have, and the tensions that arise when these expectations aren’t compatible.

The UK government’s funding body for the physical sciences, EPSRC, held a town meeting to discuss its new draft nanotechnology strategy last week. About 50 of the UK’s leading nanoscientists attended. To summarise the mood of the meeting: people were pleased that EPSRC was drawing up a strategy, but they thought that the tentative plan was not nearly ambitious enough. EPSRC and its Strategic Working Group on Nanotechnology (of which I am a member) will be revising the draft strategy in line with these comments, and the result should be presented to EPSRC Council for approval in October.

The last two issues of Nature have much to interest the nanotechnologist. Nanotubes unwrapped introduces the idea of using exfoliated graphite as a reinforcing material in composites; this should produce many of the advantages that people hope for in nanotube composites (but which have so far not fully materialised) at much lower cost. Spintronics at the atomic level describes a very elegant experiment in which a single manganese atom is introduced as a substitutional dopant on a gallium arsenide surface using a scanning tunnelling microscope, to probe its magnetic interactions with the surroundings. This week’s issue also includes a very interesting set of review articles about microfluidics, including pieces by George Whitesides and Harold Craighead, to which there is free access.

Rob Freitas has put together a website for his Nanofactory collaboration. Having complained on this blog before that my own critique of MNT proposals has been ignored by MNT proponents, I should in fairness recognise that this site has a section about technical challenges which explicitly acknowledges such critiques with these positive words:
“This list, which is almost certainly incomplete, parallels and incorporates the written concerns expressed in thoughtful commentaries by Philip Moriarty in 2005 and Richard Jones in 2006. We welcome these critiques and would encourage additional constructive commentary – and suggestions for additional technical challenges that we may have overlooked – along similar lines by others.”

Finally, in a not totally unrelated development, the UK’s funding council, EPSRC, will be running an Ideas Factory on the subject of Matter compilation via molecular manufacturing: reconstructing the wheel. The way this programme works is that participants spend a week generating new ideas and collaborations, and at the end of it £1.45 million of funding is guaranteed for the best proposals. I’ve been asked to act as the director of this activity, which should take place early in the New Year.

Soft soaping hard matter

Self-assembly is an elegant and scalable way of making complex nanoscale structures. But it only works for soft matter – the archetypal self-assembling systems are bars of soap and pots of hair gel; they’re soft because the energies that cause their components to stick together are comparable with the energies of thermal agitation. Is there a way of overcoming this limitation, and using self-assembly to make complex nanoscale structures from hard materials, like ceramics and (inorganic) semiconductors? There is – one can use the soft structure to template the synthesis of the harder material, so that the hard material takes up the intricate form of the soft, self-assembled structure one starts with. It’s possible to use this templating technique to make glass-like materials, using so-called sol-gel chemistry. But up to now it’s not been possible to make templated, nanostructured elemental semiconductors like silicon or germanium. Two papers in this week’s Nature (Editor’s summary, with links to full articles and commentary, for which subscription is required) report the achievement of this goal for the case of germanium.
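To put “comparable with the energies of thermal agitation” into numbers, here is a quick comparison of kT at room temperature with some typical sticking energies. The interaction energies are rough, order-of-magnitude textbook values, not numbers taken from these papers.

```python
# Compare thermal energy at room temperature with typical interaction energies.
# The interaction energies are rough order-of-magnitude values.
K_B = 1.381e-23          # J per K, Boltzmann constant
EV = 1.602e-19           # J per electron volt
T = 298.0                # K, room temperature

kT_eV = K_B * T / EV
print(f"kT at {T:.0f} K is about {kT_eV:.3f} eV")

interactions_eV = {
    "van der Waals contact": 0.02,   # rough
    "hydrogen bond":         0.2,    # rough
    "covalent C-C bond":     3.6,    # rough
}
for name, energy in interactions_eV.items():
    print(f"{name:21s}: roughly {energy / kT_eV:5.1f} kT")
```

The weak interactions that hold soapy, self-assembled structures together amount to no more than a few kT, so thermal agitation can continually make and break them and the structure can find its equilibrium form; the covalent bonds of a hard semiconductor are more like a hundred kT, which is why hard materials don’t self-assemble by themselves and have to be templated instead.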

To do this, the first requirement is a chemistry for synthesising germanium that works in solution at moderate temperatures. No such chemistry exists that uses water, so another solvent system is needed, together with a compatible surfactant that self-assembles in this solvent. The two papers manage to overcome these barriers in ways that are different in detail, but similar in principle. Sarah Tolbert’s group, from UCLA, used ethylene diamine as the solvent and the cationic surfactant CTEAB (a very similar molecule to that found in some mild domestic disinfectants) to form the self-assembled nanostructures, which in their case took the form of hexagonally packed rods. Mercouri Kanatzidis’s group at Michigan State used formamide as the solvent and a somewhat different cationic surfactant (EMBHEAB). Both groups used variants of the so-called Zintl salts, in which germanium is combined with a reactive metal like potassium or magnesium.

In both cases the germanium is disordered on the atomic scale, but with good long-range order on the larger length-scales, which reflects the relative perfection of the original self-assembled soapy structure. The UCLA group managed to remove the surfactant, leaving a nicely hydrogen-terminated germanium. The Michigan State group were unable to get rid of their surfactant, but on the positive side the structure they formed was the very beautiful and potentially useful gyroid phase, a high-symmetry structure (see the picture) in which both the material and the pores are continuous. Immediate uses of these structures follow from the fact that the optoelectronic properties of the material are strongly affected by its nanostructured form, and can be further changed by adsorption of matter on the semiconductor’s surfaces, offering potential sensor applications.

[Image: the gyroid phase]

The gyroid phase, a cubic bicontinuous structure formed by some self-assembling surfactant systems. This structure has now been formed from elemental germanium using a templating process.

Regulatory concerns about nanotechnology and food

The UK Government’s Food Standards Agency has issued a draft report about the use of nanotechnology in food and the regulatory implications this might have. The report can be downloaded here; it is now open for public consultation, and comments are invited by July 14th.

Observers could be forgiven some slight bemusement when it comes to the potential applications of nanotechnology to food, in that, depending entirely on one’s definition of nanotechnology, these could encompass either almost everything or almost nothing. As the FSA says on its website: “In its widest sense, nanotechnology and nanomaterials are a natural part of food processing and conventional foods, as the characteristic properties of many foods rely upon nanometre sized components (e.g. nanoemulsions and foams).” To give just one example, the major protein component of milk – casein – is naturally present in the form of clusters of molecules tens of nanometers in size, so most of the processes of the dairy industry involve the manipulation of naturally occurring nanoparticles. On the other hand, in terms of the narrow focus on engineered nanoparticles that has developed at the applications end of nanotechnology, the current impact on food is rather small. In fact, the FSA states categorically in the report: “The Agency is not aware of any examples of manufactured nanoparticles or other nanomaterials being used in food currently sold in the UK.”

In terms of the narrow focus on engineered nanoparticles, it is clear that there is indeed a regulatory gap at the moment. The FSA states that, if a food ingredient were to be used in a new, nanoscale form, then currently there would be no need to pass any new regulatory hurdles. However, the FSA believes that a more general protection would step in as a backstop – “in such cases, the general safety articles of the EU Food Law Regulation (178/2002) would apply, which require that food placed on the market is not unsafe.”

So, how likely is it that this situation, and subsequent problems, might arise? One needs first to look at those permitted food additives that are essentially insoluble in oil or water. These include (in the EU) some inorganic materials that have been used in nanoparticulate form in non-food contexts, including titanium dioxide, silicon dioxide, some clay-based materials, and the metals aluminium, silver and gold. Insoluble organic materials include cellulose, in both powdered and microcrystalline forms. The latter is an interesting case because it provides a precedent for regulations that do specify size limits – the FSA report states that “The only examples in the food additives area that specifically limits the presence of small particles is the specification for microcrystalline cellulose, where the presence of small particles (< 5 microns) is limited because of uncertainties over their safety.” The FSA seems fairly confident that if necessary similar amendments could quickly be made in the case of other materials. But there remains the problem that currently there isn’t, as far as I can see, a fail-safe method by which the FSA could be alerted to the use of such nanomaterials and any problems they might cause. On the other hand, it’s not obvious to me why one might want to use these sorts of materials in a nanoparticulate form in food. Titanium dioxide, for example, is used essentially as a white pigment, so there wouldn’t be any point using it in a transparent, nanoscale form.