On my nanotechnology bookshelf

Following my rather negative review of a recent book on nanotechnology, a commenter asked me for some more positive recommendations about books on nanotechnology that are worth reading. So here’s a list of nanotechnology books old and new with brief comments. The only criterion for inclusion on this list is that I have a copy of the book in question; I know that there are a few obvious gaps. I’ll list them in the order in which they were published:

Engines of Creation, by K. Eric Drexler (1986). The original book which launched the idea of nanotechnology into popular consciousness, and still very much worth reading. Given the controversy that Drexler has attracted in recent years, it’s easy to forget that he’s a great writer, with a very fertile imagination. What Drexler brought to the idea of nanotechnology – which was then dominated, on the one hand, by precision mechanical engineering (this is the world from which the word nanotechnology, coined by Taniguchi, originally came), and, on the other, by the microelectronics industry – was an appreciation of the importance of cell biology as an exemplar of nanoscale machines and devices and of ultra-precise nanoscale chemical operations.

Nanosystems: Molecular Machinery, Manufacturing, and Computation, by K. Eric Drexler (1992). This is Drexler’s technical book, outlining his particular vision of nanotechnology – “the principles of mechanical engineering applied to chemistry” – in detail. Very much in the category of books that are often cited, but seldom read – I have, though, read it, in some detail. The proponents of the Drexler vision are in the habit of dismissing any objection with the words “it’s all been worked out in ‘Nanosystems’”. This is often not actually true; despite the deliberately dry and textbook-like tone, and the many quite complex calculations (which are largely based on science that was certainly sound at the time of writing, though there are a few heroic assumptions that need to be made), many of the central designs are left as outlines, with much detail left to be filled in. My ultimate conclusion is that this approach to nanotechnology will turn out to have been a blind alley, though in the process of thinking through the advantages and disadvantages of the mechanical approach we will have learned a lot about how radical nanotechnology will need to be done.

Molecular Devices and Machines: A Journey into the Nanoworld, by Vincenzo Balzani, Alberto Credi and Margherita Venturi (2003). The most recent addition to my bookshelf, I’ve not finished reading it yet, but it’s good so far. This is a technical (and expensive) book, giving an overview of the approach to radical nanotechnology through supramolecular chemistry. This is perhaps the part of academic nanoscience that is closest to the Drexler vision, in that the explicit goal is to make molecular scale machines and devices, though the methods and philosophy are rather different from the mechanical approach. A must, if you’re fascinated by cis-trans isomerisation in azobenzene and intermolecular motions in rotaxanes (and if you’re not, you probably should be).

Bionanotechnology: Lessons from Nature, by David Goodsell (2004). I’m a great admirer of the work of David Goodsell as a writer and illustrator of modern cell biology, and this is a really good overview of the biology that provides both inspiration and raw materials for nanobiotechnology.

Soft Machines: Nanotechnology and Life, by Richard Jones (2004). Obviously I can’t comment on this, except to say that three years on I wouldn’t have written it substantially differently.

Nanotechnology and Homeland Security: New Weapons for New Wars, by Daniel and Mark Ratner (2004). I still resent the money I spent on this cynically titled and empty book.

Nanoscale Science and Technology, eds Rob Kelsall, Ian Hamley and Mark Geoghegan (2005). A textbook at the advanced undergraduate/postgraduate level, giving a very broad overview of modern nanoscience. I’m not really an objective commentator, as I co-wrote two of the chapters (on bionanotechnology and macromolecules at interfaces), but I like the way this book combines the hard (semiconductor nanotechnology and nanomagnetism) and the soft (self-assembly and bionano).

Nanofuture: What’s Next For Nanotechnology, by J. Storrs Hall (2005). Best thought of as an update of Engines of Creation, this is an attractive and well-written presentation of the Drexler vision of nanotechnology. I entirely disagree with the premise, of course.

Nano-Hype: The Truth Behind the Nanotechnology Buzz, by David Berube (2006). A book, not about the science, but about nanotechnology as a social and political phenomenon. I reviewed it in detail here. I’ve been referring to it quite a lot recently, and am increasingly appreciating the dry humour hidden within its rather complete historical chronicle.

The Dance of Molecules: How Nanotechnology is Changing Our Lives, by Ted Sargent (2006). I reviewed this one here; it’s probably fairly clear that I didn’t like it much.

The Nanotech Pioneers: Where Are They Taking Us?, by Steve Edwards (2006). In contrast to the previous one, I did like this book, which I can recommend as a good, insightful and fairly nanohype-free introduction to the area. I’ve written a full review of this, which will appear in “Physics World” next month (and here also, copyright permitting).

Another draft nano-taxonomy

It’s clear to most people that the term nanotechnology is almost impossibly broad, and that to be useful it needs to be broken up into subcategories. In the past I’ve distinguished between incremental nanotechnology, evolutionary nanotechnology and radical nanotechnology, on the basis of the degree of discontinuity with existing technologies. I’ve been thinking again about classifications, in the context of the EPSRC review of nanotechnology research in the UK; here one of the things we want to be able to do is classify the research that’s currently going on, so that it will be easier to identify gaps and weaknesses. Here’s an attempt at providing such a classification. It is based partly on the classification that EPSRC developed last time it reviewed its nanotechnology portfolio, 5 years ago, and it also takes into account the discussion we had at our first meeting and a resulting draft from the EPSRC program manager, but I’ve re-ordered it in what I think is a logical way and tried to provide generic definitions for the sub-headings. Most pieces of research would, of course, fit into more than one category.

Enabling science and technology
1. Nanofabrication
Methods for making materials, devices and structures with dimensions less than 100 nm.
2. Nanocharacterisation and nanometrology
Novel techniques for characterisation, measurement and process control for dimensions less than 100 nm.
3. Nano-modelling
Theoretical and numerical techniques for predicting and understanding the behaviour of systems and processes with dimensions less than 100 nm.
4. Properties of nanomaterials
Size-dependent properties of materials that are structured on dimensions of 100 nm or below.
Devices, systems and machines
5. Bionanotechnology
The use of nanotechnology to study biological processes at the nanoscale, and the incorporation of nanoscale systems and devices of biological origin in synthetic structures.
6. Nanomedicine
The use of nanotechnology for diagnosing and treating injuries and disease.
7. Functional nanotechnology devices and machines
Nanoscale materials, systems and devices designed to carry out optical, electronic, mechanical and magnetic functions.
8. Extreme and molecular nanotechnology
Functional devices, systems and machines that operate at, and are addressable at, the level of a single molecule, a single atom, or a single electron.
Nanotechnology, the economy, and society
9. Nanomanufacturing
Issues associated with the commercial-scale production of nanomaterials, nanodevices and nanosystems.
10. Nanodesign
The interaction of individuals and society with nanotechnology; the design of products based on nanotechnology that meet human needs.
11. Nanotoxicology and the environment
Distinctive toxicological properties of nanoscaled materials; the behaviour of nanoscaled materials, structures and devices in the environment.
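Since most pieces of research will sit in more than one category, a taxonomy like this is easy to make operational for a portfolio review. Here’s a minimal Python sketch – the grant titles and tags are invented, purely for illustration – showing how projects could be tagged against the eleven categories and gaps counted:

```python
from collections import Counter

# The eleven draft categories, numbered as in the post.
TAXONOMY = {
    1: "Nanofabrication",
    2: "Nanocharacterisation and nanometrology",
    3: "Nano-modelling",
    4: "Properties of nanomaterials",
    5: "Bionanotechnology",
    6: "Nanomedicine",
    7: "Functional nanotechnology devices and machines",
    8: "Extreme and molecular nanotechnology",
    9: "Nanomanufacturing",
    10: "Nanodesign",
    11: "Nanotoxicology and the environment",
}

# A project can carry several tags, so each one maps to a set of categories.
# These example projects are hypothetical.
grants = {
    "Self-stratifying polymer films": {1, 4},
    "Single-molecule transistors": {7, 8},
    "Fullerene toxicity in fish": {11},
}

def category_counts(grants):
    """Count grants per category; under-represented areas show up as zeros."""
    counts = Counter()
    for cats in grants.values():
        counts.update(cats)
    return {n: counts.get(n, 0) for n in TAXONOMY}
```

With the toy portfolio above, `category_counts` would immediately flag categories 2, 3, 5, 6, 9 and 10 as gaps.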

All comments gratefully received!

Pitching to Intel

There was some mockery of Apple in nanotech circles for branding their latest MP3 player the iPod Nano, merely, it seemed, because it was impressively thin (at least compared to my own much-loved first generation model). Rationalisations that its solid state memory was made with a 65 nm process didn’t seem to cut much ice with the sceptics. Nonetheless, what feels superficially obvious, that microelectronics companies are deeply involved with nanotechnology, both in their current products, and in their plans for the future, really is true.

This was made clear to me yesterday; I was in Newcastle, at a small meeting put together by the regional technology transfer organisation CENAMPS, in which nano academics from some northern UK Universities were pitching their intellectual wares to a delegation from Intel. Discussion ranged from near term materials science to the further reaches of quantum computing and new neuroscience-inspired, adaptive and multiply connected paradigms for computing without software.

The research needs of Intel, and other microelectronics companies, are made pretty clear by the International Technology Roadmap for Semiconductors. In the near term, what seem on the surface to be merely incremental improvements in reducing critical dimensions need to be underwritten by simultaneous improvements in all kinds of unglamorous but vital materials, like dielectrics, resists, and glues. Even to achieve their current performance, these materials are already pretty sophisticated, and to deliver ever-more demanding requirements for properties like dielectric constant and thermal expansivity will rely even more on nanoscale control of the structure of these materials. Much of this activity takes place under the radar of casual observers, because it consists of business-to-business transactions in unglamorous-sounding sectors like chemicals and adhesives, but the volumes, values (and margins) are pretty substantial. Meanwhile, as their products shrink, these companies are huge and demanding consumers of nanometrology products.

In the medium term, keeping Moore’s law on track is going to demand that CMOS gets a radical makeover. Carbon nanotube transistors are a serious possibility – they’re now in the roadmap – but the obstacles to integrating them in large-scale systems are formidable, and we’re only talking about a window of ten years or so to do this. And then, beyond 2020, we need to go beyond CMOS to something quite revolutionary, like molecular electronics or quantum computing. This is a daunting prospect, given that these technologies barely exist in the lab.

And what will be the societal and economic forces driving the development of nano-electronics twenty years out? Now, it’s the need to sell every teenager an MP3 player and a digital camera. Tomorrow, it’s going to be the end of broadcast television, and putting video-on-demand systems into every family home. By 2025, it’s most likely going to be the need to keep the ageing baby boomers out of old people’s homes and hospitals, and able to live independently. Robots equipped with something much closer to real intelligence, ubiquitous sensing and continuous medical monitoring look like good bets to me.

Grey Goo won’t get you across the Valley of Death

The UK’s main funder of academic nanoscience and nanotechnology – the Engineering and Physical Science Research Council (EPSRC) – has published a report of a review of its nanotechnology portfolio held last summer. The report – released in a very low key way last November – is rather critical of the UK’s nanotechnology performance, noting that it falls below what the UK would hope for both in quality and in quantity, and recommends an urgent review of the EPSRC’s strategy in this area. This review is just getting under way (and I’m one of the academics on the working party).

Unlike many other countries, the UK has no dedicated nanotechnology program (the Department of Trade and Industry does have a program in micro- and nano-technology, but this is very near-term and focused on current markets and applications). With the exception of two (small scale, by international comparisons) nanotechnology centres, at Oxford and Cambridge, nanoscience and nanotechnology proposals are judged in competition with other proposals in physics, chemistry and materials science. There’s no earmarked funding for nanotechnology, and the amount of funding given to the area is simply the aggregate of lots of decisions on individual proposals. This means, of course, that even estimating the total size of the UK’s nanotechnology spend is a difficult task that depends on a grant-by-grant judgement of what is nanotechnology and what is not.

This situation isn’t entirely bad; it probably means that the UK has been less affected by the worst excesses of academic nanohype than countries in which funding has been much more directly tied to the nanotechnology brand. But it does mean that the UK’s research in this area has lacked focus, has been developed without any long-term strategy, and has seen very little attempt to build research capacity. Now is probably not a bad time to look ahead at where the interesting opportunities in nanotechnology will be, not next year, but in ten to fifteen years’ time, and try to refocus academic nanoscience in a way that will create those longer-term opportunities.

One of the perceptions mentioned in the report was that the quality of work was rather patchy, particularly in areas like nanomaterials, with some work of very moderate quality being done. One panelist on the theme day review memorably called this sort of research “grey goo” – work that is neither particularly exciting scientifically nor, despite its apparent applied quality, particularly likely to be commercialised. Everyone in government is concerned about the so-called “valley of death” – that trough in the cycle of commercialisation of a good idea which comes after the basic research has been done, but when products and revenues still seem a long way off. Much government intervention aims to get good ideas across this melodramatically named rift, but this carries a real danger. Clearly, funding high quality basic science doesn’t help you here, but there’s a horribly tempting false syllogism – that if a proposal isn’t interesting fundamental science, then it might be just the sort of innovative applied research that gets good ideas closer to market. Well, it might be, but it’s probably more likely simply to be mediocre “sort-of-applied” work that will never yield a commercial product – it might be “grey goo”. I don’t think this is solely a UK problem – in my view every funding agency should ask itself: ‘are we funding “grey goo” in a doomed attempt to get across the “valley of death”?’

Eat up your buckyballs (for your liver’s sake)

The discovery by Eva Oberdorster that the molecule C60, or buckminsterfullerene, caused brain damage in largemouth bass received huge publicity when it was first reported (see here for a relatively level-headed account). This work has now become one of the main underpinning texts of the belief that there is something uniquely dangerous about nanomaterials. It’s interesting, though perhaps not surprising, that a recent article in the American Chemical Society journal Nano Letters, which reaches exactly the opposite conclusion (abstract; subscription required for the full article), has received no publicity at all.

In this work, from Fathi Moussa’s group in the Department of Pharmacy at Université Paris XI, it is shown that not only did C60 have no toxic effect on the rats and mice it was tested on; it also protected rats’ livers from the toxic effects of carbon tetrachloride, an effect ascribed to C60’s powerful anti-oxidant properties. The paper is not reticent in its criticism of the earlier work; it ascribes the apparent toxic effects previously observed to the fact that the C60 was prepared in an organic solvent, THF, which was not completely removed when a water suspension of C60 was prepared. In short, it was the toxic effects of THF that were affecting the unfortunate fish, not those of C60. The tone of these comments is surprisingly caustic for a peer-reviewed paper, and it finishes with a note of magnificent Gallic sarcasm. Referring to reports that naturally occurring fullerenes (presumably from the soot of forest fires) have been discovered in fossil dinosaur eggs, the authors ask: “we feel that it cannot be said that the C60 discovered in dinosaur eggs was the origin of the mass extinction of these animals, or was it?”

I should stress that I’m not advocating that Soft Machines readers should immediately consume a large quantity of C60 and then start abusing solvents, nor should we now assume that fullerenes are entirely safe and without potential environmental problems. But there are a couple of lessons we should draw from this. Firstly, toxicology is not necessarily easy to get right. But perhaps the most important lesson is that learning about science from press releases is very misleading. What appear to be the big breakthroughs at the time get lots of coverage, but the follow-up work, which can modify or even completely contradict the initial big story, barely gets noticed.

Nanotubes: not as perfect as one might like

Carbon nanotubes are often imagined to be structures of great perfection and regularity, but the reality is that, like virtually all materials we encounter, they will have defects – places where there’s a mistake in the crystal structure, like a missing atom or a wrongly connected bond. Defects are tremendously important in materials science, because they’re what stop materials from being anything like as strong as you would estimate they ought to be from a simple calculation. A recent paper in Nature Materials (abstract here, subscription required for full paper) provides what is, I think, the first accurate measurement of defect densities in single walled carbon nanotubes. For typical nanotubes, produced by chemical vapour deposition, one finds one defect every four microns of nanotube length.

It’s these atomic-level flaws that will, in practice, limit both the electronic and the mechanical properties of carbon nanotubes. The study, by Philip Collins and coworkers at UC Irvine, uses a new technique for decorating the defects electrochemically. It’s not able to distinguish between different types of defects, which could include a substitutional dopant, a broken bond passivated by a further chemical group, or a mechanical strain or kink, as well as what is perhaps the theoretically best studied nanotube defect – the Stone–Wales defect. The latter occurs if, in a group of four hexagons of carbon, one bond is rotated, turning the four hexagons into two hexagons, a pentagon and a heptagon.

The figure of one defect per 4 microns of tube is, in one way, rather impressive – it translates into there being only one defect for every 10 thousand billion atoms. This is a similar level to the best quality silicon, which is pretty much the most perfect crystalline material available. But, on the other hand, given the essentially one-dimensional nature of a nanotube, it’s pretty significant, since a single defect in a length of nanotube being used in an electronic device would dramatically change its characteristics. And the presence of all these weak spots is likely to mean that it’s going to be difficult to make a macroscale nanotube cable whose strength approaches the theoretical estimates people have been making, for example in connection with the proposed space elevator.

Drexler vision endorsed by Princeton physicists (or their publicists, at least)

A recent press release, describing a paper by Princeton theoretical physicists Rechtsman, Stillinger, and Torquato, begins with the stirring words “It has been 20 years since the futurist Eric Drexler daringly predicted a new world where miniaturized robots would build things one molecule at a time. The world of nanotechnology that Drexler envisioned is beginning to come to pass….” The mention of Drexler has ensured that the release got a mention on the Foresight Institute’s blog, Nanodot, but Christine Peterson disarmingly appeals for help in understanding what on earth the release is talking about. Fair enough, in my view; whatever one thinks of the Drexler reference, this is one of the worst written press releases I’ve seen for some time.

A look at the original paper, in Physical Review Letters (abstract here, preprint here; subscription required for the published paper), gives us more of a clue. The backstory here is the fact that collections of spherical particles in the size range of tens to hundreds of nanometers can (if they’re all the same size) spontaneously self-assemble to form ordered arrays, often called “colloidal crystals”. The gem-stone opal is a natural example of this phenomenon; it’s formed from naturally occurring silica nanoparticles, and its iridescent colours are a result of light diffraction from the crystals. It is these striking optical properties that have raised research interest in synthetic analogues; for some sets of parameters it’s predicted that these materials might have an “optical bandgap” – a range of wavelengths of light that can’t get through the crystal in any direction. This would be useful, for example, in making highly efficient solid state lasers.

The problem is that most systems of simple spheres form close-packed crystal structures – of the kind you get when stacking oranges. But it would be useful if one could make colloidal crystals with different structures, such as the diamond structure, which have more interesting potential optical properties. In principle one might be able to do this by tinkering with the interaction potentials between the particles. Close-packed structures occur because the particles simply attract each other more and more until they touch, at which point they resist further compression. What this paper shows is that you can design potentials to produce the crystal structure you want – perhaps you need the particles to attract each other up to a certain distance, then softly repel until they get a bit closer, and then start to attract again until they touch. This is an elegant piece of statistical mechanics. Of course, having designed the potential theoretically, you still need to design a system that in practice has these properties. One can imagine how to do this in principle, perhaps by having colloids that combine a tunable surface charge with a soft polymer coating, but such a demonstration needs a lot of further experimental work.
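To make the “attract, then softly repel, then attract again” idea concrete, here’s a toy Python sketch of a pair potential with exactly that shape: a Lennard-Jones-style attraction near contact, plus a soft Gaussian repulsive shoulder at a slightly larger separation. The functional form and every parameter here are my own inventions for illustration – this is emphatically not the potential derived in the paper:

```python
import numpy as np

def designed_potential(r, eps=1.0, sigma=1.0, a=0.6, r0=1.6, width=0.15):
    """Toy 'designed' pair potential: Lennard-Jones attraction plus a soft
    Gaussian repulsive shoulder centred at separation r0 (all units reduced).
    Purely illustrative; not the potential from the PRL paper."""
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    shoulder = a * np.exp(-((r - r0) / width) ** 2)
    return lj + shoulder

# Evaluate over a range of separations: the shoulder produces a deep minimum
# near contact, a soft repulsive barrier around r0, and a second shallow
# attractive region beyond it - the non-monotonic shape described above.
r = np.linspace(0.95, 2.5, 500)
v = designed_potential(r)
```

Plotting `v` against `r` (or just checking its sign at a few separations) shows the barrier; which crystal structure such a potential actually stabilises is, of course, exactly the hard statistical-mechanics question the paper addresses.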

Is this really “turning a central concept of nanotechnology on its head” ? Of course not. It’s a nice step forward in theoretical methods, but it’s absolutely in the mainstream of a well established research direction for obtaining interesting ordered structures by colloidal self-assembly. And as for the next sentence – “If the theory bears out – and it is in its infancy — it could have radical implications not just for industries like telecommunications and computers but also for our understanding of the nature of life” – I can only hope the authors are cringing as much as they should be at what their publicists have put out for them.

Updated with link to preprint Tuesday 20.50.

Understanding structure formation in thin polymer films

This month’s issue of Nature Materials has a paper from my group which provides new insight into the way structure can emerge in ultrathin polymer films by self-assembly. It’s easy to make a very uniform polymer film with a thickness somewhere between 5 and 500 nanometers; in a process called “spin-casting” you just flood a smooth, flat substrate with a solution of the polymer in an organic solvent like toluene, and then you spin the substrate round at a couple of thousand RPM. The excess solution flies off, leaving a thin layer from which the solvent quickly evaporates. This process is used all the time in laboratories and in industry; in the semiconductor industry it’s the way in which photoresist layers are laid down. If you use not a single polymer but a mixture of two polymers, then as the solvent is removed the two polymers will phase separate, like oil and water. What’s interesting is that sometimes they will break up into little blobs in the plane of the film, but other times they will split into two distinct layers, each of which might only be a few tens of nanometers thick. The latter situation, sometimes called “self-stratification”, can be potentially very useful. It’s an advantage for solar cells made from semiconducting polymers to have two layers like this, and Henning Sirringhaus, from Cambridge (whose company, Plastic Logic, is actively commercialising polymer electronics), has shown that you can make a polymer field effect transistor in which the gate dielectric layer spontaneously self-stratifies during spin-coating.
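As a rough illustration of how controllable spin-casting is: a commonly quoted rule of thumb has the final film thickness falling off roughly as the inverse square root of the spin speed, for a given solution. The prefactor in the sketch below is invented (in reality it depends strongly on polymer concentration and solvent volatility), so treat this as a sketch of the scaling only, not a recipe:

```python
import math

def film_thickness_nm(rpm, k=4500.0):
    """Rough rule-of-thumb estimate of final film thickness (nm) from spin
    speed in RPM: h ~ k / sqrt(omega).  The prefactor k is an assumed,
    illustrative value; it must be calibrated for a real solution."""
    return k / math.sqrt(rpm)

# Doubling the spin speed thins the film by a factor of sqrt(2):
t2000 = film_thickness_nm(2000)   # roughly 100 nm with this assumed prefactor
t4000 = film_thickness_nm(4000)
```

In practice one calibrates the prefactor by spinning a few test films and measuring them, then uses the scaling to dial in the thickness wanted.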

The paper (which can be downloaded as a PDF here) describes the experiments that Sasha Heriot, a postdoc in my group, did to try and disentangle what goes on in this complex situation. Our apparatus (which was built by my former graduate student, James Sharp, now a lecturer at Nottingham University) consists of a spin-coating machine in which a laser shines on the film as it spins; we detect both the light that is directly reflected and the pattern of light that is scattered out of the direct beam. The reflected light tells us how thick the film is at any point during the 5 seconds that the whole process takes, while the scattered light tells us about the lateral structure of the film. What we find is that after the spin-coating process starts, the film first stratifies vertically. As the solvent is removed, the interface separating the two layers becomes wavy, and this wave grows until the two layers break up, leaving the pattern of droplets that’s seen in the final film. We don’t exactly know why the interface between the two self-stratified layers becomes unstable, but we suspect it’s connected to how volatile the solvent is. When we do understand this mechanism properly, we should be able to design the spin-coating conditions to get the final structure we want.
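For the curious, the conversion from the oscillating reflected-light signal to a film thickness is simple interferometry: at normal incidence, each full interference fringe corresponds to a change in film thickness of λ/2n, where n is the refractive index of the film. A minimal sketch of that conversion, using typical numbers (a HeNe laser, a guessed refractive index) rather than the actual values from our experiment:

```python
def thickness_change_nm(n_fringes, wavelength_nm=633.0, refractive_index=1.45):
    """Thickness change inferred from counting interference fringes in the
    reflected intensity, at normal incidence: delta_h = N * lambda / (2 n).
    The wavelength and refractive index here are typical, assumed values."""
    return n_fringes * wavelength_nm / (2.0 * refractive_index)

# e.g. counting 10 fringes during spinning implies roughly 2.2 microns of
# solution has been lost (mostly evaporating solvent):
dt = thickness_change_nm(10)
```

The real analysis is more involved (the reflectivity depends on both layers, and the signal must be tracked in time), but this is the basic physics that lets the reflected beam act as a thickness gauge.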

The relevance of this is that this kind of solvent-based coating process is cheap and scalable to very large areas. The aim is to control the nanostructure of thin films of functional materials like semiconducting polymers simply by adjusting the processing conditions. We want to get the system to make itself as far as possible, rather than having to do lots of separate fabrication steps. If we can do this reliably, then this will get us closer to commercial processes for making, for example, very cheap solar cells using simple printing technology, or simple combinations of sensors and logic circuits by ink-jet printing.

Nanotechnology in the New Straits Times

My friend, colleague and collaborator from across the road in the chemistry department here at Sheffield, Tony Ryan, went to Malaysia and Singapore the week before last, and one result was this article in the New Straits Times, in which he gives a succinct summary of the current state of play in nanotechnology. He was rewarded by a mildly cross email this morning from K. Eric Drexler. Actually I think Tony’s interview is pretty fair to Drexler – he gives him a big place in the history of the subject, and on the vexed question of nanobots, he says “This popular misconception has been popularised by people who misunderstood the fantastic book Engines of Creation by K. Eric Drexler.”

There was also a useful corrective for those of us worried that nanotechnology is getting overexposed. The writer describes how the article originated from a “short, balding man in the public relations industry” who said about nanotechnology that it’s “the latest buzzword in the field of science and is making waves globally”. On the contrary, our journalist says… “Buzzword? It most certainly is not. My editor and I looked at each other and agreed that it is more a word that one hears ONLY ever so occasionally.”

Nanotube composites – deja vu all over again?

Carbon nanotubes are, in principle, about the strongest and stiffest materials we know of. The obvious way to exploit the strength and stiffness of fibrous materials like nanotubes is to use them to make a composite material, like the carbon fibre composites that are currently some of the strongest and lightest materials available for advanced applications such as the aerospace industry. But the development of nanotube composites has been disappointingly slow. To quote from a recent review in Current Opinion in Solid State and Materials Science (subscription required) – Carbon nanotube polymer composites – by Andrews and Weisenberger (University of Kentucky), “after nearly a decade of research, their potential as reinforcement for polymers has not been fully realized; the mechanical properties of derived composites have fallen short of predicted values”. One of the major problems has been the tendency of nanotubes to aggregate in bundles – for a composite to work well, the reinforcing fibres need to be evenly distributed through the matrix material.
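The “predicted values” the reviewers mention presumably come from estimates like the simple rule of mixtures, which gives an upper bound on the stiffness of a composite with aligned, well-dispersed, well-bonded fibres. A back-of-envelope sketch, using round textbook-style moduli rather than measured ones, shows why expectations for nanotube composites were so high:

```python
def rule_of_mixtures(vf, e_fibre_gpa, e_matrix_gpa):
    """Upper-bound stiffness (GPa) of a composite with volume fraction vf of
    aligned, well-dispersed fibres: E_c = Vf*E_f + (1 - Vf)*E_m."""
    return vf * e_fibre_gpa + (1.0 - vf) * e_matrix_gpa

# Just 5% by volume of nanotubes (modulus of order 1000 GPa) in a glassy
# polymer (modulus of order 3 GPa) - round illustrative numbers:
e_c = rule_of_mixtures(0.05, 1000.0, 3.0)   # roughly 53 GPa
# That would be an enormous enhancement over the bare polymer; aggregation
# into bundles is one big reason real composites fall far short of this bound.
```

The bound also assumes perfect load transfer from matrix to fibre, which is a further, separate problem for nanotubes.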

My friend and colleague from Cambridge, Athene Donald, reminds us that we’ve been here before. In an opinion piece (PDF) in the May issue of Nano Today, she recalls the enthusiasm in the early ’80s for so-called molecular composites. The idea was to take the strong, rigid polymers that were being developed at the time (of which Kevlar is the most famous), and make a composite in which dispersed, individual molecules of the rigid polymer played the role of the fibre reinforcement. Despite the expenditure of large sums of money, notably by the US Air Force, this idea didn’t go anywhere, because the forces that make rod-like molecules bunch together are very strong and very difficult to overcome. It’s exactly the same physics that’s making it so hard to make good nanotube-based composites.

Athene’s piece is about self-assembly. When so many people (including me) are writing about the huge potential of self-assembly as a scalable manufacturing method in nanotechnology, it’s salutary to remember that the tendency to self-assemble can have unwelcome, as well as beneficial, effects. Matter doesn’t always do what you want it to do, particularly at the nanoscale.