Batteries and electric vehicles – disruption may come sooner than you think

How fast can electric cars take over from fossil fuelled vehicles? This partly depends on how quickly the world’s capacity for manufacturing batteries – especially the lithium-ion batteries that are currently the favoured technology for all-electric vehicles – can expand. The current world capacity for manufacturing the kind of batteries that power electric cars is 34 GWh, and, as has been widely publicised, Elon Musk plans to double this number with Tesla’s giant battery factory, currently under construction in Nevada. This joint venture with Japan’s Panasonic will bring another 35 GWh of capacity on stream in the next few years. But, as a fascinating recent article in the FT makes clear (Electric cars: China’s battle for the battery market), Tesla isn’t the only player in this game. On the FT’s figures, by 2020 it’s expected that there will be a total of 174 GWh of battery manufacturing capacity in the world – a more than fivefold increase. Of this, no less than 109 GWh will be in China.

What effect will this massive increase have on the markets? The demand for batteries – largely from electric vehicles – was 11 GWh in 2015. Market penetration of electric vehicles is increasing, but it seems unlikely that demand will keep up with this huge increase in supply (one estimate projects demand in 2020 at 54 GWh). It seems inevitable that prices will fall in response to this coming glut – and batteries will end up being sold at less than the economically sustainable cost. The situation is reminiscent of what happened with silicon solar cells a few years ago – the same massive increase in manufacturing capacity, driven by China, resulting in big price falls – and the bankruptcy of many manufacturers.
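The arithmetic behind this mismatch is simple enough to set out explicitly. Here is a quick sketch using only the figures quoted above (bearing in mind that the 54 GWh demand projection is one estimate among several):

```python
# Figures quoted above, all in GWh; illustrative arithmetic only.
capacity_now = 34      # current world battery manufacturing capacity
capacity_2020 = 174    # FT projection for 2020
china_2020 = 109       # of which in China
demand_2020 = 54       # one projection of 2020 demand

growth = capacity_2020 / capacity_now
print(f"Capacity grows {growth:.1f}x ({growth - 1:.0%} increase)")
print(f"China's share of 2020 capacity: {china_2020 / capacity_2020:.0%}")
print(f"Projected 2020 capacity utilisation: {demand_2020 / capacity_2020:.0%}")
```

On these numbers, less than a third of the projected 2020 capacity would be needed to meet projected demand – which is the sense in which a glut seems unavoidable.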

This recent report (PDF) from the US’s National Renewable Energy Laboratory helpfully breaks down some of the input costs of manufacturing batteries. Costs are lower in China than the USA, but labour costs form a relatively small part of this. The two dominating costs, by far, are the materials and the cost of capital. China has the advantage in materials costs by being closer to the centre of the materials supply chains, which are based largely in Korea, Japan and China – this is where a substantial amount of the value is generated.

If the market price falls below the minimum sustainable price – as I think it must – most of the slack will be taken up by the cost of capital. Effectively, some of the huge capital costs going into these new plants will, one way or another, be written off – Tesla’s shareholders will lose even more money, and China’s opaque financial system will end up absorbing the losses. There will undoubtedly be manufacturing efficiencies to be found, and technical improvements in the materials, often arising from precise control of their nanostructure, will lead to improvements in the cost-effectiveness of the batteries. This will, in turn, accelerate the uptake of electric vehicles – possibly encouraged by strong policy steers in China especially.

Even at relatively low penetration of electric vehicles compared to internal combustion engines, in plausible scenarios (see for example this analysis from Imperial College’s Grantham Centre) they may displace enough oil to have a material impact on total demand, and thus keep a lid on oil prices, perhaps even leading to a peak in oil demand as early as 2020. This will upend many of the assumptions currently being made by the oil companies.

But the dramatic fall in the cost of lithium-ion batteries that this manufacturing overcapacity will bring will have other effects on the direction of technology development. It will create a strong force locking-in the technology of lithium-ion batteries – other types of battery will struggle to establish themselves in competition with this incumbent technology (as we have seen with alternatives to silicon photovoltaics), and technological improvements are most likely to be found in the kinds of material tweaks that can easily fit into the massive materials supply chains that are developing.

To be parochial, the UK government has just trailed funding for a national research centre for battery technology. Given the UK’s relatively small presence in this area, and its distance from the key supply chains for materials for batteries, it is going to need to be very careful to identify those places where the UK is going to be in a position to extract value. Mass manufacture of lithium-ion batteries is probably not going to be one of those places.

Finally, why hasn’t John Goodenough (who has perhaps made the biggest contribution to the science of lithium-ion batteries in their current form) won the Nobel Prize for Chemistry yet?

Do materials even have genomes?

I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big data driven projects like the human genome project.

There’s no question that many big problems could be addressed by new materials.

Can carbon capture and storage work?

Across the world, governments are placing high hopes on carbon capture and storage as the technology that will allow us to go on meeting a large proportion of the world’s growing energy needs from high carbon fossil fuels like coal. The basic technology is straightforward enough; in one variant one burns the coal as normal, and then takes the flue gases through a process to separate the carbon dioxide, which one then pipes off and shuts away in a geological reservoir, for example down an exhausted natural gas field. There are two alternatives to this simplest scheme; one can separate the oxygen from the nitrogen in the air and then burn the fuel in pure oxygen, producing nearly pure carbon dioxide for immediate disposal. Or in a process reminiscent of that used a century ago to make town gas, one can gasify coal to produce a mixture of carbon dioxide and hydrogen, remove the carbon dioxide from the mixture and burn the hydrogen. Although the technology for this all sounds straightforward enough, a rather sceptical article in last week’s Economist, Trouble in Store, points out some difficulties. The embarrassing fact is that, for all the enthusiasm from politicians, no energy utility in the world has yet built a large power plant using carbon capture and storage. The problem is purely one of cost. The extra capital cost of the plant is high, and significant amounts of energy need to be diverted to do the necessary separation processes. This puts a high (and uncertain) price on each tonne of carbon not emitted.

Can technology bring this cost down? This question was considered in a talk last week by Professor Mercedes Maroto-Valer from the University of Nottingham’s Centre for Innovation in Carbon Capture and Storage. The occasion for the talk was a meeting held last Friday to discuss environmentally beneficial applications of nanotechnology; this formed part of the consultation process about the third Grand Challenge to be funded in nanotechnology by the UK’s research council. A good primer on the basics of the process can be found in the IPCC special report on carbon capture. At the heart of any carbon capture method is always a gas separation process. This might be helped by better nanotechnology-enabled membranes, or nanoporous materials (like molecular sieve materials) that can selectively absorb and release carbon dioxide. These would need to be cheap and capable of sustaining many regeneration cycles.

This kind of technology might help by bringing the cost of carbon capture and storage down from its current rather frightening levels. I can’t help feeling, though, that carbon capture and storage will always remain a rather unsatisfactory technology for as long as its costs remain a pure overhead – thus finding something useful to do with the carbon dioxide is a hugely important step. This is another reason why I think the “methanol economy” deserves serious attention. The idea here is to use methanol as an energy carrier, for example as a transport fuel which is compatible with existing fuel distribution infrastructures and the huge installed base of internal combustion engines. A long-term goal would be to remove carbon dioxide from the atmosphere and use solar energy to convert it into methanol for use as a completely carbon-neutral transport fuel and as a feedstock for the petrochemical industry. The major research challenge here is to develop scalable systems for the photocatalytic reduction of carbon dioxide, or alternatively to do this in a biologically based system. Intermediate steps to a methanol economy might use renewably generated electricity to provide the energy for the creation of methanol from water and carbon dioxide from coal-fired power stations, extracting “one more pass” of energy from the carbon before it is released into the atmosphere. Alternatively, process heat from a new generation nuclear power station could be used to generate hydrogen for the synthesis of methanol from carbon dioxide captured from a neighbouring fossil fuel plant.

Nanocosmetics in the news

Uncertainties surrounding the use of nanoparticles in cosmetics made the news in the UK yesterday; this followed a press release from the consumer group Which? – Beauty must face up to nano. This is related to a forthcoming report in their magazine, in which a variety of cosmetic companies were asked about their use of nanotechnologies (I was one of the experts consulted for commentary on the results of these inquiries).

The two issues that concern Which? are some continuing uncertainties about nanoparticle safety and the fact that it hasn’t generally been made clear to consumers that nanoparticles are being used. Their head of policy, Sue Davies, emphasizes that their position isn’t blanket opposition: “We’re not saying the use of nanotechnology in cosmetics is a bad thing, far from it. Many of its applications could lead to exciting and revolutionary developments in a wide range of products, but until all the necessary safety tests are carried out, the simple fact is we just don’t know enough.” Of 67 companies approached for information about their use of nanotechnologies, only 8 replied with useful information, prompting Sue to comment: “It was concerning that so few companies came forward to be involved in our report and we are grateful for those that were responsible enough to do so. The cosmetics industry needs to stop burying its head in the sand and come clean about how it is using nanotechnology.”

On the other hand, the companies that did supply information include many of the biggest names – L’Oreal, Unilever, Nivea, Avon, Boots, Body Shop, Korres and Green People – all of whom use nanoparticulate titanium dioxide (and, in some cases, nanoparticulate zinc oxide). This makes clear just how widespread the use of these materials is (and goes some way to explaining where the estimated 130 tonnes of nanoscale titanium dioxide being consumed annually in the UK is going).

The story is surprisingly widely covered by the media (considering that yesterday was not exactly a slow news day). Many focus on the angle of lack of consumer information, including the BBC, which reports that “consumers cannot tell which products use nanomaterials as many fail to mention it”, and the Guardian, which highlights the poor response rate. The story is also covered in the Daily Telegraph, while the Daily Mail, predictably, takes a less nuanced view. Under the headline The beauty creams with nanoparticles that could poison your body, the Mail explains that “the size of the particles may allow them to permeate protective barriers in the body, such as those surrounding the brain or a developing baby in the womb.”

What are the issues here? There is, if I can put it this way, a cosmetic problem, in that there are some products on the market making claims that seem at best unwise – I’m thinking here of the claimed use of fullerenes as antioxidants in face creams. It may well be that these ingredients are present in such small quantities that there is no possibility of danger, but given the uncertainties surrounding fullerene toxicology, putting products like this on the market doesn’t seem very smart, and is likely to cause reputational damage to the whole industry. There is a lot more data about nanoscale titanium dioxide, and the evidence that these particular nanoparticles aren’t able to penetrate healthy skin looks reasonably convincing. They deliver an unquestionable consumer benefit, in terms of screening out harmful UV rays, and the alternatives – organic small molecule sunscreens – are far from being above suspicion. But, as pointed out by the EU’s Scientific Committee on Consumer Products, there does remain uncertainty about the effect of titanium dioxide nanoparticles on damaged and sun-burned skin. Another issue recently highlighted by Andrew Maynard is the degree to which the action of light on TiO2 nanoparticles causes reactive and potentially damaging free radicals to be generated. This photocatalytic activity can be suppressed by the choice of crystalline structure (the rutile form of titanium dioxide should be used, rather than anatase), the introduction of dopants, and coating the surface of the nanoparticles. The research cited by Maynard makes it clear that not all sunscreens use grades of titanium dioxide that do completely suppress photocatalytic activity.

This poses a problem. Consumers don’t at present have ready access to information as to whether nanoscale titanium dioxide is used at all, let alone whether the nanoparticles in question are in the rutile or anatase form. Here, surely, is a case where if the companies following best practice provided more information, they might avoid their reputation being damaged by less careful operators.

What’s meant by “food nanotechnology”?

A couple of weeks ago I took part in a dialogue meeting in Brussels organised by the CIAA, the Confederation of the Food and Drink Industries of the EU, about nanotechnology in food. The meeting involved representatives from big food companies, from the European Commission and agencies like the European Food Safety Authority, together with consumer groups like BEUC, and the campaigning group Friends of the Earth Europe. The latter group recently released a report on food nanotechnology – Out of the laboratory and on to our plates: Nanotechnology in food and agriculture; according to the press release, this “reveals that despite concerns about the toxicity risks of nanomaterials, consumers are unknowingly ingesting them because regulators are struggling to keep pace with their rapidly expanding use.” The position of the CIAA is essentially that nanotechnology is an interesting technology currently in research rather than having yet made it into products. One can get a good idea of the research agenda of the European food industry from the European Technology Platform Food for Life. As the only academic present, I tried in my contribution to clarify a little the different things people mean by “food nanotechnology”. Here, more or less, is what I said.

What makes the subject of nanotechnology particularly confusing and contentious is the ambiguity of the definition of nanotechnology when applied to food systems. Most people’s definitions are something along the lines of “the purposeful creation of structures with length scales of 100 nm or less to achieve new effects by virtue of those length-scales”. But when one attempts to apply this definition in practice one runs into difficulties, particularly for food. It’s this ambiguity that lies behind the difference of opinion we’ve heard about already today about how widespread the use of nanotechnology in foods is already. On the one hand, Friends of the Earth says they know of 104 nanofood products on the market already (and some analysts suggest the number may be more than 600). On the other hand, the CIAA (the Confederation of Food and Drink Industries of the EU) maintains that, while active research in the area is going on, no actual nanofood products are yet on the market. In fact, both parties are, in their different ways, right; the problem is the ambiguity of definition.

The issue is that food is naturally nano-structured, so that too wide a definition ends up encompassing much of modern food science, and indeed, if you stretch it further, some aspects of traditional food processing. Consider the case of “nano-ice cream”: the FoE report states that “Nestlé and Unilever are reported to be developing a nano-emulsion based ice cream with a lower fat content that retains a fatty texture and flavour”. Without knowing the details of this research, what one can be sure of is that it will involve essentially conventional food processing technology in order to control fat globule structure and size on the nanoscale. If the processing technology is conventional (and the economics of the food industry dictates that it must be), what makes this nanotechnology, if anything does, is the fact that analytical tools are available to observe the nanoscale structural changes that lead to the desirable properties. What makes this nanotechnology, then, is simply knowledge. In the light of the new knowledge that new techniques give us, we could even argue that some traditional processes, which it now turns out involve manipulation of the structure on the nanoscale to achieve some desirable effects, would constitute nanotechnology if it were defined this widely. For example, traditional whey cheeses like ricotta are made by creating the conditions for the whey proteins to aggregate into protein nanoparticles. These subsequently aggregate to form the particulate gels that give the cheese its desirable texture.

It should be clear, then, that there isn’t a single thing one can call “nanotechnology” – there are many different technologies, producing many different kinds of nano-materials. These different types of nanomaterials have quite different risk profiles. Consider cadmium selenide quantum dots, titanium dioxide nanoparticles, sheets of exfoliated clay, fullerenes like C60, casein micelles, phospholipid nanosomes – the risks and uncertainties of each of these examples of nanomaterials are quite different and it’s likely to be very misleading to generalise from any one of these to a wider class of nanomaterials.

To begin to make sense of the different types of nanomaterial that might be present in food, there is one very useful distinction. This is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are covalently bonded, and thus are persistent and generally rather robust, though they may have important surface properties such as catalytic activity, and they may be prone to aggregate. Examples of engineered nanoparticles include titanium dioxide nanoparticles and fullerenes.

In self-assembled nanostructures, though, molecules are held together by weak forces, such as hydrogen bonds and the hydrophobic interaction. The weakness of these forces renders them mutable and transient; examples include soap micelles, protein aggregates (for example the casein micelles formed in milk), liposomes and nanosomes and the microcapsules and nanocapsules made from biopolymers such as starch.

So what kind of food nanotechnology can we expect? Here are some potentially important areas:

• Food science at the nanoscale. This is about using a combination of fairly conventional food processing techniques supported by the use of nanoscale analytical techniques to achieve desirable properties. A major driver here will be the use of sophisticated food structuring to achieve palatable products with low fat contents.
• Encapsulating ingredients and additives. The encapsulation of flavours and aromas at the microscale to protect delicate molecules and enable their triggered or otherwise controlled release is already widespread, and it is possible that decreasing the lengthscale of these systems to the nanoscale might be advantageous in some cases. We are also likely to see a range of “nutraceutical” molecules come into more general use.
• Water dispersible preparations of fat-soluble ingredients. Many food ingredients are fat-soluble; as a way of incorporating these in food and drink without fat, manufacturers have developed stable colloidal dispersions of these materials in water, with particle sizes in the range of hundreds of nanometers. For example, the substance lycopene, which is familiar as the molecule that makes tomatoes red and which is believed to offer substantial health benefits, is marketed in this form by the German company BASF.

What is important in this discussion is clarity – definitions are important. We’ve seen discrepancies between estimates of how widespread food nanotechnology is in the marketplace now, and these discrepancies lead to unnecessary misunderstanding and distrust. Clarity about what we are talking about, and a recognition of the diversity of technologies we are talking about, can help remove this misunderstanding and give us a sound basis for the sort of dialogue we’re participating in today.

Nanoparticles down the drain

With significant amounts of nanomaterials now entering markets, it’s clearly worth worrying about what’s going to happen to these materials after disposal – is there any danger of them entering the environment and causing damage to ecosystems? These are the concerns of the discipline of nano-ecotoxicology; on the evidence of the conference I was at yesterday, on the Environmental effects of nanoparticles, at Birmingham, this is an expanding field.

From the range of talks and posters, there seems to be a heavy focus (at least in Europe) on those few nanomaterials which really are entering the marketplace in quantity – titanium dioxide, of sunscreen fame, and nano-silver, with some work on fullerenes. One talk, by Andrew Johnson, of the UK’s Centre for Ecology and Hydrology at Wallingford, showed nicely what the outline of a comprehensive analysis of the environmental fate of nanoparticles might look like. His estimate is that 130 tonnes of nano-titanium dioxide a year is used in sunscreens in the UK – where does this stuff ultimately go? Down the drain and into the sewers, of course, so it’s worth worrying what happens to it then.

At the sewage plant, solids are separated from the treated water, and the first thing to ask is where the titanium dioxide nanoparticles go. The evidence seems to be that a large majority end up in the sludge. Some 57% of this treated sludge is spread on farmland as fertilizer, while 21% is incinerated and 17% goes to landfill. There’s work to be done, then, in determining what happens to the nanoparticles – do they retain their nanoparticulate identity, or do they aggregate into larger clusters? One needs then to ask whether those that survive are likely to cause damage to soil microorganisms or earthworms. Johnson presented some reassuring evidence about earthworms, but there’s clearly more work to be done here.
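To put rough numbers on these pathways: combining the 130 tonnes/year figure with the sludge disposal fractions above, and assuming (as the evidence suggests) that essentially all of the titanium dioxide partitions into the sludge, gives a back-of-envelope mass balance:

```python
# Back-of-envelope mass balance for UK nano-TiO2 from sunscreens,
# assuming essentially all of it ends up in sewage sludge.
tio2_tonnes_per_year = 130
sludge_fate = {"spread on farmland": 0.57,
               "incinerated": 0.21,
               "landfill": 0.17}

for route, fraction in sludge_fate.items():
    print(f"{route}: ~{tio2_tonnes_per_year * fraction:.0f} tonnes/year")
```

On these assumptions, something like 74 tonnes a year of nano-titanium dioxide would be going onto farmland – which is why the fate of the particles in soil matters so much.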

Making a series of heroic assumptions, Johnson made some estimates of how many nanoparticles might end up in the river. Taking a worst case scenario, with a drought and heatwave in the southeast of England (they do happen, I’m old enough to remember), he came up with an estimate of 8 micrograms/litre in the Thames, which is still more than an order of magnitude less than the concentration that has been shown to start to affect, for example, rainbow trout. This is reassuring, but, as one questioner pointed out, one still might worry about the nanoparticles accumulating in sediments to the detriment of filter feeders.

Asbestos-like toxicity of some carbon nanotubes

It has become commonplace amongst critics of nanotechnology to compare carbon nanotubes to asbestos, on the basis that they are both biopersistent, inorganic fibres with a high aspect ratio. Asbestos is linked to a number of diseases, most notably the incurable cancer mesothelioma, of which there are currently 2000 new cases a year in the UK. A paper published in Nature Nanotechnology today, from Ken Donaldson’s group at the University of Edinburgh, provides the best evidence to date that some carbon nanotubes – specifically, multi-wall nanotubes longer than 20 µm or so – do lead to the same pathogenic effects in the mesothelium as asbestos fibres.

The basis of toxicity of asbestos and other fibrous materials is now reasonably well understood; their toxicity is based on the physical nature of the materials, rather than their chemical composition. In particular, fibres are expected to be toxic if they are long – longer than about 20 µm – and rigid. The mechanism of this pathogenicity is believed to be related to frustrated phagocytosis. Phagocytes are the cells whose job it is to engulf and destroy intruders – when they detect a foreign body like a fibre, they attempt to engulf it, but are unable to complete this process if the fibre is too long and rigid. Instead they release a burst of toxic products, which have no effect on the fibre but instead cause damage to the surrounding tissues. There is every reason to expect this mechanism to be active for nanotubes which are sufficiently long and rigid.

Donaldson’s group tested the hypothesis that long carbon nanotubes would have a similar effect to asbestos by injecting nanotubes into the peritoneal cavity of mice, exposing the mesothelium directly to nanotubes and allowing the response to be monitored directly. This is a proven assay for the initial toxic effects of asbestos that subsequently lead to the cancer mesothelioma.

Four multiwall nanotube samples were studied. Two of these samples had long fibres – one was a commercial sample, from Mitsui, and another was produced in a UK academic laboratory. The other two samples had short, tangled fibres, and were commercial materials from NanoLab Inc, USA. The nanotubes were compared to two controls of long and short fibre amosite asbestos and one of non-fibrous, nanoparticulate carbon black. The two nanotube samples containing a significant fraction of long (>20 µm) nanotubes, together with the long-fibre amosite asbestos, produced a characteristic pathological response of inflammation, the production of foreign body giant cells, and the development of granulomas, a characteristic lesion. The nanotubes with short fibres, like the short fibre asbestos sample and the carbon black, produced little or no pathogenic effect. A number of other controls provide good evidence that it is indeed the physical form of the nanotubes rather than any contaminants that leads to the pathogenic effect.

The key finding, then, is that not all carbon nanotubes are equal when it comes to their toxicity. Long nanotubes produce an asbestos-like response, while short nanotubes, and particulate graphene-like materials don’t produce this response. The experiments don’t directly demonstrate the development of the cancer mesothelioma, but it would be reasonable to suppose this would be the eventual consequence of the pathogenic changes observed.

The experiments do seem to rule out a role for other possible contributing factors (presence of metallic catalyst residues), but they do not address whether other mechanisms of toxicity might be important for short nanotubes.

Most importantly, the experiments do not say anything about issues of dose and exposure. In the experiments, the nanotubes were directly injected into the peritoneal cavity; to establish whether environmental or workplace exposure to nanotubes presents a danger, one needs to know how likely it is that realistic exposures to inhaled nanotubes would lead to enough nanotubes crossing from the lungs through to the mesothelium to produce toxic effects. This is the most urgent question now waiting for further research.

It isn’t clear what proportion of the carbon nanotubes now being produced on industrial, or at least pilot plant, scale, would have the characteristics – particularly in their length – that would lead to the risk of these toxic effects. However, those nanotubes that are already in the market-place are mostly in the form of advanced composites, in which the nanotubes are tightly bound in a resin matrix, so it seems unlikely that these will pose an immediate danger. We need, with some urgency, research into what might happen to the nanotubes in such products over their whole lifecycle, including after disposal.

How can nanotechnology help solve the world’s water problems?

The lack of availability of clean water to many of the world’s population currently leads to suffering and premature death for millions of people, and as population pressures increase, climate change starts to bite, and food supplies become tighter (perhaps exacerbated by an ill-considered move to biofuels) these problems will only intensify. It’s possible that nanotechnology may be able to contribute to solving these problems (see this earlier post, for example). A couple of weeks ago, Nature magazine ran a special issue on water, which included a very helpful review article: Science and technology for water purification in the coming decades. This article (which seems to be available without subscription) is all the more helpful for not focusing specifically on nanotechnology, instead making it clear where nanotechnology could fit into other existing technologies to create affordable and workable solutions.

One sometimes hears the criticism that there’s no point worrying about the promise of new nanotechnological solutions, when workable solutions are already known but aren’t being implemented, for political or economic reasons. That’s an argument that’s not without force, but the authors do begin to address it, by outlining what’s wrong with existing technical solutions: “These treatment methods are often chemically, energetically and operationally intensive, focused on large systems, and thus require considerable infusion of capital, engineering expertise and infrastructure”. Thus we should be looking for decentralised solutions, that can be easily, reliably and cheaply installed using local expertise and preferably without the need for large scale industrial infrastructure.

To start with the problem of the sterilisation of water to kill pathogens, traditional methods start with chlorine. This isn’t ideal, as some pathogens are remarkably tolerant of it, and it can lead to toxic by-products. Ultra-violet sterilisation, on the other hand, offers a lot of promise – it’s good for bacteria, though less effective for viruses. But in combination with photocatalytic surfaces of titanium dioxide nanoparticles it could be very effective. Here what is required is either much cheaper sources of ultraviolet light (which could come from new nanostructured semiconductor light emitting diodes), or new types of nanoparticles with surfaces excited by longer wavelength light, including sunlight.

Another problem is the removal of contamination by toxic chemicals, which can arise either naturally or through pollution. Problem contaminants include heavy metals, arsenic, pesticide residues, and endocrine disrupters; the difficulty is that these can have dangerous effects even at rather low concentrations, which can’t be detected without expensive laboratory-based analysis equipment. Here methods for robust, low cost chemical sensing would be very useful – perhaps a combination of molecular recognition elements integrated in nanofluidic devices could do the job.

The reuse of waste water poses hard problems because of the high content of organic matter that must be removed, in addition to other contaminants. Membrane bioreactors combine the sorts of microbes exploited in the activated sludge processes of conventional sewage treatment with ultrafiltration through a membrane, to get faster throughputs of waste water. The tighter the pores of such a membrane, the more effective it is at removing suspended material – but the problem is that tight membranes quickly get blocked up. One solution is to line the micro- and nano-pores of the membrane with a single layer of hairy molecules; one of the paper’s co-authors, MIT’s Anne Mayes, developed a particularly elegant scheme for doing this, exploiting the self-assembly of comb-shaped copolymers.

Of course, most of the water in the world is salty (97.5%, to be precise), so the ultimate solution to water shortages is desalination. Desalination costs energy – necessarily so, as the second law of thermodynamics puts a lower limit on the cost of separating pure water from the higher-entropy solution state. This theoretical limit is about 0.7 kWh per cubic metre, and to date the most efficient practical process uses a not-at-all-unreasonable 4 kWh per cubic metre. Achieving these figures, and pushing them down further, is a matter of membrane engineering: achieving precisely nanostructured pores that resist fouling yet are mechanically and chemically robust.
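The 0.7 kWh figure can be recovered, roughly, from the osmotic pressure of seawater: the minimum work to extract a cubic metre of pure water (at vanishing recovery) is just the osmotic pressure itself. Here is a back-of-the-envelope sketch treating seawater as an ideal ~35 g/L NaCl solution – the van ’t Hoff ideal-solution approximation is an assumption of this illustration, and slightly overshoots the real seawater value:

```python
# Back-of-the-envelope estimate of the thermodynamic minimum work of
# desalination, from the osmotic pressure of seawater.
# Van 't Hoff ideal-solution approximation: Pi = i * c * R * T.
R = 8.314                    # gas constant, J/(mol K)
T = 298.0                    # temperature, K
c = 35.0 / 58.44 * 1000.0    # seawater as ~35 g/L NaCl, in mol/m^3
i = 2                        # NaCl dissociates into two ions

Pi = i * c * R * T           # osmotic pressure, Pa (~3 MPa)
w_min = Pi / 3.6e6           # J/m^3 converted to kWh per cubic metre

print(f"osmotic pressure ~ {Pi / 1e6:.1f} MPa")
print(f"minimum work     ~ {w_min:.2f} kWh per cubic metre")
```

The ideal-solution estimate comes out a little above 0.8 kWh per cubic metre; real seawater’s ion activity brings the textbook figure down to about 0.7, and the practical 4 kWh per cubic metre is then five to six times the thermodynamic floor.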

Carbon nanotubes as engineering fibres

Carbon nanotubes have become iconic symbols of nanotechnology, promising dramatic new breakthroughs in molecular electronics and holding out the possibility of transformational applications like the space elevator. Another perspective on these materials places them, not as a transformational new technology, but as the continuation of incremental progress in the field of high performance engineering fibres. This perhaps is a less dramatic way of positioning this emerging technology, but it may be more likely to bring economic returns in the short term and thus keep the field moving. A perspective article in the current issue of Science magazine – Making strong fibres (subscription required), by Han Gi Chae and Satish Kumar from Georgia Tech, nicely sets current achievements in developing carbon nanotube based fibres in the context of presently available high strength, high stiffness fibres such as Kevlar, Dyneema, and carbon fibres.

The basic idea underlying all these fibres is the same, and is easy to understand. Carbon-carbon covalent bonds are very strong, so if you can arrange for all the long-chain molecules in a fibre to be aligned along its axis, then you end up pulling directly on those very strong carbon-carbon bonds. Kevlar is spun from a liquid crystal precursor, in which the long, rather rigid molecules spontaneously line up like pencils in a case, while Dyneema is made from very long polyethylene molecules that are physically pulled out straight during the spinning process. Carbon fibres are typically made by forming a highly aligned fibre from a polymer like polyacrylonitrile, which is then charred to leave graphitic carbon in the form of bundles of sheets, like a rolled-up newspaper. If you could make a perfect bundle of carbon nanotubes, all aligned along the direction of the fibre, it would be almost identical to a carbon fibre chemically, but in a state of much greater structural perfection.

This idea of structural perfection is crucial. The stiffness of a material pretty much directly reflects the strength of the covalent bonds that make it up, but strength is a lot more complicated. In fact, what one needs to explain about most materials is not why they are as strong as they are, but why they are so weak. It is all the defects in materials – and the weak spots they lead to – that mean they rarely get even close to their ideal theoretical values. Carbon nanotubes are no different, so the projections of ultra-high strength that underlie ideas like the space elevator are still a long way off when it comes to practical fibres.
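The defect argument can be made concrete with Griffith’s classical fracture criterion, which relates the failure stress of a brittle solid to the size of its largest flaw. The modulus and surface energy below are round illustrative numbers of the right order for graphitic carbon, not measured values for any particular fibre:

```python
from math import pi, sqrt

# Griffith's criterion for brittle fracture: a crack-like flaw of
# half-length a fails at sigma_f = sqrt(2 * E * gamma / (pi * a)).
E = 1.0e12      # Young's modulus, Pa (~1 TPa graphitic value; illustrative)
gamma = 5.0     # fracture surface energy, J/m^2 (illustrative)

failure_stress = {}
for a_nm in (1, 10, 100, 1000):
    a = a_nm * 1e-9                                   # flaw half-length, m
    failure_stress[a_nm] = sqrt(2 * E * gamma / (pi * a))
    print(f"flaw of {a_nm:>4} nm -> failure stress ~ "
          f"{failure_stress[a_nm] / 1e9:.1f} GPa")
```

Because the failure stress falls as one over the square root of the flaw size, even flaws of a few tens of nanometres pull the strength an order of magnitude below the ideal bond-limited value – which is why structural perfection matters so much for nanotube fibres.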

But maybe we shouldn’t be disappointed by the failure of nanotubes (so far) to live up to these very high expectations; instead, we can compare them to existing strong fibres. This has been the approach of Cambridge’s Alan Windle, whose group is probably as far ahead as anyone in developing a practical process for making useful nanotube fibres. Their experimental rig (see this recent BBC news report for a nice description, with videos) draws a fibre out from a chemical vapour deposition furnace, essentially pulling out smoke. The resulting nanotubes are far from being the perfect tubes of the typical computer visualisation, typically looking more like dog-bones than perfect cylinders (see picture below). Their strength is a long way below the ideal values – but it is still 2.5 times greater than that of the strongest currently available fibres. They are very tough as well, suggesting that early applications might be in things like bulletproof vests and flak jackets, for which, sadly, there seems to be growing demand. Another interesting early application of nanotubes highlighted by the Science article is as a processing aid for conventional carbon fibres, where it seems that adding only 1% of carbon nanotubes to the precursor fibre can increase the strength of the resulting carbon fibre by 64%.

Nanotubes from the Windle group
“Dogbone” carbon nanotubes produced by drawing from a CVD furnace. Transmission electron micrograph by Marcelo Motta, from the Cambridge research group of Alan Windle. First published in M. Motta et al. “High Performance Fibres from ‘Dog-Bone’ Carbon Nanotubes”. Advanced Materials, 19, 3721-3726, 2007.

Mobility at the surface of polymer glasses

Hard, transparent plastics like plexiglass, polycarbonate and polystyrene resemble glasses, and technically that’s what they are – a state of matter that has a liquid-like lack of regular order at the molecular scale, but which still displays the rigidity and lack of ability to flow that we expect from a solid. In the glassy state the polymer molecules are locked into position, unable to slide past one another. If we heat these materials up, they have a relatively sharp transition into a (rather sticky and viscous) liquid state; for both plexiglass and polystyrene this happens around 100 °C, as you can test for yourself by putting a plastic ruler or a (polystyrene) yoghourt pot or plastic cup into a hot oven. But, things are different at the surface, as shown by a paper in this week’s Science (abstract, subscription needed for full paper; see also commentary by John Dutcher and Mark Ediger). The paper, by grad student Zahra Fakhraai and Jamie Forrest, from the University of Waterloo in Canada, demonstrates that nanoscale indentations in the surface of a glassy polymer smooth themselves out at a rate that shows that the molecules near the surface can move around much more easily than those in the bulk.

This is a question that I’ve been interested in for a long time – in 1994 I was the co-author (with Rachel Cory and Joe Keddie) of a paper that suggested that this was the case – Size dependent depression of the glass transition temperature in polymer films (Europhysics Letters 27, 59). It was actually a rather practical question that prompted me to think along these lines; at the time I was a relatively new lecturer at Cambridge University, and I had a certain amount of support from the chemical company ICI. One of their scientists, Peter Mills, was talking to me about problems they had making films of PET (whose trade names include Melinex and Mylar) – this is a glassy polymer at room temperature, but sometimes the sheet would stick to itself when it was rolled up after manufacturing. This is very hard to understand if one assumes that the molecules in a glassy polymer aren’t free to move, since to get significant adhesion between polymers one generally needs the string-like molecules at the surface to mix together enough to get tangled up. Could it be that the chains at the surface had more freedom to move?

We didn’t know how to measure chain mobility directly near a surface, but I did think we could measure the glass transition temperature of a very thin film of polymer. When you heat up a polymer glass, it expands, and at the transition point where it turns into a liquid, there’s a jump in the value of the expansion coefficient. So if you heated up a very thin film and measured its thickness, you’d see the transition as a change in slope of a plot of thickness against temperature. We had available a very sensitive thickness-measuring technique called ellipsometry, so I thought it was worth trying the measurement – if the chains were freer to move at the surface than in the bulk, we’d expect the transition temperature to decrease in very thin films, where the surface has a disproportionate effect.
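The analysis implied here – locating the glass transition as a kink in a thickness-versus-temperature plot – can be sketched as a two-segment straight-line fit. The synthetic data, noise level and assumed 100 °C transition below are purely illustrative:

```python
import numpy as np

# Synthetic "ellipsometry" data: film thickness vs temperature, with a kink
# at Tg where the expansion coefficient jumps.  Tg = 100 C is assumed here.
T = np.linspace(40.0, 160.0, 61)
h = 50.0 + 0.002 * (T - 40.0)                             # glassy expansion
h = h + np.where(T > 100.0, 0.008 * (T - 100.0), 0.0)     # extra liquid expansion
h = h + np.random.default_rng(0).normal(0.0, 0.005, T.size)  # measurement noise

def sse(x, y):
    """Sum of squared residuals of a straight-line least-squares fit."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

# Try every candidate kink temperature; the split that two straight lines
# fit best is the estimated glass transition.
candidates = T[5:-5]   # keep enough points on each side for a fit
best_Tg = min(candidates,
              key=lambda Tk: sse(T[T <= Tk], h[T <= Tk]) + sse(T[T > Tk], h[T > Tk]))
print(f"estimated Tg ~ {best_Tg:.0f} C")
```

With thickness resolved to a fraction of an ångström relative to a film tens of nanometres thick, the change of slope stands well clear of the noise – which is why the measurement was feasible at all.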

I proposed the idea as a final year project for the physics undergraduates, and a student called Rachel Cory chose it. Rachel was a very able experimentalist, and when she’d got the hang of the equipment she was able to make the successive thickness measurements with a resolution of a fraction of an Ångstrom, as would be needed to see the effect. But early in the new year of 1993 she came to see me to say that the leukemia from which she had been in remission had returned, that no further treatment was possible, but that she was determined to carry on with her studies. She continued to come into the lab to do experiments, obviously getting much sicker and weaker every day, but nonetheless it was a terrible shock when her mother came into the lab on the last day of term to say that Rachel’s fight was over, but that she’d been anxious for me to see the results of her experiments.

Looking through the lab book Rachel’s mother brought in, it was clear that she’d succeeded in making five or six good experimental runs, with films substantially thinner than 100 nm showing clear transitions, and that for the very thinnest films the transition temperatures did indeed seem to be significantly reduced. Joe Keddie, a very gifted young American scientist then working with me as a postdoc, (he’s now a Reader at the University of Surrey) had been helping Rachel with the measurements and followed up these early results with a large-scale set of experiments that showed the effect, to my mind, beyond doubt.

Despite our view that the results were unequivocal, they attracted quite a lot of controversy. A US group made measurements that seemed to contradict ours, and in the absence of any theoretical explanation of them there were many doubters. But by the year 2000, many other groups had repeated our work, and the weight of evidence was overwhelming that the influence of free surfaces led to a decrease in the temperature at which the material changed from being a glass to being a liquid in films less than 10 nm or so in thickness.
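The size dependence can be summarised by an empirical fit of the form Tg(h) = Tg(bulk)·(1 − (A/h)^δ). The parameter values below are of the order reported for polystyrene, but should be treated as illustrative rather than definitive:

```python
# Empirical fit for the glass transition of thin polymer films:
# Tg(h) = Tg_bulk * (1 - (A / h)**delta).
# Parameter values are of the order reported for polystyrene (illustrative).
Tg_bulk = 373.0   # bulk Tg of polystyrene, K
A = 3.2           # characteristic length scale, nm
delta = 1.8       # empirical exponent

tg_shift = {}
for h in (100.0, 50.0, 20.0, 10.0):          # film thickness, nm
    Tg = Tg_bulk * (1.0 - (A / h) ** delta)
    tg_shift[h] = Tg_bulk - Tg
    print(f"h = {h:>5.0f} nm -> Tg depressed by ~ {tg_shift[h]:.1f} K")
```

The form captures the qualitative picture in the data: the depression is negligible for films much thicker than about 50 nm, but grows rapidly – to tens of degrees – as the thickness drops towards 10 nm.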

But this still wasn’t direct evidence that the chains near the surface were freer to move than they were in the bulk, and this direct evidence proved difficult to obtain. In the last few years a number of groups have produced stronger and stronger evidence that this is the case; Jamie and Zahra’s paper, I think, nails the final uncertainties, proving that polymer chains in the top few nanometres of a polymer glass really are free to move. Among the consequences is that we can’t necessarily predict the behaviour of polymer nanostructures from their bulk properties; this will become more relevant as people try to make smaller and smaller features in polymer resists, for example. What we still don’t have is a complete theoretical understanding of why this should be the case.