Understanding structure formation in thin polymer films

This month’s issue of Nature Materials has a paper from my group which provides new insight into the way structure can emerge in ultrathin polymer films by self-assembly. It’s easy to make a very uniform polymer film with a thickness somewhere between 5 and 500 nanometers; in a process called “spin-casting” you just flood a smooth, flat substrate with a solution of the polymer in an organic solvent like toluene, and then spin the substrate round at a couple of thousand RPM. The excess solution flies off, leaving a thin layer from which the solvent quickly evaporates. This process is used all the time in laboratories and in industry; in the semiconductor industry it’s the way photoresist layers are laid down. If you use not a single polymer but a mixture of two, then as the solvent is removed the two polymers will phase separate, like oil and water. What’s interesting is that sometimes they break up into little blobs in the plane of the film, but at other times they split into two distinct layers, each of which might be only a few tens of nanometers thick. The latter situation, sometimes called “self-stratification”, can be very useful. It’s an advantage for solar cells made from semiconducting polymers to have two layers like this, and Henning Sirringhaus, from Cambridge (whose company, Plastic Logic, is actively commercialising polymer electronics), has shown that you can make a polymer field effect transistor in which the gate dielectric layer spontaneously self-stratifies during spin-coating.
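To give a feel for how the thickness is controlled, here is a minimal Python sketch of the kind of empirical scaling commonly used for spin-casting, in which the final thickness falls off roughly as the square root of the spin speed and increases with the concentration of the solution. The numbers below are illustrative only; they are not fitted to any particular polymer and solvent.

```python
# Toy numbers only: the final thickness of a spin-cast film is often
# described by an empirical power law of the form
#     h  ~  k * c**beta * omega**(-alpha),
# with alpha close to 0.5. The constants below are invented purely to give
# thicknesses in the 5-500 nm range; they are not fitted to any real system.

def spin_cast_thickness_nm(conc_mg_per_ml, rpm, k=250.0, alpha=0.5, beta=1.0):
    """Rough empirical estimate of the dry film thickness in nanometres."""
    omega = rpm / 1000.0          # spin speed in units of 1000 RPM
    return k * conc_mg_per_ml**beta * omega**(-alpha)

if __name__ == "__main__":
    for rpm in (1000, 2000, 4000):
        for conc in (0.5, 1.0, 2.0):          # polymer concentration, mg/ml
            h = spin_cast_thickness_nm(conc, rpm)
            print(f"{conc:3.1f} mg/ml at {rpm:4d} RPM  ->  ~{h:5.0f} nm")
```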

The paper (which can be downloaded as a PDF here) describes the experiments that Sasha Heriot, who is a postdoc in my group, did to try to disentangle what goes on in this complex situation. Our apparatus (which was built by my former graduate student, James Sharp, now a lecturer at Nottingham University) consists of a spin-coating machine in which a laser shines on the film as it spins; we detect both the light that is directly reflected and the pattern of light that is scattered out of the direct beam. The reflected light tells us how thick the film is at any point during the five seconds that the whole process takes, while the scattered light tells us about the lateral structure of the film. What we find is that after the spin-coating process starts, the film first stratifies vertically. As the solvent is removed, the interface separating the two layers becomes wavy, and this wave grows until the two layers break up, leaving the pattern of droplets that’s seen in the final film. We don’t yet know exactly why the interface between the two self-stratified layers becomes unstable, but we suspect it’s connected to how volatile the solvent is. Once we understand this mechanism properly, we should be able to design the spin-coating conditions to get the final structure we want.
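The way the reflected laser beam reports the film thickness is just thin-film interference: light reflected from the top of the film interferes with light reflected from the film/substrate interface, so as the film thins during spinning the reflected intensity oscillates, and counting the fringes gives the thickness. Here is a minimal sketch of that physics in Python; the refractive indices are nominal values (a typical polymer and a silicon substrate at 633 nm), not parameters taken from our paper.

```python
# Minimal thin-film interference sketch: the reflectance of a film on a
# substrate oscillates as the film thins, passing through one full fringe
# every lambda/(2*n1) ~ 211 nm of thickness for the values below.
import numpy as np

def reflectance(d_nm, wavelength_nm=633.0, n0=1.0, n1=1.5, n2=3.9):
    """Normal-incidence reflectance of a transparent film on a substrate."""
    r01 = (n0 - n1) / (n0 + n1)          # air/film Fresnel coefficient
    r12 = (n1 - n2) / (n1 + n2)          # film/substrate Fresnel coefficient
    phase = np.exp(-2j * (2 * np.pi * n1 * d_nm / wavelength_nm))
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

# Film thinning from 500 nm to zero during spin-coating:
for d in np.linspace(500.0, 0.0, 6):
    print(f"d = {d:5.0f} nm  ->  R = {reflectance(d):.3f}")
```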

The relevance of all this is that solvent-based coating processes of this kind are cheap and scalable to very large areas. The aim is to control the nanostructure of thin films of functional materials like semiconducting polymers simply by adjusting the processing conditions. We want the system to make itself as far as possible, rather than having to carry out lots of separate fabrication steps. If we can do this reliably, it will bring us closer to commercial processes for making, for example, very cheap solar cells using simple printing technology, or simple combinations of sensors and logic circuits by ink-jet printing.

Soft Machines: The Foresight Verdict

I was pleasantly surprised, on picking up a copy of the Foresight Nanotech Institute’s quarterly newsletter, Foresight Nanotech Update (not yet on the web, but it will presumably appear here in due course), to see a two-page, detailed review of my book Soft Machines. It’s actually a pretty positive review – “Soft Machines is an informative and readable exploration of the nanoworld” is a line I can imagine a publicist being pleased to fillet. Perhaps not surprisingly, the reviewer doesn’t completely accept my arguments about the feasibility or otherwise of the Drexlerian program, saying “the arguments that Jones produces seem largely sound as far as they go, but not thorough enough to be conclusive”. Actually that’s a conclusion that I’m very comfortable with. We’ll see what things look like over the next couple of years, after some more real debate and some more supporting science.

At the Foresight Vision Weekend

I’m in California, where the Foresight Institute’s Vision Weekend has just finished. I gave a talk, outlining my thoughts about where the soft approach to nanotechnology might lead in the longer term. This was received well enough, though I’m sure without convincing the whole audience. This weekend is supposed to be off the record, so I’ll not give a blow-by-blow account. But one curious thing, which is in principle already a matter of public record, is worth mentioning. If you had looked at the program on the web last week you would have seen that a debate between me and Ralph Merkle about the viability of soft vs hard approaches to radical nanotechnology was scheduled. This debate disappeared from the final version of the program and never happened, for reasons that weren’t explained to me. Maybe this was just a result of the difficulty of trying to fit in a lot of speakers and events. Nonetheless it seems a pity that a community that often complains about the lack of detailed technical discussion of the proposals in Nanosystems didn’t get the chance to hear just such a debate.

Nanotechnology gets complex

The theme of my book Soft Machines is that the nanomachines of biology operate under quite different design principles from those we are familiar with at the macroscale. These design principles exploit the different physics of the nanoworld, rather than trying to engineer around it. The combination of Brownian motion – the relentless shaking and jostling that’s ubiquitous in the nanoworld, at least at normal temperatures – and strong surface forces is exploited in the principle of self-assembly. Brownian motion and the floppiness of small-scale structures are exploited in the principle of molecular shape change, which is how our muscles work. We are well on our way to exploiting both these principles in synthetic nanotechnology. But there’s another design principle that’s extensively used in Nature but that nanotechnologists have not yet exploited at all. This is the idea of chemical computing – processing information by using individual molecules as logic gates, and transmitting messages through space by the diffusion of messenger molecules, driven by Brownian motion. These are the mechanisms that allow bacteria to swim towards food and away from toxins, but they also underlie the intricate way in which cells in higher organisms like mammals interact and differentiate.

One argument that holders of a mechanical conception of radical nanotechnology sometimes use against trying to copy these control mechanisms is that they are simply too complicated to deal with. But there’s an important distinction to make here. These control systems and signalling networks aren’t just complicated – they’re complex. Recent work on the statistical mechanics of this sort of multiply connected, evolving network is beginning to yield fascinating insights (see, for example, Albert-László Barabási’s website). It seems likely that these biological signalling and control networks share some generic features with other complex networks, such as the internet and even, perhaps, free market economies. Rather than dismissing them as the hopelessly complicated result of billions of years of aimless evolutionary accretion, we should perhaps think of these networks as being optimally designed for robustness in the noisy and unpredictable nanoscale environment.
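The “robust yet fragile” character of such networks is easy to demonstrate numerically. Here is a small Python sketch (my own toy example using the networkx library, not anything from Barabási’s own work): a scale-free network of the kind his group studies keeps its connectivity remarkably well when nodes fail at random, but falls apart when the most highly connected hubs are deliberately removed.

```python
# Toy illustration of network robustness: compare random node failure with
# targeted removal of hubs in a scale-free (Barabasi-Albert) network.
import random
import networkx as nx

def largest_component_fraction(g):
    """Fraction of the surviving nodes that sit in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def attack(g, fraction, targeted):
    """Remove a given fraction of nodes, either at random or hubs-first."""
    g = g.copy()
    n_remove = int(fraction * g.number_of_nodes())
    if targeted:
        by_degree = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
        victims = [node for node, _ in by_degree[:n_remove]]
    else:
        victims = random.sample(list(g.nodes), n_remove)
    g.remove_nodes_from(victims)
    return largest_component_fraction(g)

random.seed(0)
g = nx.barabasi_albert_graph(2000, 2, seed=0)   # stand-in for a "signalling" network
print(f"random failure of 20% of nodes:   {attack(g, 0.20, targeted=False):.2f} of survivors still connected")
print(f"targeted removal of top 20% hubs: {attack(g, 0.20, targeted=True):.2f} of survivors still connected")
```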

It seems to me that if we are going to have nanoscale systems of any kind of complexity, we are going to have to embrace these principles. Maintaining rigid, central control of large-scale systems always seems superficially attractive, but such control systems are often brittle and fail to adapt to unpredictability, change and noise. The ubiquity of noise in the nanoscale world offers a strong argument for using complex, evolved control systems. But we still lack some essential tools for doing this. In particular, biological signalling relies on allostery. This principle underlies the operation of the basic logic gates in chemical computing; the idea is that when a messenger molecule binds to a protein, it subtly changes the shape of the protein and affects its ability to carry out a chemical operation. Currently, synthetic analogues of this crucial function are very thin on the ground (see this abstract for something that seems to be going the right way). It would be good to see more effort put into this difficult, but exciting, direction.
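To make the logic-gate idea concrete, here is a deliberately simple toy model in Python. It is not a description of any real or proposed synthetic device; it just shows how an allosteric protein that is only catalytically active when two different messenger molecules are both bound behaves, functionally, as a chemical AND gate, with the binding described by standard Hill functions and invented binding constants.

```python
# Toy model of an allosteric AND gate: the protein is only active when both
# messenger molecules A and B are bound. Binding constants are made up.

def hill(conc, kd, n=2.0):
    """Fractional occupancy of a binding site with dissociation constant kd."""
    return conc**n / (kd**n + conc**n)

def catalytic_activity(a, b, kd_a=1.0, kd_b=1.0):
    """Activity ~ probability that both effectors are bound (an AND gate)."""
    return hill(a, kd_a) * hill(b, kd_b)

for a in (0.1, 10.0):          # "low" and "high" concentrations of messenger A
    for b in (0.1, 10.0):      # "low" and "high" concentrations of messenger B
        print(f"[A]={a:5.1f}  [B]={b:5.1f}  ->  activity = {catalytic_activity(a, b):.2f}")
```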

Blog meets podcast

Soft Machines got a namecheck on the Berkeley Groks science radio show this week (you can download the MP3 here).

I don’t know whether to be more impressed that Soft Machines is so assiduously read by student broadcasters in search of material, or that one of the postdocs in my department is so addicted to obscure science podcasts that he noticed it and told me about it (thanks, Ashley). I’d like to say that they featured an in-depth discussion of some of the most serious issues this blog discusses, but instead, I’m afraid, it was the postscript to this item that caught their eye.

Other good science podcasts that Ashley recommends include the Science show from the Australian Broadcasting Corporation, here, and Nature Magazine’s podcast, here.

Framing nanotech: products, process, or program?

If you are a regulator or policy maker considering the possible impacts of nanotechnology, should you consider it solely in terms of the products it produces, should you think of it as a distinct process for making things, or should you ask about the more general socio-economic program of which it is part? This question is suggested by Sheila Jasanoff’s excellent new book, Designs on Nature. This book, recommended on Soft Machines the other day by James Wilsdon (see also James’s review of the book for the Financial Times), is a highly perceptive comparative study of the different ways in which the politics of biotechnology and genetic modification played out in the USA, the UK and Germany. Jasanoff traces one origin of the differences between the three countries’ experiences to the different ways in which the technology was framed. In the USA, the emphasis was on asking whether the products of biotechnology were safe. In the UK, the issue was framed more broadly; the question was whether the process of genetic modification was in itself a cause for concern. In Germany, meanwhile, discussion of biotechnology could never escape the shadow of the complicity of German biomedical science with the National Socialist program, and the horrors that emerged from a state dedicated to the proposition that all men are not created equal. In this context, it was tempting to see biotechnology as part of a program in which science and a controlling, ordering state came together to subjugate both citizens and nature.

Since policy-makers, academics and activists are all looking at the unfolding debate around nanotechnology through the lens of the earlier GM debates, it’s worth asking how far this analysis can be applied to nanotechnology. The product-centred view is clearly in the ascendancy in the USA, where the debate is centred almost exclusively on the issue of the possible toxicity of nanoparticles. But the process-centred view is not really managing to establish itself anywhere. The problem is, of course, that nanotechnology does not present a distinct process in the way that genetic modification does. This is despite the early rhetoric of the National Nanotechnology Initiative – the slogan “building the world atom-by-atom” does suggest that nanotechnology offers a fundamentally different way of doing things, but the reality, of course, is that today’s nanotechnology products are made by engineering processes which are only incremental developments of ones that have gone before. It remains to be seen whether a radically different nanotechnology will emerge which will make this framing more relevant.

Should we, then, worry about nanotechnology as part of a broader, socio-economic program? This is clearly the central position of anti-nanotechnology campaigning groups like the ETC group. They may find the nano-toxicity issue to be a convenient stick to beat governments and nano-industry with, but their main argument is not with the technology in itself, but with the broader issues of globalization and liberal economics. Of course, many of those most strongly in favour of nanotechnology have their own program, too – the idea of transhumanism, with its high profile adherents such as Ray Kurzweil. It’s possible that opposition to nanotechnology will increasingly come to be framed in terms of opposition to the transhumanist program, along the lines of Bill McKibben’s book Enough.

Nanotechnology in the New Straits Times

My friend, colleague and collaborator from across the road in the chemistry department here at Sheffield, Tony Ryan, went to Malaysia and Singapore the week before last, and one result was this article in the New Straits Times, in which he gives a succinct summary of the current state of play in nanotechnology. He was rewarded by a mildly cross email this morning from K. Eric Drexler. Actually, I think Tony’s interview is pretty fair to Drexler – he gives him a big place in the history of the subject, and on the vexed question of nanobots, he says “This popular misconception has been popularised by people who misunderstood the fantastic book Engines of Creation by K. Eric Drexler.”

There was also a useful corrective to those of us worried that nanotechnology is getting overexposed. The writer describes how the article originated from a “short, balding man in the public relations industry” who said of nanotechnology that it’s “the latest buzzword in the field of science and is making waves globally”. On the contrary, our journalist says… “Buzzword? It most certainly is not. My editor and I looked at each other and agreed that it is more a word that one hears ONLY ever so occasionally.”

The Stalinists of public engagement…

The recent pamphlet from Demos on the need for public engagement about nanotechnology and other new technologies has received forthright criticism from the editor of Research Fortnight, William Bown. The original editorial raised the spectre of Lysenko, and accused advocates of public engagement of being “worse than Stalinists”. One of the authors of the Demos paper, James Wilsdon, has energetically responded. The resulting exchange of letters will be published in Research Fortnight, but those readers who unaccountably have forgotten to renew their subscription to that organ can read them on the Demos blog.

I’m not going to attempt to summarise Bown’s argument here (mainly because I find it rather difficult to follow). But I will single out one statement he makes to take issue with. Arguing that public engagement simply provides a mechanism to help governments avoid making difficult decisions, he says “The question for these two [Tony Blair and Gordon Brown], and their companions in Parliament, is not whether they think science is shiny and exciting; it is whether they back the deployment of nanotechnology.” This seems to me to combine naivety about politics with a real misunderstanding of the nature of the science. All the debates about nanotechnology should have made one thing absolutely clear: nanotechnology is not a single thing (like nuclear power, say) that we can choose to use or to turn away from. It’s a whole variety of different technologies and potential technologies, with an equally wide range of potential applications. Choices need to be made – are being made right now, in fact – about which research avenues should be pursued, and which should be left to others, and one of the key roles of public engagement is to inform those choices.

Swimming strategies for nanobots

Physics works differently at the nanoscale, and this means that design principles that are familiar in the macro-scale world may not work when shrunk. A great example of this is the problem of how you would propel a nanoscale swimmer through water. To a human-scale swimmer, water resists forward motion mainly by virtue of its inertia. But on the nanoscale, it is the viscosity of water that is the dominant factor. To imagine what it feels like trying to swim at the nanoscale, imagine being immersed in a vat of the stickiest molasses.
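To put numbers on this, here is a back-of-the-envelope calculation (standard textbook estimates, not taken from any of the papers discussed below) of the Reynolds number, the ratio of inertial to viscous forces, for swimmers of different sizes in water. For a person it is around a million; for a bacterium, or a hypothetical nanobot, it is a tiny fraction of one, which is why viscosity completely dominates.

```python
# Reynolds number Re = rho * v * L / mu for swimmers of different sizes in
# water. Re >> 1: inertia dominates; Re << 1: viscosity dominates.

RHO_WATER = 1000.0     # density, kg/m^3
MU_WATER = 1.0e-3      # viscosity, Pa.s

def reynolds(speed_m_per_s, size_m):
    return RHO_WATER * speed_m_per_s * size_m / MU_WATER

swimmers = [
    ("human",          1.0,    1.0),      # ~1 m body, ~1 m/s
    ("bacterium",      30e-6,  2e-6),     # ~2 um body, ~30 um/s
    ("100 nm nanobot", 10e-6,  100e-9),   # guessed speed of ~10 um/s
]
for name, v, L in swimmers:
    print(f"{name:15s}  Re ~ {reynolds(v, L):.1e}")
```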

The mathematics of this situation is intriguing, and it’s been known for a while that any simple back-and-forth motion won’t get you anywhere. Imagine a scallop trying to swim by opening its shell slowly and then shutting it suddenly. This strategy works fine in the macroscopic world, but on the nanoscale you can show that all the ground the scallop gains when it shuts its shell is lost again when it opens it, no matter how big the difference in speed between the forward and backward strokes. To get anywhere, you need some kind of non-reciprocal motion – a motion that looks different when time-reversed. In 2004, Ramin Golestanian and coworkers showed that three spheres joined together could make a nanoscale swimmer. Here’s an article about this work in Physical Review Focus, with a link to a neat animation; here’s another article in Technology Review: Teaching Nanotech to Swim.
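Here is a toy numerical illustration of why non-reciprocal motion matters (my own sketch, not the three-sphere calculation in the papers linked above). At zero Reynolds number the swimmer’s velocity is linear in the rate at which its shape changes, so the displacement per stroke cycle is a line integral over the path traced out in shape space: a stroke that simply retraces its path gets nowhere, however fast one half of it is performed, while a cycle that encloses area in shape space, like the three-sphere swimmer’s two arms moving in sequence, makes progress.

```python
import numpy as np

# Toy mobility coefficients: velocity is assumed linear in the rates of
# change of two shape variables, dx/dt = f1(u1,u2)*du1/dt + f2(u1,u2)*du2/dt.
def f1(u1, u2):
    return 0.3 + 0.1 * u2

def f2(u1, u2):
    return 0.3 - 0.1 * u1

def net_displacement(u1, u2):
    """Line integral of f1 du1 + f2 du2 along the stroke.

    Time never appears here: only the path traced in shape space matters,
    which is why speeding up one half of a reciprocal stroke cannot help."""
    m1 = 0.5 * (f1(u1[:-1], u2[:-1]) + f1(u1[1:], u2[1:]))
    m2 = 0.5 * (f2(u1[:-1], u2[:-1]) + f2(u1[1:], u2[1:]))
    return np.sum(m1 * np.diff(u1) + m2 * np.diff(u2))

s = np.linspace(0.0, 1.0, 20001)   # parameter along one stroke cycle

# Reciprocal "scallop" stroke: both shape variables move out and back along
# the same line in shape space -> zero net displacement per cycle.
u = np.sin(2 * np.pi * s)
print(f"reciprocal stroke:     {net_displacement(u, u):+.2e}")

# Non-reciprocal stroke: the two variables move out of phase, so the cycle
# encloses area in shape space -> a finite displacement per cycle.
print(f"non-reciprocal stroke: {net_displacement(np.sin(2*np.pi*s), np.cos(2*np.pi*s)):+.2e}")
```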

This story has moved forward in two ways since this report. Earlier this year, Ramin Golestanian, together with Tannie Liverpool, from Leeds University, and Armand Ajdari, from ESPCI in Paris, analysed another way of propelling a nanoscale submarine. In this work, published in Physical Review Letters in June this year (abstract here, subscription required for full article), they considered a nanoscale vessel with an enzyme attached to the hull at one point. The enzyme catalyses a chemical reaction that produces a stream of reaction products, like a rocket’s exhaust. Like a rocket, this has the effect of propelling the vessel along, but the physics underlying the effect is quite different. It’s not the inertia of the exhaust that propels the vessel forward; instead, it is the collisions of these product molecules, as they undergo random Brownian motion, that push the nanobot along.

And today, Nature published an experimental report of a miniature swimmer (editor’s summary; full paper requires subscription) which illustrates some of these principles. In this work (from Bibette and coworkers, also at ESPCI, Paris), chains of magnetic nano-particles form a tail which wiggles when an oscillating magnetic field is applied, pulling a payload along.

Ramin has just joined us in the physics department at Sheffield, so I look forward to working with him to take some more steps on the road to a swimming nanobot.

Nanomedicine gets clinical

Everyone agrees that some of the key applications of nanotechnology will be in medicine. Within medicine, drug delivery is an obvious target. So when can we expect to see nano-enabled medicines on the pharmacy shelves? The answer, as usual, depends on what you mean by nanotechnology. Many people have welcomed Abraxane™, which received FDA approval for use for breast cancer earlier this year, as the first nano-drug. But a number of other drugs already in clinical use have just as much right to the nano- label.

Ruth Duncan gives a useful list of nano-medicines in current clinical use in an article in Nano Today, “Nanomedicine gets clinical” (I’ve already referred to this article here). We can summarise the key functions that nano-engineering confers on these products as packaging and targeting – the active drug molecules need to be protected from the body’s systems for repelling foreign materials, and if possible they need to be actively targeted to the parts of the body at which the therapy is directed. For the anti-cancer therapeutics that dominate this list, this target is the tumour.

One approach to targeting is to wrap the molecule up in a liposome – a nanoscale container that is formed, by self-assembly, when soap-like lipid molecules form a bilayer sheet which folds over on itself to make a bag. These are the same structures that are already incorporated in some cosmetics. DaunoXome® consists of the anti-cancer drug daunorubicin encapsulated in liposomes, and is used for the treatment of HIV-related Kaposi’s sarcoma. Doxil® and Caelyx® are liposomal preparations of the related drug doxorubicin, and are used for advanced ovarian cancers. Simple liposomes have quite a short lifetime in the body; in Doxil the liposome surface is coated with the water-soluble polymer polyethylene glycol.

Rather than putting the drug in a liposome, and then coating the liposome with polymer, it is possible simply to attach polyethylene glycol directly to the drug. This is the basis of “polymer therapeutics” (this is Ruth Duncan’s own field). Examples in clinical use include Oncaspar®, for acute lymphoblastic leukemia, and Neulasta®, used to decrease infection in patients receiving chemotherapy. Both these drugs consist of a protein drug molecule which is disguised from the body by being coated in a diffuse cloud of polyethylene glycol (PEG). How PEG works is still not entirely clear, but the basis of the effect is that it forms a diffuse layer which resists protein adsorption.

Mylotarg®, a drug licensed in the USA for acute myeloid leukemia, is a (currently rather rare) example of a targeted drug. The drug itself – a potent anti-tumour antibiotic – is chemically linked to an antibody, a protein molecule which specifically binds to chemical groups on the outside of the target cells. In Abraxane™, it is the drug molecule itself, paclitaxel, that is nanoengineered – it is prepared in a nanoparticulate form to improve its solubility; the nanoparticles are coated with the blood protein albumin.

So what we see now are a number of products which use individual tricks of nanoengineering to improve their effectiveness. What we will probably see in the future is the combination of more than one of these functions in a single product – moving beyond clever formulation to integrated nanodevices.