Keeping the nanotech score

A claim from Lux Research (reported in Small Times here) that China is now second only to the USA in its output of academic nanoscience papers is being met with some scepticism over on Nanodot. While there is clearly a real and important story about the huge recent growth in China's nanoscience capability, I'm also a bit sceptical about the central claim about China's publication share. Of course, I don't know the detailed methodology of the publications study the Lux report cites. But I do know how a study which reached a very similar conclusion, commissioned for the UK's science funding agency EPSRC, was done. Essentially, a database search was done for papers with "nano" or some compound thereof in the title.

I can do this too. If we look in "Web of Science" at papers published in 2004 and 2005 with "nano" or a compound thereof in the title or abstract, we find that, from a total of 59,938 papers, 10,546 – 18% – have at least one address from China. This is still behind the USA, with 28%, but ahead of Japan, at 11%, and Germany, at 8%. The UK is further behind still, at 4%. (Actually, the address label "UK" appears on only a pitiful 27 papers – 2,370 are listed under England, with Wales and Scotland adding a further 487. I never realised British science had such separatist tendencies!) Of course, working out the sums this way gives a set of percentages that add up to more than 100%, since many papers have coauthors from different countries.
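To make the arithmetic concrete, here is a minimal sketch of this kind of counting in Python. The three-paper dataset is invented, standing in for a real Web of Science export, but it shows why shares computed this way can add up to well over 100% once papers have co-authors in several countries.

```python
def country_shares(papers):
    """For each country, the percentage of papers with at least one address there.

    `papers` is a list of (title, [countries]) tuples; because a paper with
    co-authors in several countries is counted once for each of them, the
    shares can easily sum to more than 100%.
    """
    total = len(papers)
    counts = {}
    for _title, countries in papers:
        for country in set(countries):  # count each country at most once per paper
            counts[country] = counts.get(country, 0) + 1
    return {country: 100.0 * n / total for country, n in counts.items()}

# toy stand-in for a database export: two of the three papers are China/USA collaborations
papers = [
    ("nanotube growth", ["China", "USA"]),
    ("nanoparticle synthesis", ["China", "USA"]),
    ("quantum dot emission", ["Germany"]),
]
print(country_shares(papers))  # the three shares sum to about 167%
```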

What's wrong with this is perhaps only clear to scientists working in the field. When I think of what I believe to be the most significant papers in nanoscience, most of them simply don't mention "nano" anywhere in the title. Why should they? Unless they are actually about carbon nanotubes, their title and abstract will generally refer to something much more specific than the rather general and all-encompassing "nano" label. We can get some feel for the fraction of significant and relevant papers excluded by this methodology by asking what proportion of papers by leaders in the nanoscience field would actually show up in a search like this. For example, taking a few more or less random US nanoscientists, only 24% of Whitesides's papers would show up, and 50% of James Heath's; even the rather radical and hardcore nanoscience of Ned Seeman and Fraser Stoddart only passes the "nano" test 54% and 31% of the time respectively. Mark Ratner, despite being a prominent "nano" author, would similarly have nearly 70% of his publications slip undetected through the "nano" net.

And here in the UK, are we lagging behind quite so badly? Maybe, but again, if we look at the output of some of our most prominent nanoscientists, we find that most of it is missed by this kind of bibliometric analysis. Of Richard Friend's 35 papers, only 20% show up in this kind of search, while my Sheffield colleague, the quantum dot guru Maurice Skolnick, similarly produced 35 papers, of which precisely one passed the nano-test.

I'm labouring the point now, and I'm sure the Lux people would say they've done their search in a much more sophisticated way. But I'm still convinced that any kind of mechanistic, keyword-based search of the scientific literature is likely to lead to a highly distorted result, simply because what counts as "nanoscience" is so ill-defined. What you are seeing is not an accurate measure of nanoscience output, but a reflection of how strong the fashion is for attaching a "nano" label to one's work. This is, of course, somewhat unfair to people who are studying nanotubes, for example, who can hardly avoid putting "nano" in their titles and abstracts, but one is strongly tempted to view the ratio (nano papers/total papers) as a kind of "nanohype index". There is clearly genuine growing strength in China's nanoscience output, and there is probably cause for concern in the UK, but these rather crude measures need to be taken with a substantial pinch of salt.

(And how do I score myself on the nanohype index? 7% on a total of 15 papers, I’m perversely proud to report).

Self-assembly vs self-organisation – can you tell the difference?

Self-assembly and self-organisation are important concepts in both nanotechnology and biology, but the distinction between them isn't readily apparent, and this can cause considerable confusion, particularly when the other self-word – self-replication – is thrown into the mix.

People use different definitions, but it seems to me to make a lot of sense to reserve the term self-assembly for equilibrium situations. As described in my book Soft Machines, the combination of programmed patterns of stickiness in nanoscale objects and constant Brownian motion means that, on the nanoscale, complex three-dimensional structures can assemble themselves from their component parts with no external intervention, driven purely by the tendency of systems to minimise their free energy in accordance with the second law of thermodynamics.
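To pin down what equilibrium means here, the standard thermodynamic statement (textbook notation, nothing specific to this post) is that the probability of finding the system in a given configuration is

$$ P(\text{configuration}) \propto \exp\!\left(-\frac{F}{k_{B}T}\right), \qquad F = U - TS, $$

so the self-assembled structure that minimises the free energy F is overwhelmingly the most probable one: Brownian motion does the searching through configurations, and the programmed stickiness determines which configuration has the lowest F.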

We can then reserve self-organisation as a term for those pattern-forming systems which are driven by a constant input of energy. A simple prototype from physics is the set of well-defined convection cells you get when you heat a fluid from below, while in chemistry there are the beautiful patterns you get from systems that combine rather special non-linear chemical kinetics with slow diffusion – the Belousov-Zhabotinsky reaction being the most famous example. A great place to read about such systems is Philip Ball's book The Self-Made Tapestry: Pattern Formation in Nature (though Ball doesn't in fact make the distinction I'm trying to set up here).
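For the chemical patterns, the generic mathematical setting is a reaction-diffusion system; the equations below are the standard textbook form rather than the specific Belousov-Zhabotinsky chemistry:

$$ \frac{\partial u}{\partial t} = f(u,v) + D_u \nabla^2 u, \qquad \frac{\partial v}{\partial t} = g(u,v) + D_v \nabla^2 v, $$

where u and v are concentrations, f and g encode the non-linear kinetics, and D_u and D_v are the diffusion coefficients. Stationary patterns with a characteristic length scale can appear when the kinetics are suitably non-linear and one species diffuses much more slowly than the other, but only for as long as the reaction is fed with fresh reactants: exactly the constant input of energy that separates self-organisation from self-assembly.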

Self-assembly is pretty well understood, and it's clear that at small length scales it is important in biology. Protein folding, for example, is a very sophisticated self-assembly process, and viable viruses can be made in the test-tube simply by mixing up the component proteins and nucleic acid. Self-organisation is much less well understood; it isn't entirely clear that there are universal principles that underlie the many different examples observed, and the relevance of the idea in biology is still under debate. There's a very nice concrete example of the difference between the two ideas reported in a recent issue of Physical Review Letters (abstract here, full PDF preprint here). The authors consider a structural feature of living cells – the pattern of embedded proteins in the cell membrane – and ask, with the help of mathematical models, whether this pattern is likely to arise from equilibrium self-assembly or non-equilibrium self-organisation. The conclusion is that both processes can lead to patterns like the ones observed, but that self-assembly leads to smaller-scale patterns which take longer to develop.

One thing one can say with certainty is that living organisms can't arise wholly from self-assembly, because we know that in the absence of a continuous supply of energy they die. In summary, viruses self-assemble, but elephants (perhaps) self-organise.

Radical innovation in nanomaterials

Wednesday found me, yet again, in London, this time for a one-day meeting organised by the Royal Academy of Engineering called Radical Innovation in Nanomaterials (PDF link). The speakers were a mix of industrialists and innovation theorists, if I can put it that way, with me thrown in for light entertainment. I must say I find the idea that one could find or create a theory of radical innovation that would allow it to be managed predictably a bit hard to accept. But that's presumably why I'm a humble academic rather than a high-flying business leader (or perhaps, more pertinently, the multi-millionaire author of airport business books).

The talks from the industrialists were perhaps more interesting, not least because the underlying message coming out of all of them was so similar. On the face of it, the companies represented couldn't have been more different. There were two global giants, the US-based chemical company DuPont and the Europe-based pharmaceutical major GlaxoSmithKline, and one relative minnow – the recently floated UK nanomaterials company Oxonica (whose CEO's proud boast was that they are the only European pure nanotech company with products generating significant revenue). But the changing environment they were talking about was the same, and one that very much resonates with my comments earlier this week. It's an atomised world in which innovation and intellectual property are generated by many different organisations – in universities and research institutions, in small start-up companies, but less and less in big corporate R&D labs. Core functions like production are increasingly outsourced, and companies like Oxonica flourish best as brokers, identifying useful intellectual property wherever they can, working with contract manufacturers to realise physical products, and then finding other partners – typically large consumer-oriented companies – to develop markets for them.

It’s a model that fits well with prevailing neo-liberal orthodoxies about taking the globalised division of labour to the extreme. Of course it’s a model that must take for granted the absolute integrity and fungibility of intellectual property. I can’t help feeling that this leads to some major potential fragilities, given the difficulties that international patent law is currently going through. The other question that it seems to leave unanswered is this: if production is outsourced and essentially commoditised, who is going to drive the radical innovations, not in the products themselves, but in ways of making things? The orthodox answer, of course, is that competition by itself will do the job. Maybe.

Who’s in charge?

I spent Saturday afternoon in the Natural History Museum in London, not looking at the dinosaurs, but taking part in an event organised by the good people at Demos (not forgetting their colleagues at Lancaster) – nanoscientists-meet-nanopublics.

The format was a very gently moderated group discussion between nanoscientists of various ages (I think, alas, I was the oldest) and a group of members of the public who have been involved in a series of focus group discussions about nanotechnology. I’d summarise the demographic of my group as being “North London soccer mums” (with deep apologies to any of you who might read this!) – and I think it’s fair to say that the overall feeling towards nanotechnology was pretty negative. This was based on two things – an unease about untested nanoparticles in cosmetics, and a deeper unhappiness about the whole idea of human enhancement, particularly in a military context. I think we had a fairly productive discussion about both aspects.

One of the interesting things that came out in the discussion was this worry about "who is in charge". It's a natural human assumption that there is someone, or some organisation, with the power to initiate change or to prevent it if it is judged undesirable. But that's not how science works in a liberal, globalised, market-driven system. I think this realisation that there really isn't anyone in charge – not just in nanotechnology or any other part of science, but in all sorts of aspects of modern life – is what so many people find so frightening about the world we live in. But is there any alternative?

(Second) Best of Small Tech

Small Times, the US-based trade magazine for micro- and nanotechnology, announced its annual Best of Small Tech awards yesterday. I was delighted to find that I was a runner-up in the Advocate category. Since the winner in this category was Fred Kavli, whose Kavli Foundation has endowed a number of chairs and institutions in nanoscience and has established a $1 million biennial prize for nanoscience, I can't feel too hard done by for missing the top slot.

I was pleased to see a few other British names in there, too. Kevin Matthews, CEO of the nanomaterials company Oxonica, won the business leader award, and Peter Dobson, an Oxford University professor who originally spun out Oxonica, won the innovator award. David Fyfe, CEO of the Cambridge University spin-out Cambridge Display Technology, was a runner-up in the business leader category.

Debating the nanotechnology debate

David Forrest, who provided one of the pro-molecular nanotechnology voices at the Nottingham nanotechnology debate back in June, has posted some further reflections on the issues on his website. I’ll comment on these issues more soon.

Meanwhile, for those who weren't able to get to the debate, I believe the film of the event is still being edited for web release, and the text is currently being transcribed for publication in the journal Nanotechnology Perceptions. There'll be more information here when I get it.

Understanding structure formation in thin polymer films

This month's issue of Nature Materials has a paper from my group which provides new insight into the way structure can emerge in ultrathin polymer films by self-assembly. It's very easy to make a very uniform polymer film with a thickness somewhere between 5 and 500 nanometers; in a process called "spin-casting" you just flood a smooth, flat substrate with a solution of the polymer in an organic solvent like toluene, and then spin the substrate round at a couple of thousand rpm. The excess solution flies off, leaving a thin layer from which the solvent quickly evaporates. This process is used all the time in laboratories and in industry; in the semiconductor industry it's the way photoresist layers are laid down. If you use not a single polymer but a mixture of two, then as the solvent is removed the two polymers will phase separate, like oil and water. What's interesting is that sometimes they break up into little blobs in the plane of the film, but at other times they split into two distinct layers, each of which might be only a few tens of nanometers thick. The latter situation, sometimes called "self-stratification", is potentially very useful. It's an advantage for solar cells made from semiconducting polymers to have two layers like this, and Henning Sirringhaus from Cambridge (whose company, Plastic Logic, is actively commercialising polymer electronics) has shown that you can make a polymer field effect transistor in which the gate dielectric layer spontaneously self-stratifies during spin-coating.

The paper (which can be downloaded as a PDF here) describes the experiments that Sasha Heriot, a postdoc in my group, did to try to disentangle what goes on in this complex situation. Our apparatus (built by my former graduate student James Sharp, now a lecturer at Nottingham University) consists of a spin-coating machine in which a laser shines on the film as it spins; we detect both the light that is directly reflected and the pattern of light that is scattered out of the direct beam. The reflected light tells us how thick the film is at any point during the five seconds that the whole process takes, while the scattered light tells us about the lateral structure of the film. What we find is that after the spin-coating process starts, the film first stratifies vertically. As the solvent is removed, the interface separating the two layers becomes wavy, and this wave grows until the two layers break up, leaving the pattern of droplets seen in the final film. We don't yet know exactly why the interface between the two self-stratified layers becomes unstable, but we suspect it's connected with how volatile the solvent is. When we understand this mechanism properly, we should be able to design spin-coating conditions to get the final structure we want.
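The way the reflected laser light yields the thickness is standard thin-film interferometry; the relation below assumes normal incidence and a known refractive index n for the film, neither of which is spelled out here. The reflected intensity goes through one full interference fringe each time the film thins by

$$ \Delta h = \frac{\lambda}{2n}, $$

so counting the fringes as the film thins gives the thickness throughout the few seconds the process takes, while the angular distribution of the scattered light gives the length scale of the lateral structure.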

The relevance of this is that this kind of solvent-based coating process is cheap and scalable to very large areas. The aim is to control the nanostructure of thin films of functional materials like semiconducting polymers simply by adjusting the processing conditions. We want to get the system to make itself as far as possible, rather than having to do lots of separate fabrication steps. If we can do this reliably, then this will get us closer to commercial processes for making, for example, very cheap solar cells using simple printing technology, or simple combinations of sensors and logic circuits by ink-jet printing.

Soft Machines: The Foresight Verdict

I was pleasantly surprised, on picking up a copy of the Foresight Nanotech Institute’s quarterly newsletter, Foresight Nanotech Update (not yet on the web, but it will presumably appear here in due course), to see a two-page, detailed review of my book Soft Machines. It’s actually a pretty positive review – “Soft Machines is an informative and readable exploration of the nanoworld” is a line I can imagine a publicist being pleased to fillet. Perhaps not surprisingly the reviewer doesn’t completely accept my arguments about the feasibility or otherwise of the Drexlerian program, saying “the arguments that Jones produces seem largely sound as far as they go, but not thorough enough to be conclusive”. Actually that’s a conclusion that I’m very comfortable with. We’ll see what things look like over the next couple of years, after some more real debate and some more supporting science.

At the Foresight Vision Weekend

I’m in California, where the Foresight Institute’s Vision Weekend has just finished. I gave a talk, outlining my thoughts about where the soft approach to nanotechnology might lead in the longer term. This was received well enough, though I’m sure without convincing the whole audience. This weekend is supposed to be off the record, so I’ll not give a blow-by-blow account. But one curious thing, which is in principle already a matter of public record, is worth mentioning. If you had looked at the program on the web last week you would have seen that a debate between me and Ralph Merkle about the viability of soft vs hard approaches to radical nanotechnology was scheduled. This debate disappeared from the final version of the program and never happened, for reasons that weren’t explained to me. Maybe this was just a result of the difficulty of trying to fit in a lot of speakers and events. Nonetheless it seems a pity that a community that often complains about the lack of detailed technical discussion of the proposals in Nanosystems didn’t get the chance to hear just such a debate.

Nanotechnology gets complex

The theme of my book Soft Machines is that the nanomachines of biology operate under quite different design principles from those we are familiar with at the macroscale. These design principles exploit the different physics of the nanoworld, rather than trying to engineer around it. The combination of Brownian motion – the relentless shaking and jostling that's ubiquitous in the nanoworld, at least at normal temperatures – and strong surface forces is exploited in the principle of self-assembly. Brownian motion and the floppiness of small-scale structures are exploited in the principle of molecular shape change, which is how our muscles work. We are well on our way to exploiting both these principles in synthetic nanotechnology. But there's another design principle that's extensively used in Nature which nanotechnologists have not yet exploited at all. This is the idea of chemical computing – processing information by using individual molecules as logic gates, and transmitting messages through space using the random motion of messenger molecules, driven to diffuse by Brownian motion. These are the mechanisms that allow bacteria to swim towards food and away from toxins, but they also underlie the intricate ways in which cells in higher organisms like mammals interact and differentiate.
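A rough sense of why diffusion works as a messaging channel at these scales comes from the standard random-walk estimate (the numbers are illustrative and not taken from this post). In three dimensions the mean square displacement of a diffusing molecule is

$$ \langle r^2 \rangle = 6Dt, \qquad \text{so} \qquad t \sim \frac{L^2}{6D}. $$

For a small messenger molecule with D of order 10^-9 m^2/s, diffusing across a cell-sized distance of a micron takes a fraction of a millisecond, while crossing a metre would take years: diffusion-based signalling is quick precisely because cells are small.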

One argument that holders of a mechanical conception of radical nanotechnology sometimes use against trying to copy these control mechanisms is that they are simply too complicated to deal with. But there's an important distinction to make here. These control systems and signalling networks aren't just complicated – they're complex. Recent work on the statistical mechanics of this sort of multiply connected, evolving network is beginning to yield fascinating insights (see, for example, Albert-László Barabási's website). It seems likely that these biological signalling and control networks have some generic features in common with other complex networks, such as the internet and even, perhaps, free-market economies. Rather than being the hopelessly complicated result of billions of years of aimless evolutionary accretion, we should perhaps think of these networks as being optimally designed for robustness in the noisy and unpredictable nanoscale environment.
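As a concrete illustration of the kind of evolving network Barabási studies, here is a toy preferential-attachment sketch in Python. It is my own illustrative example, not anything taken from his site: new nodes link preferentially to nodes that are already well connected, and the result is a few highly connected hubs alongside many sparsely connected nodes, the sort of heterogeneous architecture alluded to above.

```python
import random

def preferential_attachment(n_nodes, m):
    """Grow a toy Barabasi-Albert style network: each new node attaches to m
    existing nodes chosen with probability proportional to their degree."""
    # start from a small fully connected seed of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # each node appears in this list once per unit of degree, so sampling
    # from it implements preferential attachment
    pool = [node for edge in edges for node in edge]
    for new_node in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(pool))
        for target in targets:
            edges.append((new_node, target))
            pool.extend([new_node, target])
    return edges

edges = preferential_attachment(2000, 2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print("max degree:", max(degree.values()),
      "median degree:", sorted(degree.values())[len(degree) // 2])
```

Running it shows a maximum degree far larger than the median: a handful of hubs carry most of the connections, which is part of what makes networks like this robust to the random failure of individual nodes.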

It seems to me that if we are going to have nanoscale systems of any real complexity, we are going to have to embrace these principles. Maintaining rigid, central control of large-scale systems always looks superficially like a good idea, but such control systems are often brittle and fail to adapt to unpredictability, change and noise. The ubiquity of noise in the nanoscale world offers a strong argument for using complex, evolved control systems. But we still lack some essential tools for doing this. In particular, biological signalling relies on allostery. This principle underlies the operation of the basic logic gates in chemical computing; the idea is that when a messenger molecule binds to a protein, it subtly changes the shape of the protein and so affects its ability to carry out a chemical operation. Currently, synthetic analogues for this crucial function are very thin on the ground (see this abstract for something that seems to be going in the right direction). It would be good to see more effort put into this difficult, but exciting, direction.
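To show, in the most schematic terms, what an allosteric logic gate might look like, here is a toy sketch in Python that uses Hill functions for the binding of two messenger molecules. It is purely illustrative: the AND behaviour, the parameters and the function names are all my own inventions, not a model of any real protein or of the work in the abstract linked above.

```python
def hill(concentration, k_half, n=2):
    """Fraction of binding sites occupied at a given messenger concentration."""
    return concentration**n / (k_half**n + concentration**n)

def allosteric_and_gate(signal_a, signal_b, k_a=1.0, k_b=1.0):
    """Toy AND gate: the protein is only active when both messengers are bound,
    so its activity is taken as the product of the two occupancies."""
    return hill(signal_a, k_a) * hill(signal_b, k_b)

for a in (0.1, 10.0):
    for b in (0.1, 10.0):
        print(f"[A] = {a:4.1f}  [B] = {b:4.1f}  ->  activity {allosteric_and_gate(a, b):.2f}")
```

With both inputs low the activity is close to zero, with both high it approaches one, and with only one input present it stays low: the AND-like response that, in a real protein, would come from the shape change induced by binding.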