Then worms ’ll come and eat thee oop

I had a late night in the prosperous, liberal, Yorkshire town of Ilkley last night, doing a talk and question and answer session on nanotechnology at the local Cafe Philosophique. An engaged and eclectic audience kept the discussion going well past the scheduled finish time. Two points particularly struck me. One recurring question was whether it was ever realistic to imagine that we can relinquish technological developments with negative consequences – “if it can be done, it will be done” was the comment made more than once. I really don’t like this conclusion, but I’m struggling to find convincing arguments against it. A more positive comment concerned the idea of regulation; we are used to thinking of this idea entirely in terms of narrow prohibitions – don’t release these nanoparticles into the environment, for example. But we need to work out how to make regulation a positive force that steers the technology in a desirable direction, rather than simply trying to sit on it.

(Non British readers may need to know that the headline is a line from a rather odd and morbid folk-song called “On Ilkley Moor baht hat”, sung mostly by drunken Yorkshiremen.)

Bad reasons to oppose nanotechnology: part 1

An article in this month’s issue of The Ecologist, by the ETC group’s Jim Thomas, makes an argument against nanotechnology that combines ignorance and gullibility with the risk of causing real harm to some of the world’s poorest people. What, he says, will happen to the poor cotton farmers and copper miners of the third world when nanotechnology-based fabric treatments, like those sold by the Nanotex Corp, make cotton obsolete, and carbon nanotubes replace copper as electrical conductors? This argument is so wrong on so many levels that it’s difficult to know where to start.

To start with, is there any development economist who would actually argue that basing a third world economy on an extractive industry like copper mining is a good way to get sustained economic development and good living standards for the population as a whole? Likewise, it’s difficult to see that the industrial-scale farming of cotton, with its huge demand for water, can be anything other than an ecological disaster. Zambia is not exactly the richest country in Africa, and Kazakhstan is not a great advertisement for the environmental benefits of cotton growing.

And is the premise even remotely realistic? I wrote below about how novel the Nanotex fibre treatments actually are; in fact there are many fibre treatments available now, some carrying a “nano” label, some not, which change the handling and water-resistance properties of a variety of textiles, both natural and artificial. These are just as likely to increase markets for natural fibres as for artificial ones. And as for nanotubes replacing copper, at a current cost of £200 a gram this is not going to happen any time soon. What this argument demonstrates is that, unfortunately for a campaigning group, ETC is curiously gullible, with a propensity to mistake corporate press releases and the most uncritical nano-boosterism for reality.

This matters for two reasons. Firstly, on the positive side, nanotechnology really could benefit the environment and the world’s poor. Cheap ways of providing clean water and new sources of renewable energy are realistic possibilities, but they won’t happen automatically and there’ll be real debates about how to set priorities in a way which makes the technology bring benefits to the poor as well as the rich. What these debates are going to need from their participants is some degree of economic and scientific literacy. Secondly, there are some real potential downsides that might emerge from the development of nanotechnology; we need a debate that’s framed in a way that recognises the real risks and doesn’t waste energy and credibility on silly side-issues.

Luckily, there is at least one NGO that is demonstrating a much more sophisticated, subtle and intelligent approach – Greenpeace. The contribution of their chief scientist, Doug Parr, to a recent debate on nanotechnology held by the Royal Society in the light of their recent report, is infinitely more effective at focusing on the real issues.

Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation of both molecular motors and protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove; both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low-frequency collective vibrations are lowered further in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
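
To make the entropy argument a little more concrete, here is a back-of-the-envelope version – my own simplified sketch, treating each collective vibration as a classical harmonic oscillator, not Smith’s actual calculation:

```latex
% Entropy of one classical harmonic mode of frequency omega at temperature T;
% a softer (lower-frequency) mode has higher entropy.
S(\omega) = k_B\left[\,1 + \ln\frac{k_B T}{\hbar\omega}\,\right]

% If binding softens a mode from omega to omega' < omega, the entropy change is
\Delta S = k_B \ln\frac{\omega}{\omega'} > 0 ,

% which contributes a favourable -T * Delta S term to the binding free energy:
\Delta G_{\mathrm{bind}} = \Delta H - T\,\Delta S .
```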

A highly simplified theoretical model of allosteric binding, solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought about how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you cool a protein far enough there’s a temperature – a glass transition temperature – below which these low-frequency vibrations are frozen out. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from a rather difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines, such as the light-driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important conclusion emerging from all this is that protein-based machines don’t work despite their floppiness and their constant random flexing and vibration – they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

What is this thing called nanotechnology? Part 2. Nanoscience versus Nanotechnology

In the first part of my attempt to define nanotechnology terms, I discussed definitions of the nanoscale. Now I come to the important and underappreciated distinction between nanoscience and nanotechnology.

Nanoscience describes the convergence of physics, chemistry, materials science and biology to deal with the manipulation and characterisation of matter on the nanoscale.

Many subfields of these disciplines have been dealing with nanoscale phenomena for many years. A very non-exhaustive list of relevant sub-fields, with examples of topics in nanoscience, would include:

  • Colloid science. The characterisation and control of forces between sub-micron particles to control the stability of dispersions.
  • Metallurgy. The control of nanoscale structure to optimise mechanical and other properties – e.g. particle and precipitate hardening.
  • Molecular biology and biophysics. Structural characterisation at atomic resolution first of complex biomolecules, now of assemblies of macromolecules which function as nanomachines.
  • Polymer science. Systems such as block copolymers which self-assemble to form complex nanoscale structures, new architectures like hyperbranched polymers and dendrimers.
  • Semiconductor physics. Nanoscale low dimensional structures like multilayers, wires and dots exploiting quantum effects for new electronic and optoelectronic devices like light emitting diodes and lasers.
  • Supramolecular chemistry. The use of non-covalent interactions to create self-assembled nanoscale structures from molecular components.

The distinguishing feature of nanoscience is that increasingly we find methods and techniques from more than one of these existing subfields combined in novel ways.

    Nanotechnology is an engineering discipline which combines methods from nanoscience with the disciplines of economics and the market to create usable and economically viable products.

    Nanoscience and nanotechnology need to be distinguished. Without nanoscience, nanotechnology will not be possible. On the other hand, if you invest money in a nanoscience venture under the impression that it is nanotechnology, you are sure to be disappointed.

    In the next installment, I’ll discuss the various kinds of nanotechnology, from incremental technologies such as shampoos and textile treatments to the more radical visions.

    Will molecular electronics save Moore’s Law?

Mark Reed, from Yale, was another speaker at a meeting I was at in New Jersey last week. He gave a great talk about the promise and achievements of molecular electronics, which I thought was both eloquent and well-judged.

The context for the talk is provided by the question marks hanging over Moore’s law, the well-known observation that the number of transistors per integrated circuit, and thus the available computer power, has grown exponentially since 1965. There are strong indications that we are approaching the time when this dramatic increase, which has done so much to shape the way the world’s economy has changed in recent decades, will come to an end.
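
As a rough illustration of what this kind of exponential growth means – the starting point and doubling time below are my own approximate figures, not numbers from Reed’s talk – a simple extrapolation looks like this:

```python
# Rough sketch of Moore's-law scaling: transistor count per chip doubling
# roughly every two years. The baseline (the ~2,300-transistor Intel 4004
# of 1971) and the two-year doubling time are approximate assumptions.

def transistors(year, base_year=1971, base_count=2300, doubling_time=2.0):
    """Extrapolated transistor count per chip in a given year."""
    return base_count * 2 ** ((year - base_year) / doubling_time)

for year in (1971, 1989, 2004):
    print(year, f"~{transistors(year):.1e} transistors")
```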

The semiconductor industry is approaching a “red brick wall”. This phrase comes from the International Technology Roadmap for Semiconductors, an industry consensus document which sets out the technical barriers that need to be overcome in order to maintain the projected growth in computer power. In the technical tables, cells which describe technical problems with no known solution are coloured red, and by 2007-8 these red cells proliferate to the point of becoming continuous – hence the red brick wall.

A more graphic illustration of the problems the industry faces was provided by a plot that Reed showed of surface power density as a function of time. This rather entertaining plot showed that current devices have long surpassed the areal power density of a hot-plate, are not far away from the values for a nuclear reactor, and somewhere around the middle of the next decade will surpass that of the surface of the sun. Now I find the warm glow from my Powerbook quite comforting on my lap, but carrying a small star around with me is going to prove limiting.
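
Just to check that this isn’t pure hyperbole, here’s a rough order-of-magnitude comparison (my own numbers, not figures taken from Reed’s plot). A processor dissipating around 100 W from a die of roughly a square centimetre is running at about 100 W per square centimetre, while the Stefan–Boltzmann law gives the power radiated per unit area by the sun’s surface (a black body at roughly 5800 K) as

```latex
% Stefan-Boltzmann estimate of the power density at the sun's surface
\frac{P}{A} = \sigma T^{4}
            \approx 5.67\times10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}} \times (5800\,\mathrm{K})^{4}
            \approx 6\times10^{7}\,\mathrm{W\,m^{-2}}
            \approx 6000\,\mathrm{W\,cm^{-2}} ,
```

so less than two orders of magnitude separate today’s chips from the surface of a star.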

    So the idea that molecular electronics might help overcome these difficulties is quite compelling. In this approach, individual molecules are used as the components of integrated circuits, as transistors or diodes, for example. This provides the ultimate in miniaturisation.

The good news is that (despite the Schön debacle) there are some exciting and solid results in the field. The simplest devices, like diodes, have two terminals, and there is no doubt that single molecule two-terminal devices have been convincingly demonstrated in the lab. Three terminal devices, like transistors, seem to be vital to make useful integrated circuits, though, and there progress has been slower. It’s difficult enough to wire up two connections to a single molecule, but gluing a third one on is even harder. This feat has been achieved for carbon nanotubes.

What’s the downside? The carbon nanotube transistors have a nasty and underpublicised secret – the connections between the nanotubes and the electrodes are not, in the jargon, Ohmic – that means that electrons have to be given an extra push to get them from the electrode into the nanotube. This makes it difficult to scale them down to the small sizes that would be needed to make them competitive with silicon. And the single molecule devices have the nasty feature that every one is different. Conventional microelectronics works because every one of the tens of millions of transistors on something like a Pentium is absolutely identical. If the characteristics of each component varied at random, the whole way we currently do computing would need to be rethought.

    So it’s clear to me that molecular electronics remains a fascinating and potentially valuable research field, but it’s not going to deliver results in time to prevent a slow-down in the growth of computer power that’s going to begin in earnest towards the end of this decade. That’s going to have dramatic and far-reaching effects on the world economy, and it’s coming quite soon.

    Training the nanotechnologists of the future

    It’s that time of year when academic corridors are brightened by the influx of students, new and returning. I’m particularly pleased to see here at Sheffield the new intake for the Masters course in Nanoscale Science and Technology that we run jointly with the University of Leeds.

    We’ve got 29 students starting this year; it’s the fourth year that the course has been running and over that time we’ve seen a steady growth in demand. I hope that reflects an appreciation of our approach to teaching the subject.

My view is that to work effectively in nanotechnology you need two things. First comes the in-depth knowledge and problem-solving ability you get from studying a traditional discipline, whether that’s a pure science, like physics or chemistry, or an applied science, like materials science, chemical engineering or electrical engineering. But then you need to learn the languages of many other disciplines, because no physicist or chemist, no matter how talented at their own subject, will be able to make much of a contribution in this area unless they are able to collaborate effectively with people with very different sets of skills. That’s why to teach our course we’ve assembled a team from many different departments and backgrounds; physicists, chemists, materials scientists, electrical engineers and molecular biologists are all represented.

Of course, the nature of nanotechnology is such that there’s no universally accepted curriculum, no huge textbook of the kind that beginning physicists and chemists are used to. The speed of development of the subject is such that we’ve got to make much more use of the primary research literature than one would for, say, a Masters course in physics. And because nanotechnology should be about practice and commercialisation as well as theory, we also refer to the patent literature, something that’s, I think, pretty uncommon in academia.

    In terms of choice of subjects, we’re trying to find a balance between the hard nanotechnology of lithography and molecular beam epitaxy and the soft nanotechnology of self-assembly and bionanotechnology. The book of the course, “Nanoscale Science and Technology”, edited by my colleagues Rob Kelsall, Ian Hamley and Mark Geoghegan, will be published in January next year.

    What is this thing called nanotechnology? Part 1. The Nano-scale.

Nanotechnology, of course, isn’t a single thing at all. That’s why debates about the subject often descend into mutual incomprehension, as different people use the same word to mean different things, whether it’s business types talking about fabric treatments, scientists talking about new microscopes, or posthumanists and futurists talking about universal assemblers. I’ve attempted to break the term up a little and separate out the different meanings of the word. I’ll soon put these nanotechnology definitions on my website, but I’m going to try out the draft definitions here first. First, the all-important issue of scale.

Nanotechnologies get their name from a unit of length, the nanometre. A nanometre is one billionth of a metre, but let’s try to put this in context. We could call our everyday world the macroscale. This is the world in which we can manipulate things with our bare hands, and in rough terms it covers about a factor of a thousand. The biggest things I can move about are about half a metre big (if they’re not too dense), and my clumsy fingers can’t do very much with things smaller than half a millimetre.

We’ve long had the tools to extend the range of human abilities to manipulate matter on smaller scales than this. Most important is the light microscope, which has opened up a new realm of matter – the microscale. Like the macroscale, this also embraces roughly another factor of a thousand in length scales. At the upper end, objects half a millimetre or so in size provide the link with the macroscale; they are still visible to the naked eye, but handling them becomes much more convenient with the help of a simple microscope or even a magnifying glass. At the lower end, the wavelength of light itself, around half a micrometre, gives a lower limit on the size of objects which can be discriminated even with the most sophisticated laboratory light microscope.

Below the microscale is the nanoscale. If we take as the upper limit of the nanoscale the half-micron or so that represents the smallest object that can be resolved in a light microscope, then another factor of one thousand takes us to half a nanometre. This is a very natural lower limit for the nanoscale, because it is a typical size for a small molecule. The nanoscale domain, then, in which nanotechnology operates, is one in which individual molecules are the building blocks of useful structures and devices.
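
Putting the three regimes side by side as a piece of simple arithmetic (the round numbers are, of course, only rough):

```latex
% Three successive factors of a thousand in length scale (rough round numbers)
\underbrace{0.5\,\mathrm{m}\;\rightarrow\;0.5\,\mathrm{mm}}_{\text{macroscale}}
\qquad
\underbrace{0.5\,\mathrm{mm}\;\rightarrow\;0.5\,\mu\mathrm{m}}_{\text{microscale}}
\qquad
\underbrace{0.5\,\mu\mathrm{m}\;\rightarrow\;0.5\,\mathrm{nm}}_{\text{nanoscale}}
```

each range covering a factor of a thousand, so that the bottom of the nanoscale sits a factor of a billion below the everyday half metre.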

These definitions are by their nature arbitrary, and it’s not worth spending a lot of time debating precise limits on length scales. Some definitions – the US National Nanotechnology Initiative provides one example – use a smaller upper limit of 100 nm. There isn’t really any fundamental reason for choosing this number over any other one, except that this definition carries the authority of President Clinton, who of course is famous for the precision of his use of language. Some other definitions attempt to attach some more precise physical significance to this upper length limit on nanotechnology, by appealing to some length at which finite size effects, usually of quantum origin, become important. This is superficially appealing but unattractive on closer examination, because the relevant length-scale on which these finite size effects become important differs substantially according to the phenomenon being looked at. And this line of reasoning leads to an absurd, but commonly held, view that the nanoscale is simply the length-scale on which quantum effects become important. This is a very unhelpful definition when one thinks about it for longer than a second or two; there are plenty of macroscopic phenomena that you can’t understand without invoking quantum mechanics. Magnetism and the electronic behaviour of semiconductors are two everyday examples. And equally, many interesting nanoscale phenomena, notably virtually all of cell biology, don’t really involve quantum mechanical effects in any direct way.

    So I’m going to stick to these twin definitions – it’s the nanoscale if it’s too small to resolve in an ordinary light microscope, and if it’s bigger than your typical small molecule.

    None but the brave deserve the (nano)fair

    I’m in St Gallen, Switzerland, in the unfamiliar environment (for an academic) of a nanotechnology trade fair. The commercialisation arm of our polymer research activities in the University of Sheffield, the Polymer Centre, is one of the 14 UK companies and organisations that are exhibiting as part of the official UK government stall at Nanofair 2004.

It’s interesting to see who’s exhibiting. The majority of exhibitors are equipment manufacturers, which very much supports one piece of conventional wisdom about nanotechnology as a business: that the first people to make money from it will be the suppliers of the tools of the trade. Perhaps the second biggest category is the countries and regions trying to promote themselves as desirable locations for businesses to relocate to. Companies that actually have nanotechnology products for actual consumer markets are very much in the minority, though there are certainly a few interesting ones there.

Alternative photovoltaics (dye-sensitised and/or polymer-based) are making a strong showing, helped by a lecture from Alan Heeger, largely about Konarka. This must be one of the major areas where incremental nanotechnology has the potential to make a disruptive change to the economy. A less predictable, but fascinating, stand for me was from a Swiss plastics injection moulding company called Weidmann. Injection moulding is the familiar (and very cheap) way in which many plastic items, like the little plastic toys that come in cereal boxes, are made. Weidmann are demonstrating an injection moulded part in an ordinary commodity polymer with a controlled surface topography at the level of 5-10 nanometres. To me it is stunning that such a cheap and common processing technology can be adapted (certainly with some very clever engineering) to produce nanostructured parts in this way. Early applications will be parts with optical effects like holograms printed directly in, and, more immediately, microfluidic reactors for diagnostics and testing.

The UK has a big presence here, and our stand has some very interesting exhibitors on it. I’ll single out Nanomagnetics, which uses a naturally occurring protein to template the manufacture of magnetic nanoparticles with very precisely controlled sizes. These nanoparticles are then used either for high density data storage applications or for water purification, as removable forward osmosis agents. This is a great example of exploiting biological nanotechnology, very much in accord with the philosophy outlined in my book Soft Machines; I should declare an interest, in that I’ve just joined the scientific advisory board of this company.

The UK government is certainly working hard to promote the interests of its nascent nanotechnology industry. Our stall is full of well-dressed and suave diplomats and civil servants. However, one of the small business exhibitors was muttering a little that if only the government were willing to spend the money directly supporting the companies with no-strings contracts, as the US government is doing with companies like Nanosys, then maybe the UK’s prospects would be even brighter.

    If biology is so smart, how come it never invented the mobile phone/iPod/Ford Fiesta?

Chris Phoenix, over on the CRN blog, in reply to a comment of mine, asked an interesting question to which I replied at such length that I feel moved to recycle my answer here. His question was: given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesised with wet chemistry, why is it that life never discovered graphite? From this he questioned the degree to which biology could be claimed to have found optimum or near-optimum solutions to the problems of engineering at the nanoscale. I answered his question (or at least commented on it) in three parts.

Firstly, I don’t think that biology has solved all the problems it faces optimally – it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one is, the more optimal the solutions are. This is obvious when one thinks about it; the problems of making nanoscale machines were the first problems biology had to solve, it had the longest to do it, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like the eye) biology has to co-opt solutions that were evolved to solve some other problem. I would argue that many of the local maxima that evolution gets trapped in are actually near-optimal solutions of nanotechnology problems that have had to be sub-optimally adapted for larger scale operation. As single-molecule biophysics progresses and indicates just how efficient many biological nanomachines are, this view, I think, gets more compelling.

Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of different properties – strength, stiffness, fracture toughness – as well as external factors such as difficulty of processing and cost (either in money for humans or in energy for biology), and find the best compromise set of properties to achieve fitness for purpose. So the question you should ask is: in what circumstances would the property of high strength be so valuable for an organism, particularly a nanoscale organism, that all other factors would be overruled? I can’t actually think of many, as organisms, particularly small ones, generally need toughness, resilience and self-healing properties rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

Finally, don’t forget that strength isn’t really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask, not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison, the linear polymers that biology uses when it needs high strength are actually much more forgiving, if you can work out how to get them aligned – it’s much easier to make a long polymer with no defects than it is to make a two- or three-dimensional structure with a similar degree of perfection.
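
To put a number on that distinction between stiffness and strength, the standard continuum result for brittle fracture – Griffith’s criterion, a textbook formula rather than anything from the exchange above – relates the fracture stress to the size of the largest flaw:

```latex
% Griffith criterion for brittle fracture: the fracture stress is set by the
% flaw size a, not by the bond strength alone.
\sigma_f \approx \sqrt{\frac{2 E \gamma_s}{\pi a}}
```

where E is the Young’s modulus (set by the stiffness of the bonds), γs is the surface energy and a is the size of the largest crack-like flaw; double the flaw size and the strength falls by a factor of √2, however strong the individual bonds are.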

    Lord of the Rings

As light relief after the last rather dense post, here’s one of the sillier exchanges from Monday’s round-up of events at the British Association meeting:

    Quentin Cooper (compere of the event)
    – I noticed that one of the speakers described Drexler’s book “Engines of Creation” as the “Lord of the Rings” of nanotechnology, is that right?

    Me
    – No, Engines of Creation is “The Hobbit” of nanotechnology, it’s short, easy-to-read and everyone likes it. “Nanosystems” is “The Lord of the Rings”, it’s long, dense, half the world thinks it’s the best book ever written and the other half thinks it’s rubbish.

    Henry Gee (Nature magazine)
    – Are you sure it’s not the Silmarillion?