Feel the vibrations

The most convincing argument that it must be possible to make sophisticated nanoscale machines is that life already does it – cell biology is full of them. But whereas the machines proposed by Drexler are designed from rigid materials drawing on the example of human-scale mechanical engineering, nature uses soft and flexible structures made from proteins. At the temperatures at which protein machines operate, random thermal fluctuations – Brownian motion – cause the structures to be constantly flexing, writhing and vibrating. How is it possible for a mechanism to function when its components are so wobbly?

It’s becoming more and more clear that the internal flexibility of proteins and their constant Brownian random vibration is actually vital to the way these machines operate. Some fascinating evidence for this view was presented at a seminar I went to yesterday by Jeremy Smith, from the University of Heidelberg.

Perhaps the most basic operation of a protein-based machine is the binding of another molecule – a ligand – to a specially shaped site in the protein molecule. The result of this binding is often a change in shape of the protein. It is this shape change, which biologists call allostery, that underlies the operation both of molecular motors and of protein signalling and regulation.

It’s easy to imagine ligand binding as being like the interaction between a lock and a key, and that image is used in elementary biology books. But since both ligand and protein are soft, it’s better to think of it as an interaction between hand and glove: both ligand and protein can adjust their shape to fit better. But even this image doesn’t convey the dynamic character of the situation; the protein molecule is flexing and vibrating due to Brownian motion, and the different modes of vibration it can sustain – its harmonics, to use a musical analogy – are changed when the ligand binds. Smith was able to show for a simple case, using molecular dynamics simulations, that this change in the possible vibrations of the protein molecule plays a major role in driving the ligand to bind. Essentially, what happens is that, with the ligand bound, the low-frequency collective vibrations are lowered further in frequency – the molecule becomes effectively softer. This leads to an increase in entropy, which provides a driving force for the ligand to bind.
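The entropic driving force Smith describes can be sketched with the standard classical harmonic-oscillator result, S = k_B[1 + ln(k_B T/ħω)] per vibrational mode: lowering a mode’s frequency raises its entropy. The mode frequencies in the Python sketch below are entirely hypothetical, chosen only to illustrate the sign of the effect, not any real protein’s spectrum.

```python
import math

# Physical constants (SI units)
kB = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def classical_mode_entropy(freq_hz, T=300.0):
    """Entropy of one classical harmonic mode: S = kB * (1 + ln(kB*T / (hbar*omega)))."""
    omega = 2 * math.pi * freq_hz
    return kB * (1 + math.log(kB * T / (hbar * omega)))

# Hypothetical low-frequency collective modes (Hz) before and after ligand
# binding; binding "softens" the protein, i.e. lowers each mode's frequency.
before = [1.0e12, 1.5e12, 2.0e12]
after = [0.8e12, 1.2e12, 1.7e12]

dS = sum(classical_mode_entropy(a) - classical_mode_entropy(b)
         for a, b in zip(after, before))
print(dS > 0)  # softer modes mean higher entropy
```

Because every frequency falls on binding, the entropy change comes out positive, and the resulting −TΔS term lowers the free energy of the bound state – the entropic driving force described above.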

A highly simplified theoretical model of allosteric binding solved by my colleague up the road in Leeds, Tom McLeish, has just been published in Physical Review Letters (preprint, abstract, subscription required for full published article). This supports the notion that the entropy inherent in thermally excited vibrations of proteins plays a big role in ligand binding and allosteric conformational changes. As it’s based on rather a simple model of a protein, it may offer food for thought for how one might design synthetic systems using the same principles.

There’s some experimental evidence for these ideas. Indirect evidence comes from the observation that if you lower the temperature of a protein far enough there’s a temperature – a glass transition temperature – at which these low frequency vibrations stop working. This temperature coincides with the temperature at which the protein stops functioning. More direct evidence comes from rather a difficult and expensive technique called quasi-elastic neutron scattering, which is able to probe directly what kinds of vibrations are happening in a protein molecule. One experiment Smith described directly showed just the sort of softening of vibrational modes on binding that his simulations predict. Smith’s seminar went on to describe some other convincing, quantitative illustrations of the principle that flexibility and random motion are vital for the operation of other machines such as the light driven proton pump bacteriorhodopsin and one of the important signalling proteins from the Ras GTPase family.

The important emerging conclusion from all this is this: it’s not that protein-based machines work despite their floppiness and their constant random flexing and vibrations, they work because of it. This is a lesson that designers of artificial nanomachines will need to learn.

What is this thing called nanotechnology? Part 2. Nanoscience versus Nanotechnology

In the first part of my attempt to define nanotechnology terms, I discussed definitions of the nanoscale. Now I come to the important and underappreciated distinction between nanoscience and nanotechnology.

Nanoscience describes the convergence of physics, chemistry, materials science and biology to deal with the manipulation and characterisation of matter on the nanoscale.

Many subfields of these disciplines have been dealing with nanoscale phenomena for many years. A very non-exhaustive list of relevant sub-fields, with examples of topics in nanoscience, would include:

  • Colloid science. The characterisation and control of forces between sub-micron particles to control the stability of dispersions.
  • Metallurgy. The control of nanoscale structure to optimise mechanical and other properties – e.g. particle and precipitate hardening.
  • Molecular biology and biophysics. Structural characterisation at atomic resolution first of complex biomolecules, now of assemblies of macromolecules which function as nanomachines.
  • Polymer science. Systems such as block copolymers which self-assemble to form complex nanoscale structures, new architectures like hyperbranched polymers and dendrimers.
  • Semiconductor physics. Nanoscale low dimensional structures like multilayers, wires and dots exploiting quantum effects for new electronic and optoelectronic devices like light emitting diodes and lasers.
  • Supramolecular chemistry. The use of non-covalent interactions to create self-assembled nanoscale structures from molecular components.
The distinguishing feature of nanoscience is that increasingly we find methods and techniques from more than one of these existing subfields combined in novel ways.

    Nanotechnology is an engineering discipline which combines methods from nanoscience with the disciplines of economics and the market to create usable and economically viable products.

    Nanoscience and nanotechnology need to be distinguished. Without nanoscience, nanotechnology will not be possible. On the other hand, if you invest money in a nanoscience venture under the impression that it is nanotechnology, you are sure to be disappointed.

    In the next instalment, I’ll discuss the various kinds of nanotechnology, from incremental technologies such as shampoos and textile treatments to the more radical visions.

    Will molecular electronics save Moore’s Law?

    Mark Reed, from Yale, was another speaker at a meeting I was at in New Jersey last week. He gave a great talk about the promise and achievement of molecular electronics which I thought was both eloquent and well-judged.

    The context for the talk is provided by the question marks hanging over Moore’s law, the well-known observation that the number of transistors per integrated circuit, and thus available computer power, has grown exponentially since 1965. There are strong indications that we are approaching the time when this dramatic increase, which has done so much to shape the way the world’s economy has changed recently, is coming to an end.
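The arithmetic behind that exponential growth is easy to sketch. The code below assumes the commonly quoted doubling time of roughly two years, which is an approximation rather than anything precise in Moore’s original observation.

```python
# Moore's law, in its commonly quoted form: transistor counts per chip
# double roughly every two years. (The two-year figure is an approximation.)
def transistors(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# Forty years of doubling every two years is twenty doublings:
growth = transistors(1, 40) / transistors(1, 0)
print(growth)  # 2**20, i.e. 1048576.0
```

Twenty doublings multiply the count by about a million – roughly the gulf between the few-thousand-transistor chips of the early 1970s and their descendants four decades on – which is why the end of this growth would be so consequential.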

    The semiconductor industry is approaching a “red brick wall”. This phrase comes from the International Technology Roadmap for Semiconductors, an industry consensus document which sets out the technical barriers that need to be overcome in order to maintain the projected growth in computer power. In the technical tables, cells which describe technical problems with no known solution are coloured red, and by 2007-8 these red cells proliferate to the point of becoming continuous – hence the red brick wall.

    A more graphic illustration of the problems the industry faces was provided in a plot that Reed showed of surface power density as a function of time. This rather entertaining plot showed that current devices have long surpassed the areal power density of a hot-plate, are not far away from the values for a nuclear reactor, and somewhere around the middle of the next decade will surpass the surface of the sun. Now I find the warm glow from my Powerbook quite comforting on my lap but carrying a small star around with me is going to prove limiting.

    So the idea that molecular electronics might help overcome these difficulties is quite compelling. In this approach, individual molecules are used as the components of integrated circuits, as transistors or diodes, for example. This provides the ultimate in miniaturisation.

    The good news is that (despite the Schön debacle) there are some exciting and solid results in the field. The simplest devices, like diodes, have two terminals, and there is no doubt that single molecule two-terminal devices have been convincingly demonstrated in the lab. Three terminal devices, like transistors, seem to be vital to make useful integrated circuits, though, and there progress has been slower. It’s difficult enough to wire up two connections to a single molecule, but gluing a third one on is even harder. This feat has been achieved for carbon nanotubes.

    What’s the downside? The carbon nanotube transistors have a nasty and underpublicised secret – the connections between the nanotubes and the electrodes are not, in the jargon, Ohmic – that means that electrons have to be given an extra push to get them from the electrode into the nanotube. This makes it difficult to scale them down to the small sizes that would be needed to make them competitive with silicon. And the single molecule devices have the nasty feature that every one is different. Conventional microelectronics works because every one of the tens of millions of transistors on something like a Pentium is absolutely identical. If the characteristics of each of the components were to vary randomly, the whole way we currently do computing would need to be rethought.

    So it’s clear to me that molecular electronics remains a fascinating and potentially valuable research field, but it’s not going to deliver results in time to prevent a slow-down in the growth of computer power that’s going to begin in earnest towards the end of this decade. That’s going to have dramatic and far-reaching effects on the world economy, and it’s coming quite soon.

    Training the nanotechnologists of the future

    It’s that time of year when academic corridors are brightened by the influx of students, new and returning. I’m particularly pleased to see here at Sheffield the new intake for the Masters course in Nanoscale Science and Technology that we run jointly with the University of Leeds.

    We’ve got 29 students starting this year; it’s the fourth year that the course has been running and over that time we’ve seen a steady growth in demand. I hope that reflects an appreciation of our approach to teaching the subject.

    My view is that to work effectively in nanotechnology you need two things. First comes the in-depth knowledge and problem-solving ability you get from studying a traditional discipline, whether that’s a pure science, like physics or chemistry, or an applied science, like materials science, chemical engineering or electrical engineering. But then you need to learn the languages of many other disciplines, because no physicist or chemist, no matter how talented at their own subject, will be able to make much of a contribution in this area unless they are able to collaborate effectively with people with very different sets of skills. That’s why to teach our course we’ve assembled a team from many different departments and backgrounds; physicists, chemists, materials scientists, electrical engineers and molecular biologists are all represented.

    Of course, the nature of nanotechnology is such that there’s no universally accepted curriculum, no huge textbook of the kind that beginning physicists and chemists are used to. The speed of development of the subject is such that we’ve got to make much more use of the primary research literature than one would for, say, a Masters course in physics. And because nanotechnology should be about practice and commercialisation as well as theory we also refer to the patent literature, something that’s, I think, pretty uncommon in academia.

    In terms of choice of subjects, we’re trying to find a balance between the hard nanotechnology of lithography and molecular beam epitaxy and the soft nanotechnology of self-assembly and bionanotechnology. The book of the course, “Nanoscale Science and Technology”, edited by my colleagues Rob Kelsall, Ian Hamley and Mark Geoghegan, will be published in January next year.

    What is this thing called nanotechnology? Part 1. The Nano-scale.

    Nanotechnology, of course, isn’t a single thing at all. That’s why debates about the subject often descend into mutual incomprehension, as different people use the same word to mean different things, whether it’s business types talking about fabric treatments, scientists talking about new microscopes, or posthumanists and futurists talking about universal assemblers. I’ve attempted to break the term up a little and separate out the different meanings of the word. I’ll soon put these nanotechnology definitions on my website, but I’m going to try out the draft definitions here first. First, the all-important issue of scale.

    Nanotechnologies get their name from a unit of length, the nanometre. A nanometre is one billionth of a metre, but let’s try to put this in context. We could call our everyday world the macroscale. This is the world in which we can manipulate things with our bare hands, and in rough terms it covers about a factor of a thousand. The biggest things I can move about are about half a metre big (if they’re not too dense), and my clumsy fingers can’t do very much with things smaller than half a millimetre.

    We’ve long had the tools to extend the range of human abilities to manipulate matter on smaller scales than this. Most important is the light microscope, which has opened up a new realm of matter – the microscale. Like the macroscale, this also embraces roughly another factor of a thousand in length scales. At the upper end, objects half a millimetre or so in size provide the link with the macroscale; still visible to the naked eye, handling them becomes much more convenient with the help of a simple microscope or even a magnifying glass. At the lower end, the wavelength of light itself, around half a micrometre, gives a lower limit on the size of objects which can be discriminated even with the most sophisticated laboratory light microscope.

    Below the microscale is the nanoscale. If we take as the upper limit of the nanoscale the half-micron or so that represents the smallest object that can be resolved in a light microscope, then another factor of one thousand takes us to half a nanometer. This is a very natural lower limit for the nanoscale, because it is a typical size for a small molecule. The nanoscale domain, then, in which nanotechnology operates, is one in which individual molecules are the building blocks of useful structures and devices.
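The three ranges described above can be laid out numerically; each spans roughly a factor of a thousand. The limits below are the rough, order-of-magnitude values used in the text, not precise boundaries.

```python
# The three "factor of a thousand" length-scale ranges described above,
# with upper and lower limits in metres (rough, order-of-magnitude values).
scales = {
    "macroscale": (0.5, 0.5e-3),     # bare hands: half a metre down to half a millimetre
    "microscale": (0.5e-3, 0.5e-6),  # light microscope: down to the wavelength of light
    "nanoscale": (0.5e-6, 0.5e-9),   # below optical resolution, down to a small molecule
}

for name, (upper, lower) in scales.items():
    print(name, round(upper / lower))  # each spans a factor of 1000
```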

    These definitions are by their nature arbitrary, and it’s not worth spending a lot of time debating precise limits on length scales. Some definitions – the US National Nanotechnology Initiative provides one example – use a smaller upper limit of 100 nm. There isn’t really any fundamental reason for choosing this number over any other one, except that this definition carries the authority of President Clinton, who of course is famous for the precision of his use of language. Some other definitions attempt to attach some more precise physical significance to this upper length limit on nanotechnology, by appealing to some length at which finite size effects, usually of quantum origin, become important. This is superficially appealing but unattractive on closer examination, because the relevant length-scale on which these finite size effects become important differs substantially according to the phenomenon being looked at. And this line of reasoning leads to an absurd, but commonly held, view that the nanoscale is simply the length-scale on which quantum effects become important. This is a very unhelpful definition when one thinks about it for longer than a second or two; there are plenty of macroscopic phenomena that you can’t understand without invoking quantum mechanics. Magnetism and the electronic behaviour of semiconductors are two everyday examples. And equally, many interesting nanoscale phenomena, notably virtually all of cell biology, don’t really involve quantum mechanical effects in any direct way.

    So I’m going to stick to these twin definitions – it’s the nanoscale if it’s too small to resolve in an ordinary light microscope, and if it’s bigger than your typical small molecule.

    None but the brave deserve the (nano)fair

    I’m in St Gallen, Switzerland, in the unfamiliar environment (for an academic) of a nanotechnology trade fair. The commercialisation arm of our polymer research activities in the University of Sheffield, the Polymer Centre, is one of the 14 UK companies and organisations that are exhibiting as part of the official UK government stall at Nanofair 2004.

    It’s interesting to see who’s exhibiting. The majority of exhibitors are equipment manufacturers, which very much supports one piece of conventional wisdom about nanotechnology as a business: that the first people to make money from it will be the suppliers of the tools of the trade. Perhaps the second biggest category is the countries and regions trying to promote themselves as desirable locations for businesses to relocate to. Companies that actually have nanotechnology products for actual consumer markets are very much in the minority, though there are certainly a few interesting ones there.

    Alternative photovoltaics (dye-sensitised and/or polymer-based) are making a strong showing, helped by a lecture from Alan Heeger, largely about Konarka. This must be one of the major areas where incremental nanotechnology has the potential to make a disruptive change to the economy. A less predictable, but for me fascinating, stand was from a Swiss plastics injection moulding company called Weidmann. Injection moulding is the familiar (and very cheap) way in which many plastic items, like the little plastic toys that come in cereal boxes, are made. Weidmann are demonstrating an injection moulded part in an ordinary commodity polymer with a controlled surface topography at the level of 5-10 nanometres. To me it is stunning that such a cheap and common processing technology can be adapted (certainly with some very clever engineering) to produce nanostructured parts in this way. Early applications will be to parts with optical effects like holograms directly printed in, and, more immediately, microfluidic reactors for diagnostics and testing.

    The UK has a big presence here, and our stand has some very interesting exhibitors on it. I’ll single out Nanomagnetics, which uses a naturally occurring protein to template the manufacture of magnetic nanoparticles with very precisely controlled sizes. These nanoparticles are then used either for high density data storage applications or for water purification, as removable forward osmosis agents. This is a great example of exploiting biological nanotechnology that very much accords with the philosophy outlined in my book Soft Machines; I should declare an interest, in that I’ve just joined the scientific advisory board of this company.

    The UK government is certainly working hard to promote the interests of its nascent nanotechnology industry. Our stall is full of well-dressed and suave diplomats and civil servants. However, one of the small business exhibitors was muttering a little that if only they were willing to spend the money directly supporting the companies with no-strings contracts, as the US government is doing with companies like Nanosys, then maybe the UK’s prospects would be even brighter.

    If biology is so smart, how come it never invented the mobile phone/iPod/Ford Fiesta?

    Chris Phoenix, over on the CRN blog, in reply to a comment of mine, asked an interesting question to which I replied at such length that I feel moved to recycle my answer here. His question was: given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesized with wet chemistry, why is it that life never discovered graphite? From this he questioned the degree to which biology could be claimed to have found optimum or near-optimum solutions to the problems of engineering at the nanoscale. I answered his question (or at least commented on it) in three parts.

    Firstly, I don’t think that biology has solved all the problems it faces optimally – it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one is, the more optimal the solutions are. This is obvious when one thinks about it; the problems of making nanoscale machines were the first problems biology had to solve, it has had the longest to solve them, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like the eye) biology has to coopt solutions that were evolved to solve some other problem. I would argue that many of the local maxima that evolution gets trapped in are actually near-optimal solutions of nanotechnology problems that have had to be sub-optimally adapted for larger-scale operation. As single molecule biophysics progresses and indicates just how efficient many biological nanomachines are, this view, I think, gets more compelling.

    Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of different properties – strength, stiffness, fracture toughness – as well as external factors such as difficulty of processing and cost (either in money for humans or in energy for biology), and achieve the best compromise set of properties to achieve fitness for purpose. So the question you should ask is: in what circumstances would the property of high strength be so valuable for an organism, particularly a nanoscale organism, that all other factors would be overruled? I can’t actually think of many, as organisms, particularly small ones, generally need toughness, resilience and self-healing properties rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

    Finally, don’t forget that strength isn’t really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask, not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison the linear polymers that biology uses when it needs high strength are actually much more forgiving, if you can work out how to get them aligned – it’s much easier to make a long polymer with no defects than it is to make a two- or three-dimensional structure with a similar degree of perfection.

    Lord of the Rings

    As light relief after the last rather dense post, here’s one of the sillier exchanges from Monday’s round-up of events at the British Association meeting:

    Quentin Cooper (compere of the event)
    – I noticed that one of the speakers described Drexler’s book “Engines of Creation” as the “Lord of the Rings” of nanotechnology, is that right?

    Me
    – No, Engines of Creation is “The Hobbit” of nanotechnology, it’s short, easy-to-read and everyone likes it. “Nanosystems” is “The Lord of the Rings”, it’s long, dense, half the world thinks it’s the best book ever written and the other half thinks it’s rubbish.

    Henry Gee (Nature magazine)
    – Are you sure it’s not the Silmarillion?

    Did Smalley deliver a killer blow to Drexlerian MNT?

    The most high profile opponent of Drexlerian nanotechnology (MNT) is certainly Richard Smalley; he’s a brilliant chemist who commands a great deal of attention because of his Nobel prize, and his polemics are certainly entertainingly written. He has a handy way with a soundbite, too, and his phrases “fat fingers” and “sticky fingers” have become a shorthand expression of the scientific case against MNT. On the other hand, as I discussed below in the context of the Betterhumans article, I don’t think that the now-famous exchange between Smalley and Drexler delivered the killer blow against MNT that sceptics were hoping for.

    For my part, I am one of those sceptics; I’m convinced that the MNT project as laid out in Nanosystems will be very much more difficult than many of its supporters think, and that other approaches will be more fruitful. The argument for this is covered in my book Soft Machines. But, on the other hand, I’m not convinced that a central part of Smalley’s argument is actually correct. In fact, Smalley’s line of reasoning if taken to its conclusion would imply not only that MNT was impossible, but that conventional chemistry is impossible too.

    The key concept is the idea of an energy hypersurface embedded in a many-dimensional hyperspace, the dimensions corresponding to the degrees of freedom of the participating atoms in the reaction. Smalley argues that this space is so vast that it would be impossible for a robot arm or arms to guide the reaction along the correct path from reactants to products. This seems plausible enough on first sight – until one pauses to ask, what in an ordinary chemical reaction guides the system through this complex space? The fact that ordinary chemistry works – one can put a collection of reactants in a flask, apply some heat, and remove the key products (hopefully this will be your desired product in a respectable yield, with maybe some unwanted products of side-reactions as well) – tells us that in many cases the topography of the hypersurface is actually rather simple. The initial state of the reaction corresponds to a deep free energy minimum, the product of each reaction corresponds to another, similarly deep minimum, and connecting these two wells is a valley; this leads over a saddle-point, like a mountain pass, that defines the transition state. A few side-valleys correspond to the side-reactions. Given this simple topography, the system doesn’t need a guide to find its way through the landscape; it is strongly constrained to take the valley route over the mountain pass, with the probability of it taking an excursion to climb a nearby mountain being negligible. This insight is the fundamental justification of the basic theory of reaction kinetics that every undergraduate chemist learns. Elementary textbooks feature graphs with energy on one axis, and a “reaction coordinate” along the other; the graph shows a low energy starting point, a low energy finishing point, and an energy barrier in between. 
This plot encapsulates the implicit, and almost always correct, assumption that out of all the myriad of possible paths the system could take through the hyperspace of configuration space the only one that matters is the easy way, along the valley and over the pass.
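The valley-over-pass picture can be made quantitative with the Boltzmann factor exp(−E/k_BT) that underlies Arrhenius kinetics: the rate of crossing a barrier falls exponentially with its height, so higher, off-valley routes are exponentially suppressed. The barrier heights in the sketch below are hypothetical round numbers, chosen only to show the scale of the suppression.

```python
import math

kB_T = 2.5  # kB*T at around room temperature, in kJ/mol

def boltzmann_weight(barrier_kj_per_mol):
    """Relative rate of a path over a barrier: exp(-E / kB*T) (Arrhenius form)."""
    return math.exp(-barrier_kj_per_mol / kB_T)

# Hypothetical barrier heights: the valley route over the pass vs. a direct
# climb over a nearby "mountain" in the energy landscape.
pass_route = boltzmann_weight(50.0)  # transition state, ~50 kJ/mol
mountain = boltzmann_weight(150.0)   # off-valley excursion, ~150 kJ/mol

print(pass_route / mountain)  # exp(40), about 2.4e17
```

A factor of around 10^17 between the two routes is why the probability of the system taking an excursion to climb a nearby mountain is, as described above, negligible.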

    So if in ordinary chemistry the system can navigate its own way through hyperspace, what’s different in the world of Drexlerian mechanochemistry? Constraining the system by having the reaction take place on a surface and spatially localising one of the reactants will simplify the structure of the hyperspace by reducing the number of degrees of freedom. This makes life easier, not harder – surfaces of any kind generally have a strong tendency to have a catalytic effect – but nonetheless, the same basic considerations apply. Given a sensible starting point and a sensible desired product (i.e. one defined by a free energy minimum) chemistry teaches us that it is quite reasonable to hope for a topographically straightforward path through the energy landscape. As Drexler says, if the pathway isn’t straightforward you need to choose different conditions or different targets. You don’t need an impossible number of fingers to guide the system through configuration space for the same reason that you don’t need fingers in conventional chemistry, the structure of configuration space itself guides the way the system searches it.

    This is a technical and rather abstract argument. As always, the real test is experimental. There’s some powerful food for thought in the report on a Royal Society Discussion Meeting, “Organizing atoms: manipulation of matter on the sub-10 nm scale”, which was published in the June 15 issue of Philosophical Transactions. Perhaps the most impressive example of a chemical reaction induced by physically moving individual reactants into place with an STM is the synthesis of biphenyl from two iodobenzene molecules (Hla et al., PRL 85, 2777 (2000)). To use their concluding words: “In conclusion, we have demonstrated that by employing the STM tip as an engineering tool on the atomic scale all steps of a chemical reaction can be induced: Chemical reactants can be prepared, brought together mechanically, and finally welded together chemically.” Two caveats need to be added: firstly, the work was done at very low temperature (20 K), presumably so the molecules didn’t run around too much as a result of Brownian motion. Secondly, the reaction wasn’t induced simply by putting the fragments together into physical proximity; the chemical state of the reactants had to be manipulated by the injection and withdrawal of electrons from the STM tip.

    Nonetheless, I rather suspect that this is exactly the sort of reaction that one would say wasn’t possible on the basis of Smalley’s argument.

    (Links in this post probably need subscriptions).

    Nanotechnology at the British Association

    The annual British Association meeting is the main science popularisation event in the UK, and not surprisingly nanotechnology got a fair bit of attention this year. The physics section ran a session on the subject yesterday morning. First up was Nigel Mason, who organised the physics part of the meeting this year and thus could give himself the best slot. He’s an atomic and molecular physicist who does scanning probe microscopy; his talk was a standard account of nanotechnology from the point of view of someone who’s got a scanning tunnelling microscope and knows how to use it: from Feynman via the IBM logo and quantum corrals to some of his own stuff about imaging DNA. Next was Mark Welland, who runs the Nanotechnology Centre at Cambridge University. Once he’d calmed down after the first talk, which had upset him in all sorts of ways, not least by talking about Drexler in what he thought was an insufficiently critical way, he talked about his group’s work on silicon carbide nanowires, which, if they do nothing else, have produced some of the prettiest images to come out of current nanoscience. Then it was my turn. As Mark Welland said, making his excuses for leaving early, “I know what you’re going to talk about because I’ve read your book”.

    Harry Kroto, Nobel Laureate for his co-discovery (with Richard Smalley) of buckminsterfullerene, was talking about nanotechnology in the chemistry section in the afternoon, but I didn’t get a chance to see it as I was roped into a rather tedious panel discussion about how the public perceives physicists. The final event for me was an appearance in a discussion event compered by the (excellent) BBC radio science journalist Quentin Cooper. This brought me the chance to share a platform with a poet, a palaeontologist, and the government’s chief scientific advisor, Sir David King. We also got some free beer, though to Sir David’s horror this was bottles of (American) Budweiser rather than pints of bitter. So I got a final chance to make my nanotechnology pitch, though Quentin Cooper was rather more interested in trying to prise an unwise comment from the famously undiplomatic King. He happily confirmed that he still thought that global warming was a bigger threat than terrorism, he didn’t deny the suggestion that he’d received a rebuke from 10 Downing Street for saying this in the USA, where it’s language not thought suitable for a servant of the government of a loyal ally, and he was smilingly gnomic about who he wanted to win the US presidential election.

    The BA is all about publicity, so it’s worth asking how much interest this attention to nanotechnology stirred up. For my part, I think my talk got a good reaction, I signed the first copy of my book for a stranger, I did an interview for Radio New Zealand, and got the approval and interest of one of the BBC’s best science journalists. And I now know who’s reviewing my book for Nature (Mark Welland). But I don’t think the subject really caught fire. Maybe a rather febrile summer of nanotechnology coverage has left media people starting to be a tiny bit bored with the word.