What biology does and doesn’t prove about nanotechnology

The recent comments by Alec Broers in his Reith Lecture about the feasibility or otherwise of the Drexlerian flavour of molecular nanotechnology have sparked off a debate that seems to have picked up some of the character of the British general election campaign (Liar! Vampire!! Drunkard!!!). See here for Howard Lovy’s take, here for TNTlog’s view. All of this prompted an intervention by Drexler himself (channelled through Howard Lovy), which was treated with less than total respect by TNTlog. Meanwhile, Howard Lovy visited Soft Machines to tell us that “when it comes to being blatantly political, you scientists are just as clumsy about it as any corrupt city politician I’ve covered in my career. The only difference is that you (I don’t mean you, personally) can sound incredibly smart while you lie and distort to get your way.” Time, I think (as a politician would say), to return to the issues.

Philip Moriarty, in his comment on Drexler’s letter, makes, as usual, some very important points about the practicalities of mechanosynthesis. Here I want to look at what I think is the strongest argument that supporters of radical nanotechnologies have, the argument that the very existence of the amazing contrivances of cell biology shows us that radical nanotechnology must be possible. I’ve written on this theme often before (for example here), but it’s so important it’s worth returning to.

In Drexler’s own words, in this essay for the AAAS, “Biology shows that molecular machines can exist, can be programmed with genetic data, and can build more molecular machines”. This argument is clearly absolutely correct, and Drexler deserves credit for highlighting this important idea in his book Engines of Creation. But we need to pursue the argument a little bit further than the proponents of molecular manufacturing generally take it.

Cell biology shows us that it is possible to make sophisticated molecular machines that can operate, in some circumstances, with atomic precision, and which can replicate themselves. What it does not show is that the approach to making molecular machines outlined in Drexler’s book Nanosystems, an approach that Drexler describes in that book as “the principles of mechanical engineering applied to chemistry”, will work. The crucial point is that the molecular machines of biology work on very different principles to those used by our macroscopic products of mechanical engineering. This is much clearer now than it was when Engines of Creation was written, because in the ensuing 20 years there’s been spectacular progress in structural biology and single molecule biophysics; this progress has unravelled the operating details of many biological molecular machines and has allowed us to understand much more deeply the design philosophy that underlies them. I’ve tried to explain this design philosophy in my book Soft Machines; for a much more technical account, with full mathematical and physical details, the excellent textbook by Phil Nelson, Biological Physics: Energy, Information, Life, is the place to go.

Where Drexler takes the argument next is to say that, if nature can achieve such marvellous devices using materials whose properties, constrained by the accidents of evolution, are far from optimal, and using essentially random design principles, then how much more effective our synthetic nano-machines will be. We can use hard, stiff materials like diamond, rather than the soft, wet and jelly-like components of biology, and we can use the rationally designed products of a mechanical engineering approach rather than the ramshackle and jury-rigged contrivances of biology. In Drexler’s own words, we can expect “molecular machine systems that are as far from the biological model as a jet aircraft is from a bird, or a telescope is from an eye”.

There’s something wrong with this argument, though. The shortcomings of biological design are very obvious at the macroscopic scale – 747s are more effective at flying than crows, and, like many over-40-year-olds, I can personally testify to the inadequacy of the tendon arrangements in the knee joint. But the smaller we go in biology, the better things seem to work. My favourite example of this is ATP-synthase. This remarkable nanoscale machine is an energy conversion device that is shared by living creatures as different as bacteria and elephants (and indeed, ourselves). It converts the chemical energy of a hydrogen ion gradient, first into mechanical energy of rotation, and then into chemical energy again, in the form of the energy-carrying molecule ATP, and it does this with an efficiency approaching 100%.
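To put a rough number on that efficiency claim, here is a back-of-the-envelope check in Python. The torque and free-energy figures are values I recall from the single-molecule literature on the F1 motor; treat them as illustrative assumptions rather than definitive data.

```python
# Back-of-the-envelope check of ATP-synthase efficiency (illustrative values only).
# The F1 motor makes one 120-degree step per ATP; single-molecule experiments
# report a torque of roughly 40 pN nm.  Comparing the mechanical work per step
# with the free energy of ATP hydrolysis under cellular conditions gives an
# efficiency close to one.

import math

torque = 40.0                    # assumed motor torque, pN nm (literature-ish value)
step_angle = 2 * math.pi / 3     # 120 degrees per ATP, in radians
delta_G_ATP = 90.0               # free energy of ATP hydrolysis in the cell,
                                 # roughly 80-100 pN nm; 90 taken as a round number

work_per_step = torque * step_angle     # mechanical work per ATP, pN nm
efficiency = work_per_step / delta_G_ATP

print(f"work per step   = {work_per_step:.0f} pN nm")
print(f"ATP free energy = {delta_G_ATP:.0f} pN nm")
print(f"efficiency      = {efficiency:.0%}")   # comes out in the 90-100% range
```

With any sensible choice of numbers the mechanical work done per step is comparable to the free energy released per ATP, which is what “an efficiency approaching 100%” means in practice.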

Why does biology work so well at the nanoscale? I think the reason is related to the by now well-known fact that physics looks very different on the nanoscale than it does at the macroscale. In the environment we live in – with temperatures around 300 K and a lot of water around – what dominates the physics of the nanoscale is ubiquitous Brownian motion (the continuous jostling of everything by thermal motion), strong surface forces (which tend to make most things stick together), and, in water, the complete dominance of viscosity over inertia, making water behave at the nanoscale in the way molasses behaves on human scales. The kind of nanotechnology biology uses exploits these peculiarly nanoscale phenomena. It uses design principles which are completely unknown in the macroscopic world of mechanical engineering. These principles include self-assembly, in which strong surface forces and Brownian motion combine to allow complex structures to form spontaneously from their component parts. The lack of stiffness of biological molecules, and the importance of Brownian motion in continuously buffeting them, is exploited in the principle of molecular shape change as a mechanism for doing mechanical work in the molecular motors that make our muscles function. These biological nanomachines are exquisitely optimised for the nanoscale world in which they operate.
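To make the “water behaves like molasses” point concrete, here is a quick Reynolds-number estimate in Python for a nanoscale object moving at a speed typical of a molecular motor. The specific numbers are my own illustrative choices, but any sensible values lead to the same conclusion.

```python
# Estimate the Reynolds number Re = rho * v * L / eta for an object of size L
# moving at speed v through water.  Re compares inertial to viscous forces;
# when Re << 1 viscosity completely dominates and inertia is irrelevant.

rho = 1000.0      # density of water, kg/m^3
eta = 1.0e-3      # viscosity of water, Pa s
L = 10e-9         # size of a protein-scale machine, ~10 nm
v = 1e-6          # speed, ~1 micron per second (illustrative figure for a molecular motor)

Re = rho * v * L / eta
print(f"Reynolds number at the nanoscale: {Re:.1e}")   # of order 1e-8

# For comparison, a swimming person (L ~ 1 m, v ~ 1 m/s) has Re of order a million.
Re_person = rho * 1.0 * 1.0 / eta
print(f"Swimming person, for comparison:  {Re_person:.1e}")
```

A Reynolds number of around 10⁻⁸ is the quantitative content of the molasses analogy: at this scale, the moment you stop pushing, motion stops.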

It’s important to be clear that I’m not accusing Drexler of failing to appreciate the importance of nanoscale phenomena like Brownian motion; they’re treated in some detail in Nanosystems. But the mechanical engineering approach to nanotechnology – the Nanosystems approach – treats these phenomena as problems to be engineered around. Biology doesn’t engineer around them, though; it’s found ways of exploiting them.

My view, then, is that the mechanical engineering approach to nanotechnology that underlies MNT is less likely to succeed than an approach that seeks to emulate the design principles of nature. MNT works against the grain of nanoscale physics, while the biological approach – the soft, wet and flexible approach – works with the grain of the way the nanoscale works. Appealing to biology to prove the possibility of radical nanotechnology of some kind is absolutely legitimate, but the logic of this argument doesn’t lead to MNT.

Radio Nanotechnology

The BBC’s spoken word radio station, Radio 4, is giving nanotechnology full billing at the moment (perhaps they are getting bored with the election). In addition to last night’s Reith Lecture, given by Lord Broers, the consumer programme You and Yours covered the subject in some depth this lunchtime (listen to it here).

The piece included a long interview with Ann Dowling, chair of the Royal Society report; a walk round the Science Museum exhibition, Nanotechnology: small science, big deal; and an interview with Erik van der Linden from Wageningen Agricultural University in the Netherlands, talking about nanotechnology in food, mostly in the context of converting plant protein into meat substitutes and the encapsulation of nutraceuticals and flavours. There was, of course, a spokesman from Nanotex telling us all about stain-resistant trousers.

There was no mention at all of molecular manufacturing. I rather suspect that this will be interpreted in some quarters as a conspiracy of silence.

Politics and the National Nanotechnology Initiative

The view that the nanobusiness and nanoscience establishment has subverted the originally intended purpose of the USA’s National Nanotechnology Initiative has become received wisdom amongst supporters of the Drexlerian vision of MNT. According to this reading of nanotechnology politics, any element of support for Drexler’s vision for radical nanotechnology has been stripped out of the NNI to make it safe for mundane near-term applications of incremental nanotechnology like stain-resistant fabric. This position is succinctly expressed in this editorial in the New Atlantis, which makes the claim that the legislators who supported the NNI did so in the belief that it was the Drexlerian vision that they were endorsing.

A couple of points about this position worry me. Firstly, we should be clear that there is an important dividing line in the relationship between science and politics that any country should be very wary of crossing. In a democratic country, it’s absolutely right that the people’s elected representatives should have the final say about what areas of science and technology are prioritised for public spending, and indeed what areas of science are left unpursued. But we need to be very careful to make sure that this political oversight of science doesn’t spill over into ideological statements about the validity of particular scientific positions. If supporters of MNT were to argue that the government should overrule, on essentially ideological grounds, the judgement of the scientific community about which approach to radical nanotechnology is most likely to work, then I’d suggest they recall the tragic and unedifying history of similar interventions in the past. Biology in the Soviet Union was set back for a generation by Lysenko, who, unable to persuade his colleagues of the validity of his theory of genetics, appealed directly to Stalin. Such perversions aren’t restricted to totalitarian states; Edward Teller used his high-level political connections to impose his vision of the x-ray laser on the USA’s defense research establishment, in the face of almost universal scepticism from other physicists. The physicists were right, and the program was abandoned, but not before the waste of many billions of dollars.

But there’s a more immediate criticism of the theory that the NNI has been hijacked by nanopants. This is that it’s not right, even from the point of view of supporters of Drexler. The muddle and inconsistency come across most clearly on the Center for Responsible Nanotechnology’s blog. While this entry strongly endorses the New Atlantis line, this entry only a few weeks earlier expresses the opinion that the most likely route to radical nanotechnology will come through wet, soft and biomimetic approaches. Of course, I agree with this (though my vision of what radical nanotechnology will look like is very different from that of supporters of MNT); it is the position I take in my book Soft Machines; it is also, of course, an approach recommended by Drexler himself. Looking across at the USA, I see some great and innovative science being done along these lines. Just look at the work of Ned Seeman, Chad Mirkin, Angela Belcher or Carlo Montemagno, to take four examples that come immediately to mind. Who is funding this kind of work? It certainly isn’t the Foresight Institute – no, it’s all those government agencies that make up the much castigated National Nanotechnology Initiative.

Of course, supporters of MNT will say that, although this work may be moving in the direction that they think will lead to MNT, it isn’t being done with that goal explicitly stated. To this, I would simply ask whether it isn’t a tiny bit arrogant of the MNT visionaries to think that they are in a better position to predict the outcome of these lines of inquiry than the people who are actually doing the research.

Whenever science funding is allocated, there is a real tension between the short term and the long term, and this is a legitimate bone of contention between politicians and legislators, who want to see immediate results in terms of money and jobs for the people they represent, and scientists and technologists with longer-term goals. If MNT supporters were simply to argue that the emphasis of the NNI should be moved away from incremental applications towards longer-term, more speculative research, then they’d find a lot of common cause with many nanoscientists. But it doesn’t do anyone any good to confuse these truly difficult issues with elaborate conspiracy theories.

Politics in the UK

Some readers may have noticed that we are in the middle of an election campaign here in the UK. Unsurprisingly, science and technology have barely been mentioned at all by any of the parties, and I don’t suppose many people will be basing their voting decisions on science policy. It’s nonetheless worth commenting on the parties’ plans for science and technology.

I discussed the Labour Party’s plans for science for the next three years here – this foresees significant real-terms increases in science funding. The Conservative Party has promised to “at least match the current administration’s spending on science, innovation and R&D”. However, the Conservatives’ spending plans are predicated on finding £35 billion in “efficiency savings”, of which £500 million is going to come from reforming the Department of Trade and Industry’s business support programmes. I believe the £200 million support for nanotechnology discussed here falls under this heading, so I think the status of these programmes in a Conservative administration would be far from assured. The Liberal Democrats take a simpler view of the DTI – they just plan to abolish it, and move science to the Department for Education.

So, on fundamental science support, there seems to be a remarkable degree of consensus, with no-one seeking to roll back the substantial increases in science spending that the Labour Party has delivered. The arguments really are on the margins, about the role of government in promoting applied and near-market research in collaboration with industry. I have many very serious misgivings about the way in which the DTI has handled its support for micro- and nano- technology. In principle, though, I do think it is essential that the UK government does provide such support to businesses, if only because all other governments around the world (including, indeed perhaps especially, the USA) practise exactly this sort of interventionist policy.

Paint-on lasers and land-mine detection

One of the many interesting features of semiconducting polymers is that they can be made to lase. By creating a population of excited electronic states, a situation can be achieved whereby light is amplified by the process of stimulated emission, giving rise to an intense beam of coherent light. Because semiconducting polymers can be laid down in a thin film from a simple solution, it’s tempting to dream of lasers that are fabricated by simple and cheap processes, like printing, or are simply painted on to a surface. The problem with this is that, so far (and as far as I know), the necessary population of excited states has only been achieved by illuminating the material with another laser. This optical pumping, as it is called, is obviously less useful than the situation where the laser can be pumped electrically, as is the case in the kind of inorganic semiconductor lasers that are now everyday items in CD and DVD players. But a paper in this week’s Nature (abstract free, subscription required for full article) demonstrates another neat use for lasing action in semiconducting polymers – as an ultrasensitive detector for explosives. See also this press release.

The device relies on the fact that lasing is a highly non-linear effect: a material that influences only a few molecules at the surface of an optically pumped polymer laser can still kill the lasing action entirely. The molecule being used in this work, done at MIT by Timothy Swager’s group, is particularly sensitive to the explosive TNT. The device can therefore work as a sensor sensitive enough – and the sensitivity needs to be in the parts-per-billion range – to detect the tiny traces of TNT vapour that a buried land-mine would emit.
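The sharpness of the lasing threshold is what gives the sensor its leverage, and a deliberately crude toy model is enough to show why. The sketch below is my own caricature, not the model used in the Nature paper: output above threshold grows linearly with pump power, so a small quenching-induced rise in the effective threshold can switch the laser off almost completely.

```python
# Toy illustration of why a lasing sensor is so sensitive.
# Caricature: below threshold the output is just a weak fluorescence background;
# above threshold it grows linearly with (pump - threshold).  This ignores all
# the real photophysics, but captures the non-linearity that matters.

def output(pump, threshold, background=1e-4):
    """Emitted intensity in arbitrary units for a given pump and threshold."""
    return background + max(0.0, pump - threshold)

pump = 1.05                                # pump the device 5% above its clean threshold of 1.0
clean = output(pump, threshold=1.00)
quenched = output(pump, threshold=1.10)    # adsorbed TNT quenches a few chromophores,
                                           # nudging the effective threshold up by 10%

print(f"clean film:   {clean:.4f}")
print(f"with analyte: {quenched:.4f}")
print(f"signal drops by a factor of about {clean / quenched:.0f}")
```

In this caricature a 10% shift in threshold produces a drop in output of several hundredfold; a simple fluorescence measurement, by contrast, would change by only a few per cent.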

This work, rather unsurprisingly, is supported by MIT’s Institute for Soldier Nanotechnologies. The development of these ultrasensitive sensors for the detection of chemicals in the environment forms a big part of the research effort in evolutionary nanotechnology. On the science side, this is driven by the fact that detecting the effects of molecules interacting with surfaces is intrinsically a lot easier in systems with nanoscaled components, simply because the surface in a nanostructured device has a great deal more influence on its properties than it would in a bulk material. On the demand side, the needs of defense and homeland security are, now more than ever, setting the research agenda in the USA.
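The surface-to-volume argument is easy to quantify: for a cube of side L the ratio scales as 1/L, so shrinking a structure from a millimetre to ten nanometres increases the relative importance of its surface a hundred-thousand-fold. A trivial sketch (mine, purely to illustrate the scaling):

```python
# Surface-to-volume ratio of a cube of side L: S/V = 6*L^2 / L^3 = 6/L.
# The smaller the structure, the more its behaviour is governed by its surface,
# which is exactly what a chemical sensor wants.

for L in (1e-3, 1e-6, 100e-9, 10e-9):   # 1 mm, 1 micron, 100 nm, 10 nm
    ratio = 6.0 / L
    print(f"L = {L * 1e9:>10.0f} nm   S/V = {ratio:.1e} per metre")
```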

Nobel Laureates Against Nanotechnology

This small but distinguished organisation has gained another two members. The theoretical condensed matter physicist Robert Laughlin, in his new book A Different Universe: reinventing physics from the bottom down, has a rather scathing assessment of nanotechnology, with which Philip Anderson (himself a Nobel Laureate and a giant of theoretical physics), reviewing the book in Nature (subscription required), concurs. Unlike Richard Smalley, Laughlin directs his criticism at the academic version of nanotechnology rather than the Drexlerian version, but adherents of the latter shouldn’t feel too smug, because Laughlin’s criticism applies with even more force to their vision. He blames the seductive power of reductionist belief for the delusion: “The idea that nanoscale objects ought to be controllable is so compelling it blinds a person to the overwhelming evidence that they cannot be”.

Nanotechnologists aren’t the only people singled out for Laughlin’s scorn. Other targets include quantum computing, string theory (“the tragic consequence of an obsolete belief system”) and most of modern biology (“an endless and unimaginably expensive quagmire of bad experiments”). But underneath all the iconoclasm and attitude (and personally I blame Richard Feynman for making all American theoretical physicists want to come across like rock stars) is a very serious message.

Laughlin’s argument is that reductionism should be superseded as the ruling ideology of science by the idea of emergence. To quote Anderson: “The central theme of the book is the triumph of emergence over reductionism: that large objects such as ourselves are the product of principles of organization and of collective behaviour that cannot in any meaningful sense be reduced to the behaviour of our elementary constituents.” The origin of this idea is Anderson himself, in a widely quoted article from 1972 – More is Different. In this view, the idea that physics can find a “Theory of Everything” is fundamentally wrong-headed. Chemistry isn’t simply the application of quantum mechanics, and biology is not simply reducible to chemistry; the organisational principles that underlie, say, the laws of genetics are just as important as the properties of the things being organised.

Anderson’s views on emergence aren’t as widely known as they should be, in a world dominated by popular science books on string theory and “the search for the God particle”. But they have been influential; an intervention by Anderson is credited or blamed by many people for killing off the Superconducting Supercollider project, and he is one of the founding fathers of the field of complexity. Laughlin explicitly acknowledges his debt to Anderson, but he holds to a particularly strong version of emergence; it isn’t just that there are difficulties in practice in deriving higher-level laws of organisation from the laws describing the interactions of their parts. Because the organisational principles themselves are more important than the detailed nature of the interactions between the things being organised, the reductionist program is wrong in principle, and there’s no sense in which the laws of quantum electrodynamics are more fundamental than the laws of genetics (in fact, Laughlin argues on the basis of the strong analogies between QED and condensed matter field theory that QED itself is probably emergent). To my (philosophically untrained) eye, this seems to put Laughlin’s position quite close to that of the philosopher of science Nancy Cartwright. There’s some irony in this, because Cartwright’s book The Dappled World was bitterly criticised by Anderson himself.

This takes us a long way from nanoscience and nanotechnology. It’s not that Laughlin believes the field is unimportant; in fact he describes the place where nanoscale physics and biology meet as the current frontier of science. But it’s a place that will only be understood in terms of emergent properties. Some of these, like self-assembly, are starting to be understood, but many others are not. What is clear is that the reductionist approach of trying to impose simplicity where it doesn’t exist in nature simply won’t work.

Nanotechnology and the developing world

There’s a rather sceptical commentary from Howard Lovy about a BBC report on a study from Peter Singer and coworkers. At the centre of the report is a list of areas in which the authors feel that nanotechnology can make positive contributions to the developing world. Howard’s piece attracted some very sceptical comments from Jim Thomas, of the ETC Group. Jim is very suspicious of high-tech “solutions” to the problems of the developing world which don’t take account of local cultures and conditions. In particular, he sees the role of multinational companies as being particularly problematic, especially with regard to issues of ownership, control and intellectual property.

I see the problem of multinational companies in rather different terms. To take a concrete example, I’d cited the case of insecticide-treated mosquito nets for the control of malaria as a place where nanoscale technology could make a direct impact (and Jim did seem to agree, with some reservations, that this could, in some circumstances, be an appropriate solution). The technical problem with insecticide-treated mosquito nets is that the layer of active material isn’t very robustly attached, so the effectiveness of the nets falls away too rapidly with time, and even more rapidly when the nets are washed. One solution is to use micro- or nano-encapsulation of the insecticide to achieve long-lasting controlled release. The necessary technology to do this is being developed in agrochemical multinationals. The problem, though, is that their R&D efforts are steered by the monetary size of the markets they project. They’d much rather develop termite defenses for wealthy suburbanites in Florida than mosquito nets. The problem, then, isn’t that these multinationals will impose technical fixes on the developing world; it’s that they’ll just ignore the developing world entirely, and potentially valuable technologies simply won’t reach the places where they could do some good.

To overcome this market failure needs intervention from governments, foundations and NGOs, as well as some active and informed technology brokering. Looking at it in this light, it seems to me that the Singer paper is a useful contribution.

How are we doing?

Howard Lovy’s Nanobot draws attention to an interesting piece in SciDevNet discussing bibliometric measures of the volume and impact of nanotechnology research in various parts of the world. This kind of measurement – in which databases are used to count the number of papers published and the number of times such papers are cited by other papers – is currently very popular among governments attempting to assess whether the investments they make in science are worthwhile. I was shown a similar set of data about the UK, commissioned by the Engineering and Physical Sciences Research Council, at a meeting last week. The attractions of this kind of analysis are obvious: it is relatively easily commissioned and done, and it yields results that can be plotted in plausible and scientific-looking graphs.

The drawbacks perhaps are less obvious, but are rather serious. How do you tell what papers are actually about nanotechnology, given the difficulties of defining the subject? The obvious thing to do is to search for papers with “nano” in the title or abstract somewhere – this is what the body charged with evaluating the USA’s National Nanotechnology Initiative have done. What’s wrong with this is that many of the best papers on nanotechnology simply don’t feel the need to include the nano- word in their title. Why should they? The title tells us what the paper is about, which is generally a much more restricted and specific subject than this catch-all word. I’ve been looking up papers on single molecule electronics today. I’d have thought that everyone would agree that the business of trying to measure the electrical properties of single molecules, one at a time, and wiring them up to make ultra-miniaturised electronic devices, was as hardcore as nanotechnology comes. But virtually none of the crucial papers on the subject over the last five years would have shown up on such a search.
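To see how blunt the keyword approach is, here is a minimal sketch of the sort of counting such studies do. The paper titles are invented examples, not real records, but they illustrate why a title-and-abstract search for “nano” misses work that is unambiguously nanotechnology.

```python
# A caricature of a bibliometric "nanotechnology" count: flag any record whose
# title or abstract contains the string "nano".  The records below are made up
# purely to show how single-molecule electronics papers slip through the net.

records = [
    {"title": "Nanostructured titania coatings for self-cleaning glass",
     "abstract": "We report nanoscale TiO2 films..."},
    {"title": "Conductance of a single-molecule junction formed with break junctions",
     "abstract": "We measure electron transport through individual molecules wired between gold electrodes."},
    {"title": "Gate-controlled transport in a single-C60 transistor",
     "abstract": "A three-terminal device containing one fullerene molecule is studied."},
]

def counts_as_nano(record):
    text = (record["title"] + " " + record["abstract"]).lower()
    return "nano" in text

flagged = [r["title"] for r in records if counts_as_nano(r)]
missed = [r["title"] for r in records if not counts_as_nano(r)]

print("counted as nanotechnology:", flagged)
print("missed entirely:", missed)
```

Only the first, least interesting record gets counted; the two single-molecule electronics papers, which are as “nano” as research gets, disappear from the statistics altogether.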

The big picture these studies paint does ring true: the majority of research in nanoscience and nanotechnology is done outside the USA, and this kind of research in China has been growing exponentially in both volume and impact in recent years. But we shouldn’t take the numbers too seriously; if we do, it’s only a matter of time before some science administrator realises that the road to national nanotechnology success is simply to order all the condensed matter physicists, chemists and materials scientists to stick “nano-” somewhere in the titles of all their papers.

The BBC’s Reith Lectures cover nanotechnology

Every year the BBC broadcasts a series of radio lectures on some rather serious subject, given by an appropriately weighty public intellectual. This year’s series is called “The triumph of technology”, and the fourth lecture (to be broadcast at 8 pm on the 27th April) is devoted to nanotechnology and nanoscience. The lecturer is Lord Broers, who certainly qualifies as a prominent member of that class that the British call the Great and Good. He has recently stepped down as Vice-Chancellor of Cambridge University, he is President of the Royal Academy of Engineering, and he is undoubtedly an ornament to a great number of important committees. But what’s interesting is that he does describe himself as a nanotechnologist. His early academic work was on scanning electron microscopy and e-beam lithography, and before returning to academia he did R&D for IBM.

The introductory lecture – Technology will Determine the Future of the Human Race – has already been broadcast; you can read the text or download an MP3 from the BBC website. This first lecture is rather general, so it will be interesting to see if he develops any of his themes in more unexpected directions.

Disentangling thin polymer films

Many of the most characteristic properties of polymer materials like plastics come from the fact that their long chain molecules get tangled up. Entanglements between different polymer chains behave like knots, which make a polymer liquid behave like a solid over quite perceptible time scales, just like silly putty. The results of a new experiment show that when you make the polymer film very thin – thinner than an individual polymer molecule – the chains become less entangled with each other, with significant effects on their mechanical properties.
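To give a sense of the length scale at which “thinner than an individual polymer molecule” kicks in, here is a rough estimate of the size of an unperturbed polystyrene coil, using the textbook scaling of the radius of gyration with the square root of molecular weight. The prefactor is a value I’m quoting from memory, so treat it as approximate; the point is simply that coil sizes and the film thicknesses in question are both in the tens of nanometres.

```python
# Rough size of an ideal polymer coil: the radius of gyration scales as the
# square root of molecular weight.  For polystyrene a commonly quoted rule of
# thumb is R_g ~ 0.028 * sqrt(M) nm (M in g/mol); treat the prefactor as
# approximate.  Once the film is thinner than about 2 R_g the chains must
# flatten, and it is in this regime that the entanglement density drops.

import math

def rg_polystyrene_nm(molecular_weight):
    return 0.028 * math.sqrt(molecular_weight)

for M in (100_000, 500_000, 1_000_000):
    print(f"M = {M:>9,d} g/mol   R_g ~ {rg_polystyrene_nm(M):.0f} nm")
```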

The experiments are published in this week’s Physical Review Letters; I’m a co-author, but the main credit lies with my colleagues Lun Si, Mike Massa and Kari Dalnoki-Veress at McMaster University, Canada. The abstract is here, and you can download the full paper as a PDF (this paper is copyright the American Physical Society and is available here under the author rights policy of the APS).

This is the latest in a whole series of discoveries of ways in which the properties of polymer films dramatically change when their thicknesses fall towards 10 nm and below. Another example is the discovery that the glass transition temperature – the temperature at which a polymer like polystyrene changes from a glassy solid to a gooey liquid – dramatically decreases in thin films. So a material that would in the bulk be a rigid solid may, in a thin enough film, turn into a much less rigid, liquid-like layer (see this technical presentation for more details). Why does this matter? Well, one reason is that, as feature sizes in the microelectronics industry fall below 100 nm, the sharpness with which one can define a line in a thin film of a polymer resist could limit the perfection of the features one is making. So the fact that the mechanical properties of the polymer itself change, purely as a function of size, could lead to problems.