Re-reading Feynman – Part 2

In part 1 of this series I talked about the growing importance of Richard Feynman’s famous lecture There’s plenty of room at the bottom as a foundational document for nanotechnology of all flavours, and hinted at the tensions that arise as different groups claim Feynman’s vision as an endorsement of their own particular views. Here I want to go back to Feynman’s own words, to unpick exactly what his vision was and how it looks more than forty years on.

Feynman’s lecture actually covers several distinct topics related to miniaturisation. We can break it into the following themes:

Writing small

Feynman starts with the typically direct and compelling question “Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?” Simple arithmetic convinces us that this is possible in principle: a pixel size of 8 nm gives us enough resolution. So how in practice can it be done? Reading such small writing is no problem, and would have been possible even with the electron microscopy techniques available in 1959. Writing on this scale is more challenging, and Feynman threw out some ideas about using focused electron and ion beams. Although Feynman didn’t mention it, the basic work to enable this was already in progress at the time he was speaking. Cambridge was one of the places at which the scanning electron microscope was being developed (history here), and only a year or two later the first steps were being made in using focused beams to make tiny structures. The young graduate student who worked on this was the same Alec Broers who (now ennobled) recently attracted the wrath of Drexler. This was the beginning of the technique of electron-beam lithography, now the most widely used method of making nanoscale structures in industry and academia.
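That simple arithmetic is easy to reproduce. Here is a minimal sketch of Feynman’s estimate; the 0.2 mm halftone dot and the 0.25 nm atomic spacing are round, assumed values, not numbers from the lecture itself:

```python
# Feynman's "Encyclopaedia on a pinhead" arithmetic, as a quick check.
demagnification = 25_000     # Feynman's reduction factor
halftone_dot = 0.2e-3        # ~1/125 inch dot in a printed halftone, metres (assumed)
atom_spacing = 0.25e-9       # typical interatomic distance, metres (assumed)

reduced_dot = halftone_dot / demagnification          # dot size after reduction
atoms_across = reduced_dot / atom_spacing             # atoms spanning one dot
atoms_per_dot = 3.14159 * (atoms_across / 2) ** 2     # atoms in one dot's area

print(f"reduced halftone dot: {reduced_dot * 1e9:.0f} nm")   # ~8 nm
print(f"atoms across one dot: {atoms_across:.0f}")           # ~32
print(f"atoms per dot:        {atoms_per_dot:.0f}")          # ~800, Feynman's 'about 1000'
```

Even the finest dot of a printed halftone, reduced 25,000 times, still contains hundreds of atoms, which is why the scheme doesn’t violate any physical law.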

Better microscopes

Electron microscopes in 1959 couldn’t resolve features smaller than 1 nm. This is impressively small, but it was still not quite good enough to see individual atoms. Feynman knew that there were no fundamental reasons preventing the resolution of electron microscopes being improved by a factor of 100, and he identified the problem that needed to be overcome (the numerical aperture of the lenses). Feynman’s goal of obtaining sub-atomic resolution in electron microscopes has now been achieved, but for various rather interesting reasons this development has had less impact than he anticipated.
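The scale of the problem Feynman identified can be seen with a rough calculation. In the sketch below the electron wavelength follows from the accelerating voltage (ignoring relativistic corrections), and the achievable resolution is limited by lens aberrations; the spherical-aberration coefficient and the prefactor are assumed, textbook-typical values, not figures for any particular instrument:

```python
# Why electron-microscope resolution lags the electron wavelength so badly.
# Non-relativistic sketch; Cs and the 0.9 prefactor are assumed values.
from math import sqrt

h = 6.626e-34       # Planck constant, J s
m = 9.109e-31       # electron mass, kg
e = 1.602e-19       # electron charge, C

V = 100e3           # accelerating voltage, volts
Cs = 2e-3           # spherical aberration coefficient, metres (assumed)

wavelength = h / sqrt(2 * m * e * V)            # de Broglie wavelength
d_min = 0.9 * (Cs * wavelength**3) ** 0.25      # aberration-limited resolution

print(f"electron wavelength: {wavelength * 1e12:.1f} pm")        # a few picometres
print(f"aberration-limited resolution: {d_min * 1e9:.2f} nm")    # ~100x worse
```

The electron wavelength is tiny, but the poor quality of electron lenses throws away a factor of a hundred or so; that gap is exactly what Feynman was pointing at.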

Feynman, above all, saw microscopy with sub-atomic resolution as a direct way of solving the mysteries of biology. “It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the [DNA] chain; you will see the structure of the microsome”. But although microscopes are 100 times better, we still can’t directly sequence DNA microscopically. It turns out that the practical resolution isn’t limited by the instrument, but by the characteristics of biological molecules – particularly their tendency to get damaged by electron beams. This situation hasn’t been materially altered by the remarkable and exciting discovery of a whole new class of microscopy techniques with the potential to achieve atomic resolution – the scanning probe techniques like scanning tunneling microscopy and atomic force microscopy. Meanwhile many of the problems of structural biology have been solved, not by microscopy, but by x-ray diffraction.

Miniaturising the computer

The natural reaction of anyone under forty reading this section is shock, and that’s a measure of how far we’ve come since 1959. Feynman writes “I do know that computing machines are very large; they fill rooms” … younger readers need to be reminded that the time when a computer wasn’t a box on a desktop or a slab on a laptop is within living memory. In discussing the problems of making a computer powerful enough to solve a difficult problem like recognising a face, Feynman comments “there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing”. Now our transistors are made of silicon, but more importantly they aren’t discrete elements that need to be soldered together; they are patterned on a single piece of silicon as part of a planar integrated circuit. It’s the move to this new kind of manufacturing, based on a combination of lithographic patterning, etching and depositing very thin layers, that has permitted the extraordinary progress in the miniaturisation of computers.

Feynman asks “Why can’t we manufacture these small computers somewhat like we manufacture the big ones?” The question has been superseded, to some extent, by the discovery of this better way of doing things. That discovery was already in sight at the time Feynman was speaking; the two crucial patents for integrated circuits were filed by Jack Kilby and Robert Noyce early in 1959, but their significance didn’t become apparent for a few more years. The integrated circuit has been so effective that Feynman’s miniaturisation goal – “the circuits should be a few thousand angstroms across” – has already been met, not just in the laboratory, but in consumer goods costing a few hundred dollars apiece.
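A quick unit check shows how thoroughly that goal has been passed; the comparison below assumes the 90 nm process node that was shipping in consumer chips around the time of writing:

```python
# Feynman's target versus a mid-2000s CMOS process node.
feynman_target = 5000e-10   # "a few thousand angstroms", metres
node_2005 = 90e-9           # 90 nm node, typical of consumer chips c. 2005 (assumed)

print(f"Feynman's target: {feynman_target * 1e9:.0f} nm")
print(f"2005-era node:    {node_2005 * 1e9:.0f} nm")
print(f"already {feynman_target / node_2005:.0f}x smaller than the goal")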

So far, then, we can see that much of Feynman’s vision has actually been realised, though some things haven’t worked out the way he anticipated. In the next instalment of this series I’ll consider what he said about miniature machines and rearranging matter atom by atom. It’s here, of course, that the controversy over Feynman’s legacy becomes most pointed.

Which bits of nanotechnology does ETC now oppose?

Checking out the website of the anti-nanotechnology campaigning group ETC, I see that their position on nanotechnology seems to have subtly changed. A press release dated November 23, 2004 says “In 2002, ETC called for a moratorium on the commercialisation of new nano-scale materials until laboratory protocols and regulatory regimes are in place that take into account the special characteristics of these materials, and until they are shown to be safe”. But currently their website calls for a rather different moratorium: “The ETC group believes that a moratorium should be placed on research involving molecular self-assembly and self-replication.”

I wonder what they mean by this? If it is Drexlerian self-replicating nanobots they are talking about, then the nanobusiness and nanoscience communities will no doubt cheerfully agree with them. But the usual understanding of the term molecular self-assembly is that it refers to the propensity of natural and synthetic molecules like soaps, proteins and block copolymers to arrange themselves, under the influence of Brownian motion and programmed patterns of molecular stickiness, into well-defined nanostructures. This is certainly an important theme in nanoscience and technology today – there’s a chapter devoted to the subject in Soft Machines. As a principle that’s extensively exploited in biology, self-assembly exemplifies the powerful approach to nanotechnology that learns lessons from nature. But it’s difficult to see that it has any particularly sinister or dangerous overtones, and its use in technology isn’t at all novel. Every bar of soap or bottle of shampoo depends on self-assembly to give it its unique properties, and the thermoplastic elastomers and polyurethane foams that are used in the soles of many shoes and trainers actually have quite complex self-assembled nanostructures. So just what is ETC opposing here?

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer-term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary, subscription required for full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and Hurel Corp, a company founded by Shuler’s colleague Gregory Baxter.

Re-reading Feynman (part 1)

Every movement has its founding texts; for nanotechnology there’s general agreement that Richard Feynman’s lecture There’s plenty of room at the bottom is where the subject started, at least as a concept. The lecture is more than forty years old, but I sense that its perceived significance has been growing in recent years. Not least of the reasons for this is that, as the rift between the mainstream of academic and commercial nanoscience and technology and the supporters of Drexler has been growing, both sides, for different reasons, find it convenient to emphasise the foundational role of Richard Feynman. Drexler himself often refers to his vision of nanotechnology as the “Feynman vision”, thus explicitly claiming the endorsement of someone many regard as the greatest native-born American scientist of all time. For mainstream nanoscientists, on the other hand, increasing the prominence given to Feynman has the welcome side-effect of diminishing the influence of Drexler.

Many such founding documents easily slip into the category of papers that are “much-cited, but seldom read”, particularly when they were published in obscure publications that aren’t archived on the web. Feynman’s lecture is easily available, so there’s no excuse for this fate befalling it now. Nonetheless, one doesn’t often read very much about what Feynman actually said. This is a pity, not because his predictions of the future were flawless, nor because he presented a coherent plan that nanotechnologists today should be trying to follow. Feynman was a brilliant theoretical physicist observing science and technology as it was in 1959. It’s fascinating, as we try to grope towards an understanding of where technology might lead us in the next forty years, to look back at these predictions and suggestions. Some of what he predicted has already happened, to an extent that probably would have astonished him at the time. In other cases, things haven’t turned out the way he thought they would. We’ve seen some spectacular breakthroughs that were completely unanticipated. Finally, Feynman suggested some directions that as yet have not happened, and whose feasibility isn’t yet established. In my next post in this series, I’ll use the luxury of hindsight to look in detail at Plenty of Room at the Bottom, to ask just how well Feynman’s predictions and hunches have stood the test of time.

The Guardian Hay Festival

The UK’s major festival of literature takes place in the charming Welsh border town of Hay-on-Wye between 27 May and 5 June. The programme of the Guardian Hay Festival 2005 is dominated, as usual, by literary heavyweights like Kazuo Ishiguro, Ian McEwan, Dave Eggers and Jeanette Winterson. There’s room for a little bit of science, though, and I’ll be talking about my book Soft Machines at 10 am on Wednesday 1 June.

Thanks to TNTlog for highlighting this last week.

What biology does and doesn’t prove about nanotechnology

The recent comments by Alec Broers in his Reith Lecture about the feasibility or otherwise of the Drexlerian flavour of molecular nanotechnology have sparked off a debate that seems to have picked up some of the character of the British general election campaign (Liar! Vampire!! Drunkard!!!). See here for Howard Lovy’s take, and here for TNTlog’s view. All of this prompted an intervention by Drexler himself (channelled through Howard Lovy), which was treated with less than total respect by TNTlog. Meanwhile, Howard Lovy visited Soft Machines to tell us that “when it comes to being blatantly political, you scientists are just as clumsy about it as any corrupt city politician I’ve covered in my career. The only difference is that you (I don’t mean you, personally) can sound incredibly smart while you lie and distort to get your way.” Time, I think (as a politician would say), to return to the issues.

Philip Moriarty, in his comment on Drexler’s letter, makes, as usual, some very important points about the practicalities of mechanosynthesis. Here I want to look at what I think is the strongest argument that supporters of radical nanotechnologies have: the argument that the very existence of the amazing contrivances of cell biology shows us that radical nanotechnology must be possible. I’ve written on this theme often before (for example here), but it’s so important it’s worth returning to.

In Drexler’s own words, in this essay for the AAAS, “Biology shows that molecular machines can exist, can be programmed with genetic data, and can build more molecular machines”. This argument is absolutely correct, and Drexler deserves credit for highlighting this important idea in his book Engines of Creation. But we need to pursue the argument a little further than the proponents of molecular manufacturing generally take it.

Cell biology shows us that it is possible to make sophisticated molecular machines that can operate, in some circumstances, with atomic precision, and which can replicate themselves. What it does not show is that the approach to making molecular machines outlined in Drexler’s book Nanosystems, an approach that Drexler describes in that book as “the principles of mechanical engineering applied to chemistry”, will work. The crucial point is that the molecular machines of biology work on very different principles to those used by our macroscopic products of mechanical engineering. This is much clearer now than it was when Engines of Creation was written, because in the ensuing 20 years there’s been spectacular progress in structural biology and single molecule biophysics; this progress has unravelled the operating details of many biological molecular machines and has allowed us to understand much more deeply the design philosophy that underlies them. I’ve tried to explain this design philosophy in my book Soft Machines; for a much more technical account, with full mathematical and physical details, the excellent textbook by Phil Nelson, Biological Physics: Energy, Information, Life, is the place to go.

Where Drexler takes the argument next is to say that, if nature can achieve such marvellous devices using materials whose properties, constrained by the accidents of evolution, are far from optimal, and using an essentially random design process, then how much more effective our synthetic nano-machines will be. We can use hard, stiff materials like diamond, rather than the soft, wet and jelly-like components of biology, and we can use the rationally designed products of a mechanical engineering approach rather than the ramshackle and jury-rigged contrivances of biology. In Drexler’s own words, we can expect “molecular machine systems that are as far from the biological model as a jet aircraft is from a bird, or a telescope is from an eye”.

There’s something wrong with this argument, though. The shortcomings of biological design are very obvious at the macroscopic scale – 747s are more effective at flying than crows, and, like many people over forty, I can personally testify to the inadequacy of the tendon arrangements in the knee joint. But the smaller we go in biology, the better things seem to work. My favourite example of this is ATP-synthase. This remarkable nanoscale machine is an energy conversion device that is shared by living creatures as different as bacteria and elephants (and indeed, ourselves). It converts the chemical energy of a hydrogen ion gradient, first into the mechanical energy of rotation, and then into chemical energy again, in the form of the energy molecule ATP, and it does this with an efficiency approaching 100%.
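That efficiency claim can be sanity-checked with round textbook numbers. In the sketch below, the proton-motive force, the number of protons translocated per ATP and the free energy of ATP synthesis are all assumed, order-of-magnitude values; this crude chemiosmotic arithmetic gives something like 80%, and it is single-molecule measurements on the rotary motor itself that push the figure towards 100%:

```python
# Order-of-magnitude check on ATP-synthase efficiency.
# All inputs are assumed, textbook-typical values.
F = 96485              # Faraday constant, C/mol
pmf = 0.2              # proton-motive force, volts (~200 mV, assumed)
protons_per_atp = 3.3  # H+ translocated per ATP (assumed; depends on c-ring size)
dG_atp = 50e3          # free energy to make ATP in the cell, J/mol (assumed)

energy_in = protons_per_atp * F * pmf    # J per mole of ATP made
efficiency = dG_atp / energy_in

print(f"energy in:  {energy_in / 1e3:.0f} kJ/mol of ATP")
print(f"energy out: {dG_atp / 1e3:.0f} kJ/mol of ATP")
print(f"efficiency: {efficiency:.0%}")   # ~80% from these round numbers
```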

Why does biology work so well at the nanoscale? I think the reason is related to the by now well-known fact that physics looks very different on the nanoscale than it does at the macroscale. In the environment we live in – with temperatures around 300 K and a lot of water around – what dominates the physics of the nanoscale is ubiquitous Brownian motion (the continuous jostling of everything by thermal motion), strong surface forces (which tend to make most things stick together), and, in water, the complete dominance of viscosity over inertia, making water behave at the nanoscale in the way molasses behaves on human scales. The kind of nanotechnology biology uses exploits these peculiarly nanoscale phenomena. It uses design principles which are completely unknown in the macroscopic world of mechanical engineering. These principles include self-assembly, in which strong surface forces and Brownian motion combine to allow complex structures to form spontaneously from their component parts. The lack of stiffness of biological molecules, and the importance of Brownian motion in continuously buffeting them, is exploited in the principle of molecular shape change as a mechanism for doing mechanical work in the molecular motors that make our muscles function. These biological nanomachines are exquisitely optimised for the nanoscale world in which they operate.
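These claims translate directly into numbers. The sketch below compares the Reynolds number of a human swimmer with that of a nanoscale machine, and uses the Stokes–Einstein relation to estimate how briskly Brownian motion carries a 10 nm particle about; all the input values are illustrative assumptions:

```python
# Why inertia is irrelevant and Brownian motion dominant at the nanoscale.
# Illustrative, assumed values throughout.
from math import pi

eta = 1e-3     # viscosity of water, Pa s
rho = 1e3      # density of water, kg/m^3
kT = 4.1e-21   # thermal energy at ~300 K, J

# Reynolds number Re = rho * v * L / eta: inertia versus viscosity
for name, L, v in [("human swimmer", 1.0, 1.0),
                   ("10 nm machine", 10e-9, 1e-6)]:
    Re = rho * v * L / eta
    print(f"{name}: Re = {Re:.1e}")   # ~1e6 versus ~1e-8

# Stokes-Einstein diffusion coefficient for a 10 nm-radius particle
r = 10e-9
D = kT / (6 * pi * eta * r)
print(f"diffusion coefficient: {D:.1e} m^2/s")
print(f"typical distance diffused in 1 ms: {(2 * D * 1e-3) ** 0.5 * 1e6:.2f} microns")
```

A Reynolds number fourteen orders of magnitude below a swimmer’s is what “molasses” means here, and a particle that diffuses many times its own size every millisecond is being jostled far faster than any motor could drive it.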

It’s important to be clear that I’m not accusing Drexler of failing to appreciate the importance of nanoscale phenomena like Brownian motion; they’re treated in some detail in Nanosystems. But the mechanical engineering approach to nanotechnology – the Nanosystems approach – treats these phenomena as problems to be engineered around. Biology doesn’t engineer around them, though; it’s found ways of exploiting them.

My view, then, is that the mechanical engineering approach to nanotechnology that underlies MNT is less likely to succeed than an approach that seeks to emulate the design principles of nature. MNT works against the grain of nanoscale physics, while the biological approach – the soft, wet and flexible approach – works with the grain of the way the nanoscale works. Appealing to biology to prove the possibility of radical nanotechnology of some kind is absolutely legitimate, but the logic of this argument doesn’t lead to MNT.

Radio Nanotechnology

The BBC’s spoken word radio station, Radio 4, is giving nanotechnology full billing at the moment (perhaps they are getting bored with the election). In addition to last night’s Reith Lecture, given by Lord Broers, the consumer programme You and Yours covered the subject in some depth this lunchtime (listen to it here).

The piece included a long interview with Ann Dowling, chair of the Royal Society report; a walk round the Science Museum exhibition, Nanotechnology: small science, big deal; and an interview with Erik van der Linden from Wageningen Agricultural University in the Netherlands, talking about nanotechnology in food, mostly in the context of converting plant protein into meat substitutes and the encapsulation of nutraceuticals and flavours. There was, of course, a spokesman from Nanotex telling us all about stain-resistant trousers.

There was no mention at all of molecular manufacturing. I rather suspect that this will be interpreted in some quarters as a conspiracy of silence.

Politics and the National Nanotechnology Initiative

The view that the nanobusiness and nanoscience establishment has subverted the originally intended purpose of the USA’s National Nanotechnology Initiative has become received wisdom amongst supporters of the Drexlerian vision of MNT. According to this reading of nanotechnology politics, any element of support for Drexler’s vision for radical nanotechnology has been stripped out of the NNI to make it safe for mundane near-term applications of incremental nanotechnology like stain-resistant fabric. This position is succinctly expressed in this editorial in the New Atlantis, which makes the claim that the legislators who supported the NNI did so in the belief that it was the Drexlerian vision they were endorsing.

A couple of points about this position worry me. Firstly, we should be clear that there is an important dividing line in the relationship between science and politics that any country should be very wary of crossing. In a democratic country, it’s absolutely right that the people’s elected representatives should have the final say about what areas of science and technology are prioritised for public spending, and indeed what areas of science are left unpursued. But we need to be very careful to make sure that this political oversight of science doesn’t spill over into ideological statements about the validity of particular scientific positions. If supporters of MNT were to argue that the government should overrule, on what are essentially ideological grounds, the judgement of the scientific community about which approach to radical nanotechnology is most likely to work, then I’d suggest they recall the tragic and unedifying history of similar interventions in the past. Biology in the Soviet Union was set back for a generation by Lysenko, who, unable to persuade his colleagues of the validity of his theory of genetics, appealed directly to Stalin. Such perversions aren’t restricted to totalitarian states; Edward Teller used his high-level political connections to impose his vision of the x-ray laser on the USA’s defense research establishment, in the face of almost universal scepticism from other physicists. The physicists were right, and the program was abandoned, but not before the waste of many billions of dollars.

But there’s a more immediate criticism of the theory that the NNI has been hijacked by nanopants: it isn’t right, even from the point of view of Drexler’s supporters. The muddle and inconsistency come across most clearly on the Center for Responsible Nanotechnology’s blog. While this entry strongly endorses the New Atlantis line, this entry, only a few weeks earlier, expresses the opinion that the most likely route to radical nanotechnology will come through wet, soft and biomimetic approaches. Of course, I agree with this (though my vision of what radical nanotechnology will look like is very different from that of supporters of MNT); it is the position I take in my book Soft Machines, and it is also, of course, an approach recommended by Drexler himself. Looking across at the USA, I see some great and innovative science being done along these lines. Just look at the work of Ned Seeman, Chad Mirkin, Angela Belcher or Carlo Montemagno, to take four examples that come immediately to mind. Who is funding this kind of work? It certainly isn’t the Foresight Institute – no, it’s all those government agencies that make up the much-castigated National Nanotechnology Initiative.

Of course, supporters of MNT will say that, although this work may be moving in the direction that they think will lead to MNT, it isn’t being done with that goal explicitly stated. To this, I would simply ask whether it isn’t a tiny bit arrogant of the MNT visionaries to think that they are in a better position to predict the outcome of these lines of inquiry than the people who are actually doing the research.

Whenever science funding is allocated, there is a real tension between the short term and the long term, and this is a legitimate bone of contention between politicians and legislators, who want to see immediate results in terms of money and jobs for the people they represent, and scientists and technologists with longer-term goals. If MNT supporters were simply to argue that the emphasis of the NNI should be moved away from incremental applications towards longer-term, more speculative research, then they’d find a lot of common cause with many nanoscientists. But it doesn’t do anyone any good to confuse these truly difficult issues with elaborate conspiracy theories.

Politics in the UK

Some readers may have noticed that we are in the middle of an election campaign here in the UK. Unsurprisingly, science and technology have barely been mentioned at all by any of the parties, and I don’t suppose many people will be basing their voting decisions on science policy. It’s nonetheless worth commenting on the parties’ plans for science and technology.

I discussed the Labour Party’s plans for science for the next three years here – these foresee significant real-terms increases in science funding. The Conservative Party has promised to “at least match the current administration’s spending on science, innovation and R&D”. However, the Conservatives’ spending plans are predicated on finding £35 billion in “efficiency savings”, of which £500 million is to come from reforming the Department of Trade and Industry’s business support programmes. I believe the £200 million support for nanotechnology discussed here falls under this heading, so I think the status of these programmes in a Conservative administration would be far from assured. The Liberal Democrats take a simpler view of the DTI – they just plan to abolish it, and move science to the Department for Education.

So, on fundamental science support, there seems to be a remarkable degree of consensus, with no-one seeking to roll back the substantial increases in science spending that the Labour Party has delivered. The arguments really are on the margins, about the role of government in promoting applied and near-market research in collaboration with industry. I have very serious misgivings about the way in which the DTI has handled its support for micro- and nanotechnology. In principle, though, I do think it is essential that the UK government provides such support to businesses, if only because all other governments around the world (including, indeed perhaps especially, the USA) practise exactly this sort of interventionist policy.

Paint-on lasers and land-mine detection

One of the many interesting features of semiconducting polymers is that they can be made to lase. If a large enough population of excited electronic states is created, light is amplified by the process of stimulated emission, giving rise to an intense beam of coherent light. Because semiconducting polymers can be laid down in a thin film from a simple solution, it’s tempting to dream of lasers that are fabricated by simple and cheap processes, like printing, or are simply painted onto a surface. The problem is that, so far (and as far as I know), the necessary population of excited states has only been achieved by illuminating the material with another laser. This optical pumping, as it is called, is obviously less useful than a laser that can be pumped electrically, as is the case for the inorganic semiconductor lasers that are now everyday items in CD and DVD players. But a paper in this week’s Nature (abstract free, subscription required for full article) demonstrates another neat use for lasing action in semiconducting polymers – as an ultrasensitive detector for explosives. See also this press release.

The device relies on the fact that lasing is a highly non-linear effect: if an optically pumped polymer laser is exposed to a material that influences only a few molecules at its surface, this can still kill the lasing action entirely. The molecule used in this work, done at MIT by Timothy Swager’s group, is particularly sensitive to the explosive TNT. The device can thus work as a sensor sensitive enough – and it needs to be sensitive in the parts-per-billion range – to detect the tiny traces of TNT vapour that a buried land-mine emits.
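The leverage that the non-linearity gives you can be seen in a toy threshold model. The sketch below is purely illustrative – it is not the analysis used in the Nature paper – but it shows how quenching just 1% of the gain, standing in for a few TNT molecules binding at the film surface, produces a far bigger fractional drop in output when the laser is pumped just above threshold:

```python
# Toy threshold model: why lasing makes a hypersensitive sensor.
# Illustrative only -- not the model used in the paper.

def output(pump, gain_factor=1.0, threshold=1.0, spont=1e-3):
    """Light out versus pump: weak spontaneous emission below threshold,
    steep linear growth above it. Quenching scales the gain down."""
    effective_pump = pump * gain_factor
    if effective_pump < threshold:
        return spont * effective_pump
    return spont * threshold + (effective_pump - threshold)

pump = 1.05                                 # pumped just above threshold
clean = output(pump)
quenched = output(pump, gain_factor=0.99)   # TNT quenches 1% of the gain

print(f"output, clean film:  {clean:.4f}")
print(f"output, 1% quenched: {quenched:.4f}")
print(f"signal drops by {(1 - quenched / clean):.0%}")   # ~20% drop from a 1% perturbation
```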

This work, rather unsurprisingly, is supported by MIT’s Institute for Soldier Nanotechnologies. The development of ultrasensitive sensors for the detection of chemicals in the environment forms a big part of the research effort in evolutionary nanotechnology. On the science side, this is driven by the fact that detecting the effects of molecules interacting with surfaces is intrinsically much easier in systems with nanoscale components, simply because the surface of a nanostructured device has a great deal more influence on its properties than it would in a bulk material. On the demand side, the needs of defense and homeland security are, now more than ever, setting the research agenda in the USA.
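The surface-to-volume point is easy to make quantitative. Here is a minimal sketch, counting the fraction of a spherical particle’s atoms that sit within one atomic layer of its surface (the 0.25 nm atomic size is an assumed typical value):

```python
# Fraction of atoms at the surface of a sphere, versus its radius.
atom = 0.25e-9   # atomic diameter, metres (assumed typical value)

for radius_nm in [1000, 100, 10, 1]:
    r = radius_nm * 1e-9
    # atoms within one atomic layer of the surface, as a fraction of the total
    shell_fraction = 1 - ((r - atom) / r) ** 3
    print(f"r = {radius_nm:>5} nm: {shell_fraction:.1%} of atoms at the surface")
```

For a micron-sized particle a tenth of a percent of the atoms see the surface; for a nanometre-sized one, more than half do, which is why surface interactions so thoroughly dominate the behaviour of nanostructured devices.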