When buckyballs go quantum

It’s widely believed that, whereas the macroscopic world is governed by the intuitive and predictable rules of classical mechanics, the nanoscale world operates in an anarchy of quantum weirdness. I explained here why this view isn’t right; many changes in material behaviour at small scales have their origin in completely classical physics. But there’s another way of approaching this question, which is to ask what you would have to do to see a nanoscale particle behaving in a quantum mechanical way. In fact, this needn’t be a thought experiment; Anton Zeilinger at the University of Vienna specialises in experiments on the foundations of quantum mechanics, and one of the themes of his research is finding out how large an object he can persuade to behave quantum mechanically. In this context, the products of nanotechnology are large, not small, and among the biggest things he’s looked at are fullerene molecules – buckyballs. The results are described in this paper on the interference of C70 molecules.

What Zeilinger is looking for, as the signature of quantum mechanical behaviour, is interference. Quantum interference is the phenomenon that arises when the final position of a particle depends, not on the path it’s taken, but on all the paths it could have taken. Before the position of the particle is measured, the particle doesn’t exist at a single place and time; instead it exists in a quantum state which expresses all the places at which it could potentially be. But it isn’t just measurement which forces the particle (to anthropomorphise) to make up its mind where it is; if it collides with another particle, or interacts in some other way with its environment, then this leads to the phenomenon known as decoherence, by which the quantum weirdness is lost and the particle behaves like a classical object. To avoid decoherence, and see quantum behaviour, Zeilinger’s group had to use diffuse beams of particles in a high vacuum environment. How good a vacuum do they need? By adding gas back into the vacuum chamber, they can systematically observe the quantum interference effect being washed out by collisions. The pressures at which the quantum effects vanish are around one billionth of atmospheric pressure. Now we can see why nanoscale objects like buckyballs normally behave like classical objects, not quantum mechanical ones: the constant collisions with surrounding molecules completely wash out the quantum effects.
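
To get a feel for why the fringes vanish at around that pressure, here’s a rough order-of-magnitude sketch; the molecular sizes, gas temperature and transit time are my own assumed round numbers, not figures from the Vienna experiments:

```python
# Order-of-magnitude estimate: how often does a C70 molecule in the beam
# collide with residual gas at the pressure where the interference vanishes?
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed gas temperature, K
p = 1e-9 * 101325.0         # one billionth of atmospheric pressure, in Pa

n = p / (k_B * T)           # number density of residual gas molecules, m^-3
v_gas = 500.0               # typical thermal speed of a light gas molecule, m/s
sigma = math.pi * (0.5e-9 + 0.2e-9) ** 2   # collision cross-section: C70 radius
                                           # ~0.5 nm plus a gas molecule ~0.2 nm

rate = n * sigma * v_gas    # collisions per second suffered by one C70
transit = 10e-3             # assumed flight time through the apparatus, s

print(f"gas density: {n:.2e} molecules per m^3")
print(f"collision rate: ~{rate:.0f} per second")
print(f"collisions per transit: ~{rate * transit:.1f}")
# At ~20 collisions per second, a fair fraction of the molecules are hit
# during a ~10 ms transit; since a single collision can localise a molecule,
# this is roughly the regime where the fringes start to wash out.
```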

What, then, of nanoscale objects like quantum dots, whose special properties do result from quantum size effects? What’s quantum mechanical about a quantum dot isn’t the dot itself, it’s the electrons inside it. Actually, electrons always behave in a quantum mechanical way (explaining why this is so is a major part of solid state physics), but the size of the quantum dot affects the quantum mechanical states that the electrons can take up. The nanoscale particle that is the quantum dot itself, in spite of its name, remains resolutely classical in its behaviour.
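
To see how the size of the dot sets the states available to the electrons inside it, here’s a minimal particle-in-a-box estimate – an idealised textbook model with assumed dimensions, not a calculation for any real quantum dot:

```python
# Particle-in-a-box estimate of how confinement energy depends on dot size.
# Real quantum dots need effective masses and band structure; this idealised
# model just shows the scaling that makes the dot's size matter.
import math

h = 6.62607015e-34    # Planck constant, J s
m_e = 9.1093837e-31   # free electron mass, kg
eV = 1.602176634e-19  # joules per electron-volt

def ground_state_energy_eV(L):
    """Ground-state energy of an electron in a cubic box of side L (metres)."""
    return 3 * h**2 / (8 * m_e * L**2) / eV   # three degrees of freedom

for L_nm in (2, 5, 10, 50):
    E = ground_state_energy_eV(L_nm * 1e-9)
    print(f"box of {L_nm:>2} nm: confinement energy ~ {E:.3f} eV")
# The 1/L^2 scaling means a few-nanometre box shifts electron energies by a
# good fraction of an eV -- enough to tune optical properties -- while the
# motion of the dot as a whole remains, for all practical purposes, classical.
```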

Nanojury UK – the first week

A citizens jury on nanotechnology, sponsored by the IRC in Nanotechnology at the University of Cambridge, Greenpeace, and The Guardian newspaper, has got under way in earnest this week. I wrote here about its launch.

The jury is taking place in Halifax, a large industrial town in West Yorkshire. Names chosen at random from the electoral rolls were invited to apply to take part, and about 20 of the applicants were selected so as to give a group broadly representative of the diversity of their community. The jurors sign up for twenty two-and-a-half-hour evening sessions – two a week for ten weeks – so it’s a big commitment. The first 10 sessions are on a topic that the jurors themselves choose, and the remaining 10 sessions are about nanotechnology. Having spent five weeks talking about youth crime, they are working well together as a group and they understand the process pretty well.

Wednesday evening was spent in a general discussion about technologies and their impacts, both positive and negative, together with a very brief, scene-setting introduction to nanotechnology. The first proper witness session was held last night, on the theme of nanotechnology in medicine. The witness was Beatrice Leigh. Bea was formerly Head of New Technology for the drug company GlaxoSmithKline; she now runs her own (somewhat smaller) drug discovery company. I thought Bea did a great job, giving a very clear picture of why nano will be important in the pharmaceutical and biomedical industries (and, on the way, not being shy about the current shortcomings and difficulties of big pharma). After her half-hour-long statement, the jurors spent some time by themselves formulating what they felt were the key questions, and then Bea and I did our best to answer them. This part of the evening provided clear proof that you don’t need expert knowledge to be able to ask penetrating questions.

Next week the jurors will get to see a rather different take on nanotech – the next witness is Jim Thomas of the ETC group.

Intelligent yoghurt by 2025

Yesterday’s edition of the Observer contained the bizarre claim that we’ll soon be able to enhance the intelligence of bacteria by using molecular electronics. This came in an interview with Ian Pearson, who is always described as the resident futurologist of the British telecoms company BT. The claim is so odd that I wondered whether it was a misunderstanding on the part of the journalist, but it seems clear enough in this direct quote from Pearson:

“Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

“We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yogurt before you eat it.”

This is the kind of thing that puts satirists out of business.

Re-reading Feynman – Part 3

As I discussed in part 1 of this series, Richard Feynman’s lecture “There’s plenty of room at the bottom” is universally regarded as a foundational document for nanotechnology. As people argue about what nanotechnology is and might become, and different groups claim Feynman’s posthumous support for their particular vision, it’s worth looking closely at what the lecture actually said. In part 2 of this series, I looked at the first half of Feynman’s lecture, dealing with writing information on a very small scale, microscopy with better than atomic resolution, and the miniaturisation of computers. In the second part of the lecture, Feynman moved on to discuss the possibilities, first, of making ultra-small machines and ultimately of arranging matter on an atomic level.

  • Small machines
    Feynman enters this subject by speculating about how one might make miniaturised computers. Why, he asks, can’t we simply make them in the same way as we make big ones? (Recall that at the time he was writing, computers filled rooms.) Why can’t we just shrink a machine shop: “Why can’t we drill holes, cut things, solder things, stamp things out, mold different shapes all at an infinitesimal level?”

    The first problem Feynman identifies is the issue of tolerance – a piece of mechanical engineering, like a car, only works because its parts can be machined to a certain tolerance, which he guesses to be around 0.4 thousandths of an inch (this seems plausible for a ’50s American gas guzzler, but I suspect that crucial components in modern cars do better than this). He argues that the ultimate limit on tolerance must derive from the inevitable graininess of atoms, and from this deduces that one can shrink mechanical engineering by a factor of about 4000. This implies that a one-centimetre component can be shrunk to about 2.5 microns (I check this arithmetic in a short sketch at the end of this list). Other problems that come with scale include the fact that Van der Waals forces become important, so everything sticks to everything else, and that we can’t use heat engines, because heat diffuses away too quickly. On the other hand, lubrication might get easier for the same reason. So we’ll need to do some things differently on small scales: “There will be several problems of this nature that we will have to be ready to design for”.

    How are we going to make these devices? Feynman leaves the question open, but he makes one suggestion, recalling the remote handling devices people build to handle radioactive materials, levers that remotely operate mechanical hands: “Now, I want to build much the same device—a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the “hands” that you ordinarily maneuver. So you have a scheme by which you can do things at one-quarter scale anyway—the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller.” And then you use the littler hands to make hands that are even smaller, and so on, until you have a set of machine tools at 1/4000th scale. The need to refine the accuracy of your machines at each stage of miniaturisation makes this, as Feynman concedes, “a very long and very difficult program. Perhaps you can figure a better way than that to get down to small scale more rapidly.”

    Reading this with the unfair benefit of hindsight, two things strike me. We do now have mechanical devices that operate on the length scales Feynman is envisioning here, upwards of a few microns. These micro-electromechanical systems (MEMS) are commercialised, for example, in the accelerometers that activate car airbags. For an example of a company active in this field, take a look at Crossbow Technology. But the methods by which these MEMS devices are made are very different from the scheme Feynman had in mind; just as in the case of computer miniaturisation, it’s the planar processes of photolithography and etching that allow one to get down to this level of miniaturisation in a single step.

    Returning to Feynman’s idea of the master-slave system in which you input a large motion, and output a much smaller one, we do now have available such a device which can effectively get us not just to the microscale, but to the nanoscale, in a single step. The principle this depends on – the use of piezoelectricity to convert a voltage into a tiny change in the dimensions of a particular type of crystal – was well known in 1960, and the material that proves to do the job best – the ceramic lead zirconate titanate (PZT) – had been on the market since 1952 (the second sketch at the end of this list puts some representative numbers on this). I don’t know when or where the idea of using this material to make controlled, nanoscale motions was first developed, but between 1969 and 1972 David Tabor, at the Cavendish Laboratory in Cambridge, was using PZT for sub-nanometre positional control in the surface forces apparatus which he developed with his students Winterton and Israelachvili. Most famously, PZT nano-actuators were the basis for the scanning tunneling microscope, invented in 1981 by the Nobel laureates Binnig and Rohrer, and the atomic force microscope invented a few years later. As we’ll see, it’s this technology that has allowed the realisation of Feynman’s vision of atom-by-atom control.

    Why would you want to make all these tiny machines? Characteristically, the dominant motive for Feynman seems to be fun, but he throws out one momentous suggestion, attributed to a friend: “it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and ‘looks’ around.” Thus the idea of the medical nanobot is launched, only a few years before achieving wide-screen fame in Fantastic Voyage.

  • Rearranging matter atom by atom
    Here Feynman asks the ultimate question “What would happen if we could arrange the atoms one by one the way we want them?” The motivation for this is that we would be able to get materials with entirely new properties: “What would the properties of materials be if we could really arrange the atoms the way we want them? They would be very interesting to investigate theoretically. I can’t see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a small scale we will get an enormously greater range of possible properties that substances can have, and of different things that we can do.”

    We do now have some idea of the possibilities that such control would offer. The first, easiest problem that Feynman poses is: “What could we do with layered structures with just the right layers?” The development of molecular beam epitaxy and chemical vapour deposition has made this possible, and just as Feynman anticipated, the results have been spectacular. In effect, controlling the structure of compound semiconductors on the nanoscale – making semiconductor heterostructures – allows one to create new materials with exactly the electronic properties you want, to make, for example, light emitting diodes and lasers with characteristics that would be unavailable from simple materials. Alferov and Kroemer won the Nobel Prize in Physics in 2000 (with Jack Kilby) for their work on heterostructure lasers. This work is gaining even more commercial importance with Nakamura’s discovery of a way of making blue heterostructure LEDs and lasers, opening the way for using light emitting diodes as a highly energy efficient light source. Meanwhile new generations of quantum dot and quantum well lasers find uses in the optical communication systems that underlie the workings of the internet. You can see an example of the kind of thing that’s been done in a number of labs around the world in this post about work done at Sheffield by my colleague Maurice Skolnick.

    This kind of semiconductor nanotechnology still doesn’t quite achieve atomic precision, though. This is Feynman’s ultimate goal: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.” On this scale, Feynman foresees entirely new possibilities: “We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.” Some of these ideas are already being realised; quantum dots (even though they are made with slightly less than atomic precision) display quantised energy levels deriving from their size, and the manipulation of spins in such quantised systems is at the heart of the ideas of spintronics and may provide a way of realising quantum computing (another field which Feynman was the first to anticipate). Feynman points out another advantage of building things with atomic precision: the ability to make exact reproductions of the things we make: “But if your machine is only 100 atoms high, you only have to get it correct to one-half of one percent to make sure the other machine is exactly the same size—namely, 100 atoms high!”

    Don Eigler, of IBM, demonstrated the possibility of single atom manipulation in 1990 with this famous image of the letters IBM picked out in xenon atoms. Given this capability, what can one usefully do with it? Feynman suggests that it might prove a different route to doing chemistry: “But it is interesting that it would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance.” Progress towards this goal has been very slow, emphasising just how hard the Eigler experiments were. Philip Moriarty provided an excellent summary of what has been achieved in his correspondence with Chris Phoenix, available as a PDF here. Feynman himself anticipated that this wouldn’t be easy: “By the time I get my devices working, so that we can do it by physics, he will have figured out how to synthesize absolutely anything, so that this will really be useless.” Nonetheless, Feynman stresses the value of these developments for science: “The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed—a development which I think cannot be avoided.”
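
As a footnote to the “Small machines” section, here’s Feynman’s scaling arithmetic re-run in a few lines of Python; the 0.25 nm atomic spacing is my own rough assumption, while the other numbers are from the lecture:

```python
# Feynman's "Small machines" arithmetic, re-run.
import math

inch = 25.4e-3                     # metres
tolerance = 0.4e-3 * inch          # ~0.4 thousandths of an inch => ~10 microns
atom = 0.25e-9                     # rough atomic spacing, m (my assumption)
factor = 4000                      # Feynman's proposed overall shrinkage

print(f"machining tolerance: {tolerance * 1e6:.1f} microns")
print(f"tolerance after shrinking: {tolerance / factor * 1e9:.1f} nm "
      f"(~{tolerance / factor / atom:.0f} atoms)")  # the graininess limit
print(f"a 1 cm part becomes {1e-2 / factor * 1e6:.1f} microns")

# How many one-quarter-scale pantograph stages reach 1/4000?
stages = math.ceil(math.log(factor) / math.log(4))
print(f"{stages} stages of 1/4 scaling give 1/{4**stages}")
```

And to put some representative numbers on the piezoelectric trick that eventually delivered this kind of fine motion control: the strain coefficient below is an assumed, typical figure for PZT ceramics, not a value from any of the instruments mentioned above.

```python
# Why PZT makes a good nano-actuator: displacement per volt.
# d33 for PZT ceramics is typically a few hundred picometres per volt;
# the value below is an assumed, representative figure.
d33 = 400e-12   # m/V

for V in (0.1, 1.0, 10.0):
    print(f"{V:5.1f} V across one element -> ~{d33 * V * 1e9:.2f} nm of motion")
# Sub-nanometre displacements from easily controlled sub-volt signals --
# exactly the fine positional control that scanning probe microscopes exploit.
```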
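One caveat on both sketches: they are order-of-magnitude illustrations under the stated assumptions, not engineering calculations; real actuators and real machining tolerances depend on geometry, materials and temperature.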

Now that we’ve gone back to the original source to see what Feynman actually said, in my final instalment I’ll assess what validity there is in the various competing claims to Feynman’s endorsement of particular visions of nanotechnology.

A nanotechnology citizens jury in the UK

Nanojury UK, a new experiment in public engagement in nanotechnology, got its public launch today with an article in the Guardian newspaper (see also the opinion piece in today’s Guardian by Mark Welland, the Director of the Cambridge Nanoscience Centre). The idea of a citizens’ jury is that a group of more or less randomly chosen people is presented with expert evidence on some controversial issue and, having weighed up the evidence, presents its conclusions. What’s interesting about this jury is the diversity of the bodies that have come together to make it happen; it’s sponsored jointly by the IRC in Nanotechnology at the University of Cambridge, Greenpeace, and The Guardian newspaper. The steering committee includes representatives from the NGOs ETC and Green Alliance, UK Government and Research Councils, the nanobusiness world and academia, in addition to the main sponsors.

Readers of Soft Machines got an early tip-off about this project. I’m chair of the Science Advisory Panel, and my role is to make sure that we find a wide and balanced range of witnesses, with different points of view, so that the views the jury forms are informed by authoritative and credible sources of information. There’s been a commitment from the government representative who sits on the steering group, Adrian Butt, that the output from the jury will be considered by the Nanotechnology Issues Dialogue Group, which is the body the UK government established to coordinate its response to the Royal Society report on nanotechnology. Naturally, how seriously they take the output will depend on how robust they judge the process to have been.

I’ve already found the business of getting the thing off the ground fascinating, not least in the way in which people with very different views about nanotechnology have been able to work constructively together. The process itself begins next week, and will involve 10 evenings over the summer, with the findings being released in September. I’ll be reporting here on my experience of the process as it unfolds; the Guardian has a Nanojury website here, which includes background material and discussion boards.

Here’s the press release.

The quantum bridge of asses

A good way of assessing whether a writer knows what they are talking about when it comes to nanotechnology is to look at what they say about quantum mechanics. There’s a very widespread view that what makes the nanoscale different to the macroscale is that, whereas the macroscale is ruled by classical mechanics, the nanoscale is ruled by quantum mechanics. The reality, as usual, is more complicated than this. It’s true that there are some very interesting quantum size effects that can be exploited in things like quantum dots and semiconductor heterostructures. But then lots of interesting materials and devices on the nanoscale aren’t ruled by quantum mechanics at all; for anything to do with mechanical properties, for example, nanoscale size effects have quite classical origins, and with the exception of photosynthesis almost nothing in bionanotechnology has anything to do with quantum mechanics. Conversely, there are some very common macroscopic phenomena that simply can’t be explained except in terms of quantum mechanics – the behaviour of electrical conductors and semiconductors, and the origins of magnetic materials, come immediately to mind.

Here’s a fairly typical example of misleading writing about quantum effects: “The ‘novel properties and functions’ are derived from ‘quantum physics’ effects that sometimes occur at the nanoscale, that are very different from the physical forces and properties we experience in our daily lives, and they are what make nanotechnology different from other really small stuff like proteins and other molecules.” This is from NanoSavvy Journalism, an article by Nathan Tinker. Despite its seal of approval from David Berube, this is very misleading, as we can see if we look at Tinker’s list of applications of nanotechnology and ask which depend on size-dependent quantum effects.

  • Nanotechnology is used in a wide array of electronics, magnetics and optoelectronics…
    …right so far; the use of things like semiconductor heterostructures to make quantum wells certainly does depend on exploiting qm…

  • biomedical devices and pharmaceuticals…
    …here we are talking about systems with high surface-to-volume ratios, self-assembled structures and tailored interactions with biological macromolecules like proteins, all of which has nothing at all to do with qm…

  • cosmetics…
    …if we are talking liposomes, again we’re looking at self-assembly. To explain the transparency of nanoscale titania for sunscreen, we need the rather difficult, but entirely classical, theory of Mie scattering (see the short scattering sketch after this list).

The list goes on, but I think the point is made. All sorts of interesting and potentially useful things happen at the nanoscale, only a fraction of which depend on quantum mechanics.
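
To put a number on that last, entirely classical point, here’s a minimal sketch using the Rayleigh small-particle limit of Mie theory; the refractive index and the particle sizes are assumed, illustrative values:

```python
# Why nanoscale titania is transparent while pigment-grade titania is white:
# in the Rayleigh (small-particle) limit of Mie theory, the scattering
# cross-section of a particle grows as the sixth power of its diameter.
import math

def rayleigh_cross_section(d, wavelength, m):
    """Scattering cross-section (m^2) of a sphere of diameter d at the given
    vacuum wavelength; m is the refractive index relative to the medium.
    Valid only for particles much smaller than the wavelength."""
    a = d / 2.0                           # radius
    x = 2.0 * math.pi * a / wavelength    # size parameter
    polarisability = ((m**2 - 1.0) / (m**2 + 2.0)) ** 2
    return (8.0 / 3.0) * math.pi * x**4 * a**2 * polarisability

wavelength = 550e-9   # green light
m = 1.9               # assumed index of TiO2 relative to its surroundings

small = rayleigh_cross_section(50e-9, wavelength, m)    # sunscreen grade
large = rayleigh_cross_section(250e-9, wavelength, m)   # pigment grade -- the
# Rayleigh formula is stretched at this size, but the d^6 trend is the point
print(f"50 nm particle:  sigma ~ {small:.2e} m^2")
print(f"250 nm particle: sigma ~ {large:.2e} m^2")
print(f"ratio: ~{large / small:.0f}")   # (250/50)^6 = 15625
```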

On the opposition side, the argument about the importance of quantum mechanical effects is pressed into service as a reason for anxiety; since everyone knows that quantum mechanics is mysterious and unpredictable, it must also be dangerous. I’ve commented before on the misguided use of this argument by ETC; here’s the Green Party member of the European Parliament, Caroline Lucas, writing in the Guardian: “The commercial value of nanotech stems from the simple fact that the laws of physics don’t apply at the molecular level. Quantum physics kicks in, meaning the properties of materials change.” This idea of the nanoscale as a lawless frontier in which anything can happen is rather attractive, but unfortunately quite untrue.

Of course, the great attraction of quantum mechanics is all the fascinating, and usually entirely irrelevant, metaphysics that surrounds it. This provides a trap for otherwise well-informed business people to fall into, exposing themselves to the serious danger of ridicule from TNTlog (whose author, besides being a businessman, has had the unfair advantage of a good physics education).

I know that it’s scientists who are to blame for this mess. Macroscopic=classical, nanoscale=quantum is such a simple and clear formula that it’s tempting for scientists communicating with the media and the public to use it even when they know it is not strictly true. But I think it’s now time to be a bit more accurate about the realities of nanoscale physics, even if this brings in a bit more complexity.

Re-reading Feynman – Part 2

In part 1 of this series I talked about the growing importance of Richard Feynman’s famous lecture There’s plenty of room at the bottom as a foundational document for nanotechnology of all flavours, and hinted at the tensions that arise as different groups claim Feynman’s vision as an endorsement for their own particular views. Here I want to go back to Feynman’s own words to try and unpick exactly what Feynman’s vision was, and how it looks more than forty years on.

Feynman’s lecture actually covers several different topics related to miniaturisation. We can break it up into a number of themes:

  • Writing small
    Feynman starts with the typically direct and compelling question “Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?” Simple arithmetic convinces us that this is possible in principle; using a pixel size of 8 nm gives us enough resolution (I run through this arithmetic in a short sketch at the end of this list). So how in practice can it be done? Reading such small writing is no problem, and would have been possible even with the electron microscopy techniques available in 1959. Writing on this scale is more challenging, and Feynman threw out some ideas about using focused electron and ion beams. Although Feynman didn’t mention it, the basic work to enable this was already in progress at the time he was speaking. Cambridge was one of the places at which the scanning electron microscope was being developed (history here), and only a year or two later the first steps were being made in using focused beams to make tiny structures. The young graduate student who worked on this was the same Alec Broers who (now ennobled) recently attracted the wrath of Drexler. This was the beginning of the technique of electron-beam lithography, now the most widely used method of making nanoscale structures in industry and academia.

  • Better microscopes
    Electron microscopes in 1959 couldn’t resolve features smaller than 1 nm. This is impressively small, but it was still not quite good enough to see individual atoms. Feynman knew that there were no fundamental reasons preventing the resolution of electron microscopes being improved by a factor of 100, and he identified the problem that needed to be overcome (the numerical aperture of the lenses). Feynman’s goal of obtaining sub-atomic resolution in electron microscopes has now been achieved, but for various rather interesting reasons this development has had less impact than he anticipated.

    Feynman, above all, saw microscopy with sub-atomic resolution as a direct way of solving the mysteries of biology. “It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the [DNA] chain; you will see the structure of the microsome”. But although microscopes are 100 times better, we still can’t directly sequence DNA microscopically. It turns out that the practical resolution isn’t limited by the instrument, but by the characteristics of biological molecules – particularly their tendency to get damaged by electron beams. This situation hasn’t been materially altered by the remarkable and exciting discovery of a whole new class of microscopy techniques with the potential to achieve atomic resolution – the scanning probe techniques like scanning tunneling microscopy and atomic force microscopy. Meanwhile many of the problems of structural biology have been solved, not by microscopy, but by x-ray diffraction.

  • Miniaturising the computer
    The natural reaction of anyone under forty reading this section is shock, and that’s a measure of how far we’ve come since 1959. Feynman writes “I do know that computing machines are very large; they fill rooms” … younger readers need to be reminded that the time when a computer wasn’t a box on a desktop or a slab on a laptop is within living memory. In discussing the problems of making a computer powerful enough to solve a difficult problem like recognizing a face, Feynman comments “there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing”. Now our transistors are made of silicon, but more importantly they aren’t discrete elements that need to be soldered together; they are patterned on a single piece of silicon as part of a planar integrated circuit. It’s this move to a new kind of manufacturing, based on a combination of lithographic patterning, etching and depositing very thin layers, that has permitted the extraordinary progress in the miniaturisation of computers.

    Feynman asks “Why can’t we manufacture these small computers somewhat like we manufacture the big ones?” The question has been superseded, to some extent, by the discovery of this better way of doing things. This discovery was already in sight at the time Feynman was writing; the two crucial patents for integrated circuits were filed by Jack Kilby and Robert Noyce early in 1959, but their significance didn’t become apparent for a few more years. This has been so effective that Feynman’s miniaturisation goal – “the circuits should be a few thousand angstroms across” – has already been met, not just in the laboratory, but in consumer goods costing a few hundred dollars apiece.
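
As promised above, here’s the pin-head arithmetic in a few lines of Python; the page counts, characters per page and bitmap size are my own rough assumptions:

```python
# Feynman's opening estimate, re-run: do the 24 volumes of the Britannica
# fit on the head of a pin at an 8 nm pixel size?
import math

pin_diameter = (1 / 16) * 25.4e-3        # a 1/16 inch pin head, in metres
pin_area = math.pi * (pin_diameter / 2) ** 2
pixel = 8e-9                             # the 8 nm pixel from the text
pixels_available = pin_area / pixel**2

volumes = 24
pages_per_volume = 1000                  # rough assumption
chars_per_page = 5000                    # rough assumption
pixels_per_char = 20 * 10                # a crude 20x10 bitmap per character

pixels_needed = volumes * pages_per_volume * chars_per_page * pixels_per_char
print(f"pixels available: {pixels_available:.2e}")   # ~3e10
print(f"pixels needed:    {pixels_needed:.2e}")      # ~2.4e10
print("fits!" if pixels_needed < pixels_available else "doesn't fit")
```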

So far, then, we can see that much of Feynman’s vision has actually been realised, though some things haven’t worked out the way he anticipated. In the next section of this series I’ll consider what he said about miniature machines and rearranging matter atom by atom. It’s here, of course, that the controversy over Feynman’s legacy becomes most pointed.

Which bits of nanotechnology does ETC now oppose?

Checking out the website of the anti-nanotechnology campaigning group ETC, I see that their position on nanotechnology seems to have subtly changed. A press release dated November 23 2004 says “In 2002, ETC called for a moratorium on the commercialisation of new nano-scale materials until laboratory protocols and regulatory regimes are in place that take into account the special characteristics of these materials, and until they are shown to be safe”. But currently their website calls for a rather different moratorium: “The ETC group believes that a moratorium should be placed on research involving molecular self-assembly and self-replication.”

I wonder what they mean by this? If it is Drexlerian self-replicating nanobots they are talking about, then the nanobusiness and nanoscience communities will no doubt cheerfully agree with them. But the usual understanding of the term molecular self-assembly is that it refers to the propensity of natural and synthetic molecules like soaps, proteins and block copolymers to arrange themselves, under the influence of Brownian motion and programmed patterns of molecular stickiness, into well defined nanostructures. This is certainly an important theme in nanoscience and technology today – there’s a chapter devoted to the subject in Soft Machines. As a principle that’s extensively exploited in biology, self-assembly exemplifies the powerful approach to nanotechnology that learns lessons from nature. But it’s difficult to see that it has any particularly sinister or dangerous overtones, and its use in technology isn’t at all novel. Every bar of soap or bottle of shampoo depends on self-assembly to give it its unique properties, and the thermoplastic elastomers and polyurethane foams that are used in the soles of many shoes and trainers actually have quite complex self-assembled nanostructures. So just what is ETC opposing here?

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary, subscription required for full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and a company founded by Shuler’s colleague Gregory Baxter, Hurel Corp.

Re-reading Feynman (part 1)

Every movement has its founding texts; for nanotechnology there’s general agreement that Richard Feynman’s lecture There’s plenty of room at the bottom is where the subject started, at least as a concept. The lecture is more than forty years old, but I sense that its perceived significance has been growing in recent years. Not least of the reasons for this is that, as the rift between the mainstream of academic and commercial nanoscience and technology and the supporters of Drexler has been growing, both sides, for different reasons, find it convenient to emphasise the foundational role of Richard Feynman. Drexler himself often refers to his vision of nanotechnology as the “Feynman vision”, thus explicitly claiming the endorsement of someone many regard as the greatest native-born American scientist of all time. For mainstream nanoscientists, on the other hand, increasing the prominence given to Feynman has the welcome side-effect of diminishing the influence of Drexler.

Many such founding documents easily slip into the category of papers that are “much-cited, but seldom read”, particularly when they were published in obscure publications that aren’t archived on the web. Feynman’s lecture is easily available, so there’s no excuse for this fate befalling it now. Nonetheless, one doesn’t often read very much about what Feynman actually said. This is a pity, not because his predictions of the future were flawless, nor because he presented a coherent plan that nanotechnologists today should be trying to follow. Feynman was a brilliant theoretical physicist observing science and technology as it was in 1959. It’s fascinating, as we try to grope towards an understanding of where technology might lead us in the next forty years, to look back at these predictions and suggestions. Some of what he predicted has already happened, to an extent that probably would have astonished him at the time. In other cases, things haven’t turned out the way he thought they would. We’ve seen some spectacular breakthroughs that were completely unanticipated. Finally, Feynman suggested some directions that as yet have not happened, and whose feasibility isn’t yet established. In my next post in this series, I’ll use the luxury of hindsight to look in detail at Plenty of Room at the Bottom, to ask just how well Feynman’s predictions and hunches have stood the test of time.