Feynman, Waldo and the Wickedest Man in the World

It’s been more than fifty years since Richard Feynman delivered his lecture “Plenty of Room at the Bottom”, regarded by many as the founding vision statement of nanotechnology. That foundational status has been questioned, most notably by Chris Toumey in his article Apostolic Succession (PDF). In another line of attack, Colin Milburn, in his book Nanovision, argues against the idea that nanotechnology emerged from Feynman’s lecture as the original product of his genius; instead, according to Milburn, Feynman articulated and developed a set of ideas that were already current in science fiction. And, as I briefly mentioned in my report from September’s SNET meeting, the intellectual milieu from which these ideas emerged had, in Milburn’s account, some very weird aspects.

Milburn describes some of the science fiction antecedents of the ideas in “Plenty of Room” in his book. Perhaps the most direct link is to Feynman’s notion of remote-controlled robot hands, which make smaller sets of hands, which can in turn be used to make yet smaller ones, and so on. The immediate source of this idea is Robert Heinlein’s 1942 novella “Waldo”, in which the eponymous hero devises just such an arrangement to carry out surgery at the sub-cellular level. There’s no evidence that Feynman had read “Waldo” himself, but Feynman’s friend Al Hibbs certainly had. Hibbs worked at Caltech’s Jet Propulsion Laboratory, and he had been so taken by Heinlein’s idea of robot hands as a tool for space exploration that he wrote up a patent application for it (dated 8 February 1958). Ed Regis, in his book “Nano”, tells the story and makes the connection to Feynman, quoting Hibbs as follows: “It was in this period, December 1958 to January 1959, that I talked it over with Feynman. Our conversations went beyond my ‘remote manipulator’ into the notion of making things smaller … I suggested a miniature surgeon robot…. He was delighted with the notion.”

“Waldo” is set in a near future where nuclear-derived energy is abundant, and people and goods fly around in vessels powered by energy beams. The protagonist, Waldo Jones, is a severely disabled mechanical genius (“Fat, ugly and hopelessly crippled”, as it says on the back of my 1970 paperback edition) who lives permanently in an orbiting satellite, sustained by the technologies he’s developed to overcome his bodily weaknesses. The most effective of these technologies are the remote-controlled robot arms, named “waldos” after their inventor. The plot revolves around a mysterious breakdown of the energy transmission system, which Waldo Jones solves, assisted by the sub-cellular surgery he carries out with his miniaturised waldos.

The novella is dressed up in the apparatus of hard science fiction – long didactic digressions, complete with plausible-sounding technical details and references to the most up-to-date science, creating the impression that its predictions of future technologies are based on science. But, to my surprise, the plot revolves not around science but around magic. The fault in the flying machines is diagnosed by a back-country witch-doctor, and involves a failure of will by the operators (itself a consequence of the amount of energy being beamed about the world). And the fault can itself be fixed by an act of will, by which energy from a parallel, shadow universe can be directed into our own world. Waldo Jones himself learns how to access the energy of this unseen world, and in this way overcomes his disabilities and fulfils his potential as a brain surgeon, dancer and all-round, truly human genius.

Heinlein’s background as a radio engineer explains where his science came from, but what was the source of this magical thinking? The answer seems to be the strange figure of Jack Parsons. Parsons was a self-taught rocket scientist, one of the founders of the Jet Propulsion Laboratory and a key figure in the early days of the USA’s rocket program (his story is told in George Pendle’s biography “Strange Angel”). But he was also deeply interested in magic, and was a devotee of the English occultist Aleister Crowley. Crowley, aka The Great Beast, was notorious for his transgressive interest in ritual magic – particularly sexual magic – and attracted the title “the wickedest man in the world” from the English newspapers between the wars. He had founded a religion of his own, whose organisation, the Ordo Templi Orientis, promulgated his creed, summarised as “Do what thou wilt shall be the whole of the Law”. Parsons was initiated into the Hollywood branch of the OTO in 1941; in 1942 Parsons, by now a leading figure in the OTO, moved the whole group into a large house in Pasadena, where they lived as a commune according to Crowley’s transgressive law. Also in 1942, Parsons met Robert Heinlein at the Los Angeles Science Fiction Society, and the two men became good friends. “Waldo” was published that year.

The subsequent history of Jack Parsons was colourful, but deeply unhappy. He became close to another member of the circle of LA science fiction writers, L. Ron Hubbard, who moved into the Pasadena house in 1945 with catastrophic effects for Parsons. In 1952, Parsons died in a mysterious explosives accident in his basement. Hubbard, of course, went on to found a religion of his own, Scientology.

This is a fascinating story, but I’m not sure what it signifies, if anything. Colin Milburn suggests that “it is tempting to see nanotech’s aura of the magical, the impossible made real, as carried through the Parsons-Heinlein-Hibbs-Feynman genealogy”. Sober scientists working in nanotechnology would argue that their work is as far away from magical thinking as one can get. But amongst those groups on the fringes of the science that cheer nanotechnology on – the singularitarians and transhumanists – I’m not sure that magic is so distant. Universal abundance through nanotechnology, universal wisdom through artificial intelligence, and immortal life through the defeat of ageing – these sound very much like the traditional aims of magic, a parallel that Dale Carrico has repeatedly drawn attention to. And in place of Crowley’s Ordo Templi Orientis (and no doubt without some of the OTO’s more colourful practices), transhumanists have their very own Order of Cosmic Engineers, which aims to “engineer ‘magic’ into a universe presently devoid of God(s).”

Computing with molecules

This is a pre-edited version of an essay that was first published in the April 2009 issue of Nature Nanotechnology – Nature Nanotechnology 4, 207 (2009) (subscription required for full online text).

The association of nanotechnology with electronics and computers is a long and deep one, so it’s not surprising that a central part of the vision of nanotechnology has been the idea of computers whose basic elements are individual molecules. The individual transistors of conventional integrated circuits are at the nanoscale already, of course, but they’re made top-down, by carving them out of layer-cakes of semiconductors, metals and insulators – what if one could instead make transistors by joining together individual molecules? This idea – molecular electronics – is an old one, which actually predates the widespread use of the term nanotechnology. As described in an excellent history of the field by Hyungsub Choi and Cyrus Mody (The Long History of Molecular Electronics, PDF), its origin can be securely dated back at least as far as 1973; since then it has had a colourful history of big promises, together with successive waves of enthusiasm and disillusionment.

Molecular electronics, though, is not the only way of using molecules to compute, as biology shows us. In an influential 1995 review, Protein molecules as computational elements in living cells (PDF), Dennis Bray pointed out that the fundamental purpose of many proteins in cells seems to be more to process information than to effect chemical transformations or make materials. Mechanisms such as allostery permit individual protein molecules to behave as logic gates: one or more regulatory molecules bind to the protein, and thereby turn on or off its ability to catalyse a reaction. If the product of that reaction itself regulates the activity of another protein, one can think of the result as an operation which converts an input signal conveyed by one molecule into an output conveyed by another, and by linking many such reactions together into a network one builds a chemical “circuit” which can, in effect, carry out computational tasks of greater or lesser complexity. The classic example of such a network is the one underlying the ability of bacteria to swim towards food or away from toxins. In bacterial chemotaxis, information from sensors for many different chemical species in the environment is integrated to produce the signals that control a bacterium’s motors, resulting in apparently purposeful behaviour.
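To make the circuit analogy a little more concrete, here is a toy numerical sketch (my own illustration, not taken from Bray’s review; the species, thresholds and Hill coefficients are invented): an allosterically regulated protein is represented by a Hill function, so its activity switches fairly sharply from off to on as its regulator’s concentration crosses a threshold, and chaining two such switches, with the product of the first acting as a regulator of the second, produces an AND-like response to the two input signals.

```python
# Toy model of two allosteric "logic gates" chained into a circuit.
# Illustrative only: species, thresholds and Hill coefficients are invented.

def hill(concentration, threshold, n=4):
    """Fractional activity of a protein switched on by a regulator."""
    return concentration**n / (threshold**n + concentration**n)

def two_gate_circuit(signal_1, signal_2, threshold=1.0):
    # Gate 1: its activity (and so its product's concentration) tracks signal 1.
    product_1 = hill(signal_1, threshold)
    # Gate 2: active only when both its own signal and gate 1's product are high,
    # so the overall output behaves like (signal 1 AND signal 2).
    return hill(signal_2, threshold) * hill(product_1, 0.5)

for s1, s2 in [(0.1, 0.1), (3.0, 0.1), (0.1, 3.0), (3.0, 3.0)]:
    print(f"signals ({s1}, {s2}) -> output {two_gate_circuit(s1, s2):.2f}")
```

Only the last pair of signals drives the output close to one. Real protein networks, of course, work with continuous concentrations, noise and reaction rates, which is why the circuit analogy only goes so far.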

The broader notion that much cellular activity can be thought of in terms of the processing of information by the complex networks involved in gene regulation and cell signalling has had a far-reaching impact in biology. The unravelling of these networks is the major concern of systems biology, while synthetic biology seeks to re-engineer them to make desired products. The analogies between electronics and systems thinking and biological systems are made very explicit in much writing about synthetic biology, with its discussion of molecular network diagrams, engineered gene circuits and interchangeable modules.

And yet, this alternative view of molecular computing has yet to make much impact in nanotechnology. Molecular logic gates have been demonstrated in a number of organic compounds, for example by the Belfast-based chemist Prasanna de Silva; here ingenious molecular design can allow several input signals, represented by the presence or absence of different ions or other species, to be logically combined to produce outputs represented by optical fluorescence signals at different wavelengths. In one approach, a molecule consists of a fluorescent group attached by a spacer unit to one or more receptor groups; in the absence of bound species at the receptors, electron transfer from the receptor group to the fluorophore suppresses its fluorescence. Other approaches employ molecular shuttles – rotaxanes – in which mechanically interlocked but mobile molecular components move to different positions in response to changes in their chemical environment. These molecular engineering approaches are leading to sensors of increasing sophistication. But because the output is in the form of fluorescence, rather than a molecule, it is not possible to link many such logic gates together into a network.

At the moment, it seems that the most likely avenue for developing complex information-processing networks based on synthetic components will use nucleic acids, particularly DNA. Like other branches of DNA nanotechnology, progress here is being driven by the growing ease and cheapness with which specified sequences of DNA can be synthesised, together with the relative tractability of designing and modelling molecular interactions based on base pairing. One demonstration from Erik Winfree’s group at Caltech uses base pairing to design logic gates made from DNA molecules. These accept inputs in the form of short RNA strands, and output DNA strands according to the logical operations OR, AND or NOT. The output strands can themselves be used as inputs for further logical operations, and it is this that would make it possible, in principle, to develop complex information-processing networks.
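To see why that composability matters, here is a purely schematic sketch (invented for illustration; it makes no attempt to represent the real strand-displacement chemistry, and the strand names are made up): each gate is simply a rule that releases an output “strand” when the right input strands are present in the pool, and because the output is itself a strand it can be fed straight into the next layer of gates.

```python
# Schematic composable logic with molecular "strands" as signals.
# Hypothetical strand names; not a model of actual DNA gate chemistry.

def and_gate(pool, a, b, out):
    """Release `out` only if both input strands are in the pool."""
    return {out} if a in pool and b in pool else set()

def or_gate(pool, a, b, out):
    return {out} if a in pool or b in pool else set()

def not_gate(pool, a, out):
    return {out} if a not in pool else set()

def run_network(inputs):
    """Two-layer network: (rna1 AND rna2) OR (NOT rna3) releases 'reporter'."""
    pool = set(inputs)
    pool |= and_gate(pool, "rna1", "rna2", "dna_x")
    pool |= not_gate(pool, "rna3", "dna_y")
    pool |= or_gate(pool, "dna_x", "dna_y", "reporter")
    return "reporter" in pool

print(run_network({"rna1", "rna2", "rna3"}))  # True: the AND branch fires
print(run_network({"rna3"}))                  # False: neither branch fires
```

The contrast with the fluorescence-output gates described above is the point: because the output here stays in the same molecular currency as the inputs, gates can be layered into networks rather than terminating at a read-out.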

What should we think about using molecular computing for? The molecular electronics approach has a very definite target: to complement or replace conventional CMOS-based electronics, to ensure the continuation of Moore’s law beyond the point when physical limitations prevent any further miniaturisation of silicon-based devices. The inclusion of molecular electronics in the latest International Technology Roadmap for Semiconductors indicates the seriousness of this challenge, and molecular electronics and other related approaches, such as graphene-based electronics, will undoubtedly continue to be enthusiastically pursued. But these are probably not appropriate goals for molecular computing with chemical inputs and outputs. Instead, the uses of these technologies are likely to be driven by their most compelling unique selling point – the ability to interface directly with the biochemical processes of the cell. It has been suggested that such molecular logic could be used to control the actions of a sophisticated drug delivery device, for example. An even more powerful possibility is suggested by another paper (abstract, subscription required for full paper) from Christina Smolke (now at Stanford), in which an RNA construct controls the in vivo expression of a particular gene in response to this kind of molecular logic. This suggests the creation of what could be called molecular cyborgs – the result of a direct merging of synthetic molecular logic with the cell’s own control systems.

Society for the study of nanoscience and emerging technologies

Last week I spent a couple of days in Darmstadt, at the second meeting of the Society for the Study of Nanoscience and Emerging Technologies (S.NET). This is a relatively informal group of scholars in the field of Science and Technology Studies from Europe, the USA and some other countries like Brazil and India, coming together from disciplines like philosophy, political science, law, innovation studies and sociology.

Arie Rip (president of the society, and to many the doyen of European science and technology studies) kicked things off with the assertion that nanotechnology is, above all, a socio-political project, and the warning that this object of study was in the process of disappearing (a theme that recurred throughout the conference). Undaunted by this prospect, Arie observed that the society could keep its acronym and rename itself the Society for the Study of Newly Emerging Technologies.

The first plenary lecture was from the French philosopher Bernard Stiegler, on Knowledge, Industry and Distrust at the Time of Hyperminiaturisation. I have to say I found this hard going; the presentation was dense with technical terms and delivered by reading a prepared text. But I’m wiser about it now than I was, thanks to a very clear and patient explanation over dinner that evening from Colin Milburn, who filled us in on the necessary background about Derrida’s interpretation of Plato’s pharmakon and Simondon’s notion of disindividuation.

One highlight for me was a talk by Michael Bennett about changes in the intellectual property regime in the USA during the 1980s and 1990s. He made a really convincing case that the growth of nanotechnology went in parallel with a series of legal and administrative changes that amounted to a substantial intensification of the intellectual property regime in the USA. While some people think that developments in law struggle to keep up with science and technology, he argued instead that law bookends the development of technoscience, both shaping the emergence of the science and dominating the way it is applied. This growing influence, though, doesn’t help innovation. Recent trends, such as the tendency of research universities to patent early with very wide claims, and to seek exclusive licenses, aren’t helpful; we’re seeing the creation of “patent thickets”, such as the one that surrounds carbon nanotubes, which substantially add to the cost and increase the uncertainty for those trying to commercialise technologies in this area. And there is evidence of an “anti-commons” effect, in which scientists are inhibited from working on systems once patents have been issued.

A round-table discussion on the influence of Feynman’s lecture “Plenty of Room at the Bottom” on the emergence of nanotechnology as a field produced some surprises too. I’m already familiar with Chris Toumey’s careful demonstration that Plenty of Room’s status as the foundation of nanotechnology was largely granted retrospectively (see, for example, his article Apostolic Succession, PDF); Cyrus Mody’s account of the influence it had on the then-emerging field of microelectronics adds some shading to this picture. Colin Milburn made some comments that put Feynman’s lecture into the cultural context of its time, particularly in the debt it owed to science fiction stories like Robert Heinlein’s “Waldo”. And, to my great surprise, he reminded us just how weird the milieu of post-war Pasadena was, with the very odd figure of Jack Parsons helping to create the Jet Propulsion Laboratory while at the same time conducting a programme of magic inspired by Aleister Crowley and involving a young L. Ron Hubbard. At this point I felt I’d stumbled out of an interesting discussion of a by-way of the history of science into the plot of an unfinished Thomas Pynchon novel.

The philosopher Andrew Light talked about how deep disagreements and culture wars arise, and about the distinction between intrinsic and extrinsic objections to new technologies. This was an interesting analysis, though I didn’t entirely agree with his prescriptions, and a number of other participants showed some unease at the idea that the role of philosophers is to create a positive environment for innovation. My own talk was a bit of a retrospective, with the title “What has nanotechnology taught us about contemporary technoscience?” The organisers will be trying to persuade me to write this up for the proceedings volume, so I’ll say no more about it for the moment.