Even more debate on transhumanism

Following on from my short e-book “Against Transhumanism: the delusion of technological transcendence” (available free for download: Against Transhumanism, v1.0, PDF 650 kB), I have a long interview on the Singularity Weblog available as a podcast or video – “Richard Jones on Against Transhumanism”.

To quote my interviewer, Nikola Danaylov, “During our 75 min discussion with Prof. Richard Jones we cover a variety of interesting topics such as: his general work in nanotechnology, his book and blog on the topic; whether technological progress is accelerating or not; transhumanism, Ray Kurzweil and technological determinism; physics, Platonism and Frank J. Tipler’s claim that ‘the singularity is inevitable’; the strange ideological roots of transhumanism; Eric Drexler’s vision of nanotechnology as reducing the material world to software; the over-representation of physicists on both sides of the transhumanism and AI debate; mind uploading and the importance of molecules as the most fundamental units of biological processing; Aubrey de Grey’s quest for indefinite life extension; the importance of ethics and politics…”

For an earlier round-up of other reactions to the e-book, see here.

Against Transhumanism – the e-book

Transhumanism: technically wrong, ideologically suspect, and damaging to the way we talk about technology…

As an experiment, I’ve brought together a number of the pieces I’ve written here and elsewhere about molecular nanotechnology, mind-uploading, and the origins and wider implications of transhumanism, to make, after some light editing, a 54-page e-book with the title “Against Transhumanism: the delusion of technological transcendence”.

It can be downloaded as a PDF here:
Against Transhumanism, v1.0 (PDF 7.1 MB).

On Singularities, mathematical and metaphorical

Transhumanists look forward to a technological singularity, which we should expect to take place on or around 2045, if Ray Kurzweil is to be relied on. The technological singularity is described as something akin to an event horizon, a date at which technological growth becomes so rapid that whatever lies beyond it is quite unknowable to us mere cis-humans. In some versions this is correlated with the time when, due to the inexorable advance of Moore’s Law, machine intelligence surpasses human intelligence and goes into a recursive cycle of self-improvement.

The original idea of the technological singularity is usually credited to the science fiction writer Vernor Vinge, though earlier antecedents can be found, for example in the writing of the British Marxist scientist J.D. Bernal. Even amongst transhumanists and singularitarians there are different views about what might be meant by the singularity, but I don’t want to explore those here. Instead, I note this – when we talk of the technological singularity we’re using a metaphor, a metaphor borrowed from mathematics and physics. It’s the Singularity as a metaphor that I want to probe in this post.

A real singularity happens in a mathematical function, where for some value of the argument the result of the function is undefined.
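To make the source of the metaphor concrete, here is the simplest kind of example – my own illustration, not taken from the post, with t_s standing for the finite time at which the singularity sits – written as a LaTeX equation:

$$ f(t) = \frac{1}{t_s - t} $$

For t < t_s the function is perfectly well behaved; as t approaches t_s it grows without limit, and at t = t_s itself it has no defined value – that is the singularity. It is presumably this image, of a quantity running away to infinity at a finite time, that the technological metaphor borrows. Continue reading “On Singularities, mathematical and metaphorical”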

Your mind will not be uploaded

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme which, in the future, could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh-and-blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the level of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular-level information processing evolved very early in the history of life. Living organisms sense their environment and react to what they sense by changing the way they behave and, if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally, I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have for the simulation of consciousness.
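To make “phenomenological equations” concrete, here is a minimal sketch – my own illustration, not something from the post – of the kind of model used in neuron-level simulations: a leaky integrate-and-fire neuron. The membrane time constant, resistance, resting potential and firing threshold below are invented for the example, but they are exactly the sort of parameters that differ from neuron to neuron, drift over time, and are very hard to measure in a living brain.

```python
# A minimal leaky integrate-and-fire neuron (illustrative only; parameter values are invented).
# The membrane potential obeys dV/dt = (-(V - V_rest) + R*I) / tau; a spike is recorded and
# the potential reset whenever V crosses the threshold.

def simulate_lif(current_nA, dt_ms=0.1, t_max_ms=100.0, tau_ms=10.0,
                 r_mohm=10.0, v_rest_mv=-70.0, v_thresh_mv=-54.0, v_reset_mv=-80.0):
    v = v_rest_mv
    spike_times_ms = []
    for step in range(int(t_max_ms / dt_ms)):
        v += (-(v - v_rest_mv) + r_mohm * current_nA) * dt_ms / tau_ms
        if v >= v_thresh_mv:              # threshold crossed: record a spike and reset
            spike_times_ms.append(step * dt_ms)
            v = v_reset_mv
    return spike_times_ms

print(simulate_lif(current_nA=2.0))       # spike times (ms) for a constant 2 nA input
```

Even this toy model needs half a dozen numbers per neuron before it predicts anything; the argument above is that, for the hundred billion neurons of a real brain, those numbers are not accessible to any measurement we could make on the living organ.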

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it. Continue reading “Your mind will not be uploaded”

Transhumanism has never been modern

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences of technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and in the writings of the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the Middle Ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature. Continue reading “Transhumanism has never been modern”

New Dawn Fades?

Before K. Eric Drexler devised and proselytised for his particular, visionary, version of nanotechnology, he was an enthusiast for space colonisation, closely associated with another, older, visionary of that hypothetical technology – the Princeton physicist Gerard O’Neill. A recent book by the historian Patrick McCray – The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future – follows this story, setting its origins in the context of its times, and argues that O’Neill and Drexler are archetypes of a distinctive type of actor at the interface between science and public policy – the “Visioneers” of the title. McCray’s visioneers are scientifically credentialed and frame their arguments in technical terms, but they stand at some distance from the science and engineering mainstream, and attract widespread, enthusiastic – and sometimes adulatory – support from broader mass movements, which sometimes take their ideas in directions that the visioneers themselves may not always endorse or welcome.

It’s an attractive and sympathetic book, with many insights about the driving forces which led people to construct these optimistic visions of the future. Continue reading “New Dawn Fades?”

Going soft on nano

An interview between me and the writer Eddie Germino has just been published on the transhumanist website/magazine H+, with the title Going Soft on Nanotech. In it I discuss what I mean by “Soft Machines”, and make some comments on the feasibility of some of Drexler’s proposals for radical nanotechnology. I also make some more general points about how I see the future of technology, and say something about the Transhumanist and Singularitarian movements.

Any visitors from H+ magazine wishing to find out more about my thoughts on K. Eric Drexler’s views on nanotechnology will find this recent post – Nanotechnology, K. Eric Drexler and me – a good starting point.

Nanotechnology, K. Eric Drexler and me

Next week – on the 26th March – I’m participating in a discussion event sponsored by the thinktank Policy Exchange at NESTA, in London. Also on the panel is K. Eric Drexler, the originator of the idea of nanotechnology in its most expansive form, as an emerging technology which, when fully developed, will have truly transformational effects. In this view, it will allow us to make pretty much any material, device or artefact for little or no cost; we will be able to extend human lifespans almost indefinitely using cell-by-cell surgery; and we will create computers so powerful that they will host artificial intelligences greatly superior to those of humans. Drexler has a new book coming out in May – Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization. I think this view overstates the potential of the technology, and (it shocks me to realise) I have been arguing this in some technical detail for nearly ten years. Although I have met Drexler, and corresponded with him, this is the first time I will have shared a platform with him. To mark this occasion I have gone through my blog’s archives to make this anthology of my writings about Drexler’s vision of nanotechnology and my arguments with some of its adherents (who should not, of course, automatically be assumed to speak for Drexler himself). Continue reading “Nanotechnology, K. Eric Drexler and me”

Feynman, Drexler, and the National Nanotechnology Initiative

It’s fifty years since Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom”, and this has been the signal for a number of articles reflecting on its significance. The lecture has achieved mythic importance in discussions of nanotechnology; to many, it is nothing less than the foundation of the field. This myth has been critically examined by Chris Toumey (see this earlier post), who finds that the significance now attached to the lecture is retrospective, rather than something that was apparent as serious efforts in nanotechnology got underway.

There’s another narrative, though, that is popular with followers of Eric Drexler. According to this story, Feynman laid out in his lecture a coherent vision of a radical new technology; Drexler popularised this vision and gave it the name “nanotechnology”. Then, inspired by Drexler’s vision, the US government launched the National Nanotechnology Initiative. This was then hijacked by chemists and materials scientists, whose work had nothing to do with the radical vision. In this way, funding which had been obtained on the basis of the expansive promises of “molecular manufacturing” – the Feynman vision as popularised by Drexler – has been used to research useful but essentially mundane products like stain-resistant trousers and germicidal washing machines. To add insult to injury, the materials scientists who had so successfully hijacked the funds then went on to belittle and ridicule Drexler and his theories. A recent article in the Wall Street Journal by Adam Keiper – “Feynman and the Futurists” – is written from this standpoint, and it is a piece that Drexler himself has expressed satisfaction with on his own blog. I think this account is misleading at almost every point; the reality is both more complex and more interesting.

To begin with, Feynman’s lecture didn’t present a coherent vision at all; instead it was an imaginative but disparate set of ideas linked only by the idea of control on a small scale. I discussed this in my article in the December issue of Nature Nanotechnology – Feynman’s unfinished business (subscription required), and for more details see this series of earlier posts on Soft Machines (Re-reading Feynman Part 1, Part 2, Part 3).

Of the ideas dealt with in “Plenty of Room”, some have already come to pass and have indeed proved economically and societally transformative. These include the idea of writing on very small scales, which underlies modern IT, and the idea of making layered materials with precisely controlled layer thicknesses on the atomic scale, which was realised in techniques like molecular beam epitaxy and chemical vapour deposition, whose results you see every time you use a white light emitting diode or a solid state laser of the kind your DVD player contains. I think there were two ideas in the lecture that did contribute to the vision popularised by Drexler – the idea of “a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on”, and, linked to this, the idea of doing chemical synthesis by physical processes. The latter idea has been realised at the proof-of-principle level by carrying out chemical reactions with a scanning tunnelling microscope; there’s been a lot of work in this direction since Don Eigler’s demonstration of STM control of single atoms, no doubt some of it funded by the much-maligned NNI, but I think it’s fair to say that so far this approach has turned out to be more technically difficult and less useful (on foreseeable timescales) than people anticipated.

Strangely, I think the second part of the fable, which talks about Drexler popularising the Feynman vision, actually underestimates the originality of Drexler’s own contribution. The arguments that Drexler made in support of his radical vision of nanotechnology drew extensively on biology, an area that Feynman had touched on only very superficially. What’s striking if one re-reads Drexler’s original PNAS article, and indeed Engines of Creation, is how biologically inspired the vision is – the models he looks to are the protein and nucleic acid based machines of cell biology, like the ribosome. In Drexler’s writing now (see, for example, this recent entry on his blog), this biological inspiration is very much to the fore; he’s looking to the DNA-based nanotechnology of Ned Seeman, Paul Rothemund and others as the exemplar of the way forward to fully functional, atomic-scale machines and devices. This work builds on the self-assembly paradigm that has been such a big part of academic work in nanotechnology around the world.

There’s an important missing link between the biological inspiration of ribosomes and molecular motors and the vision of “tiny factories” – the scaled-down mechanical engineering familiar from the simulations of atom-based cogs and gears from Drexler and his followers. What wasn’t fully recognised until after Drexler’s original work was that the fundamental operating principles of biological machines are quite different from the rules that govern macroscopic machines, simply because the way physics works in water at the nanoscale is quite different from the way it works in our familiar macroworld. I’ve argued at length on this blog, in my book “Soft Machines”, and elsewhere (see, for example, “Right and Wrong Lessons from Biology”) that this means the lessons one should draw from biological machines are rather different from the ones Drexler originally drew.
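To give a rough sense of the scaling arguments involved – a back-of-envelope sketch of my own, not taken from the post – consider a machine ten nanometres across moving through water at a speed typical of a molecular motor. Its Reynolds number is so small that viscosity completely dominates inertia, and the thermal energy kT is a few piconewton-nanometres, the same general scale as the forces and displacements of biological molecular machines, so Brownian motion cannot be ignored.

```python
# Back-of-envelope numbers for a nanoscale machine in water (values are round illustrative choices).

rho = 1.0e3      # density of water, kg/m^3
mu = 1.0e-3      # viscosity of water, Pa*s
size = 10e-9     # characteristic size, m (10 nm)
speed = 1e-6     # speed, m/s (~1 micron per second, roughly molecular-motor territory)

reynolds = rho * speed * size / mu
print(f"Reynolds number: {reynolds:.0e}")          # ~1e-8: inertia is utterly negligible

k_B, T = 1.38e-23, 300.0                           # Boltzmann constant (J/K), temperature (K)
kT_pN_nm = k_B * T * 1e21                          # 1 J = 1e21 pN*nm
print(f"Thermal energy kT: {kT_pN_nm:.1f} pN*nm")  # ~4 pN*nm: comparable to the piconewton
                                                   # forces and nanometre steps of biomotors
```

Considerations like these – not the details of this particular sum – are what lie behind the argument that scaled-down versions of macroscopic mechanisms are the wrong starting point for design at the nanoscale.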

There is one final point that’s worth making. From the perspective of Washington-based writers like Keiper, one can understand the focus on the interactions between academic scientists and business people in the USA, Drexler and his followers, and the machinations of the US Congress. But, from the point of view of the wider world, this is a rather parochial perspective. I’d estimate that somewhere between a quarter and a third of the nanotechnology in the world is being done in the USA. Perhaps for the first time in recent years, a major new technology is largely being developed outside the USA – in Europe to some extent, but with an unprecedented leading role being taken in places like China, Korea and Japan. In these places the “nanotech schism” that seems so important in the USA simply isn’t relevant; people are just pressing on to where the technology leads them.

Happy New Year

Here are a couple of nice nano-images for the New Year. The first depicts a nanoscale metal-oxide donut, whose synthesis is reported in a paper (abstract, subscription required for full article) in this week’s Science Magazine. The paper, whose first author is Haralampos Miras, comes from the group of Lee Cronin at the University of Glasgow. The object is made by templated self-assembly of molybdenum oxide units; the interesting feature here is that the cluster which templates the ring – the “hole” around which the donut forms – appears as a transient precursor during the process and is ejected from the ring once the ring is complete.

A molybdenum oxide nanowheel templated on a transient cluster. From Miras et al, Science 327 p 72 (2010).

The second image depicts the stages in reconstructing a high-resolution electron micrograph of a self-assembled tetrahedron made from DNA. In an earlier blog post I described how Russell Goodman, a grad student in the group of Andrew Turberfield at Oxford, was able to make rigid tetrahedra of DNA less than 10 nm in size. Now, in collaboration with Takayuki Kato and Keiichi Namba’s group at Osaka University, they have been able to obtain remarkable electron micrographs of these structures. The work was published last summer in an article in Nano Letters (subscription required). The figure shows, from left to right, the predicted structure, a raw micrograph obtained from cryo-TEM (transmission electron microscopy on frozen sections), a micrograph processed to enhance its contrast, and two three-dimensional image reconstructions obtained from a large number of such images. The sharpest image, on the right, is at a 12 Å resolution, and it is believed that this is the smallest object, natural or artificial, that has been imaged using cryo-TEM at this resolution, which is good enough to distinguish between the major and minor grooves of the DNA helices that form the struts of the tetrahedron. (A simple illustration of the signal-averaging principle behind reconstructing a structure from many noisy images follows after the figure.)

Cryo-TEM reconstruction of DNA tetrahedron, from Kato et al., Nano Letters, 9, p2747 (2009).
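As a toy illustration of why such a large number of images is needed – my own sketch of the general signal-averaging principle, not a description of the actual reconstruction procedure used in the paper – each individual low-dose cryo-TEM exposure is dominated by noise, and averaging N aligned images improves the signal-to-noise ratio only by roughly a factor of √N:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-dimensional "signal" standing in for the projected density of a particle.
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
noise_sigma = 5.0   # each simulated exposure is dominated by noise, as in low-dose cryo-TEM

for n_images in (1, 100, 10000):
    # Simulate n noisy, perfectly aligned exposures of the same particle and average them.
    exposures = signal + rng.normal(0.0, noise_sigma, size=(n_images, signal.size))
    residual_noise = np.std(exposures.mean(axis=0) - signal)
    print(f"{n_images:6d} images: residual noise ~ {residual_noise:.2f} "
          f"(expected ~ {noise_sigma / np.sqrt(n_images):.2f})")
```

The real reconstruction is of course far more involved – the images must be aligned and their orientations determined before they can be combined into a three-dimensional map – but the need to average away noise is why so many micrographs go into a single 12 Å structure.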

A happy New Year to all readers.