Does transhumanism matter?

The political scientist Francis Fukuyama once identified transhumanism as “the world’s most dangerous idea”. Perhaps a handful of bioconservatives share this view, but I suspect few others do. After all, transhumanism is hardly part of the mainstream. It has a few high-profile spokesmen, and it has its vociferous adherents on the internet, but that’s not unusual. The wealth, prominence, and technical credibility of some of its sympathisers – drawn from the elite of Silicon Valley – do, though, differentiate transhumanism from the general run of fringe movements. My own criticisms of transhumanism have focused on the technical shortcomings of some of the key elements of the belief package – especially molecular nanotechnology, and most recently the idea of mind uploading. I fear that my critique hasn’t achieved much purchase. To many observers with some sort of scientific background, even those who share some of my scepticism about the specifics, the worst one might say about transhumanism is that it is mostly harmless: perhaps over-exuberant in its claims and ambitions, but beneficial in that it promotes a positive image of science and technology.

But there is another critique of transhumanism, which emphasises not the distance between transhumanism’s claims and what is technologically plausible, as I have done, but the continuity between the way transhumanists talk about technology and the future and the way these issues are talked about in the mainstream. In this view, transhumanism matters, not so much for its strange ideological roots and shaky technical foundations, but because it illuminates some much more widely held, but pathological, beliefs about technology. The most persistent proponent of this critique is Dale Carrico, whose arguments are summarised in a recent article, Futurological Discourses and Posthuman Terrains (PDF). Although Carrico looks at transhumanism from a different perspective from mine – that of a rhetorician rather than an experimental scientist – I find his critique deserving of serious attention. For Carrico, transhumanism distorts the way we think about technology and contaminates the way we consider possible futures; far from being radical, it is actually profoundly conservative in the way it buttresses existing power structures.

Carrico’s starting point is to emphasise that there is no such thing as technology, and as such it makes no sense to talk about whether one is “for” or “against” technology. On this point, he is surely correct; as I’ve frequently written before, technology is not a single thing that is advancing at a single rate. There are many technologies: some are advancing fast, some are neglected and stagnating, some are going backwards. Nor does it make sense to say that technology is by itself good or bad; of the many technologies that exist or are possible, some are useful, some not. Or, to be more precise, some technologies may be useful to some groups of people and unhelpful to others, or their potential to be helpful may go unrealised because of the political and social circumstances we find ourselves in. Continue reading “Does transhumanism matter?”

Does radical innovation best get done by big firms or little ones?

A recent blogpost by the economist Diane Coyle quoted JK Galbraith as saying in 1952: “The modern industry of a few large firms is an excellent instrument for inducing technical change. It is admirably equipped for financing technical development and for putting it into use. The competition of the competitive world, by contrast, almost completely precludes technical development.” Coyle describes this as “complete nonsense”: “big firms tend to do incremental innovation, while radical innovation tends to come from small entrants.” This is certainly conventional wisdom now – but it needs to be challenged.

As a point of historical fact, what Galbraith wrote in 1952 was correct – the great, world-changing innovations of the postwar years were indeed the products, not of lone entrepreneurs, but of the giant R&D departments of big corporations. What is true is that in recent years we’ve seen radical innovations in IT which have arisen from small entrants, of which Google’s search algorithm is the best-known example. But we must remember two things. First, digital innovations like these don’t exist in isolation – they only have an impact because they can operate on a technological substrate which isn’t digital, but physical. The fast, small and powerful computers and the worldwide communications infrastructure that digital innovations rely on were developed, not in small start-ups, but in large, capital-intensive firms. Second, many of the innovations we urgently need – in areas like affordable low-carbon energy, grid-scale energy storage, and healthcare for ageing populations – will not be wholly digital in character. Technologies don’t all proceed at the same pace (as I discussed in an earlier post – Accelerating change or innovation stagnation). In focusing on the digital domain, in which small entrants can indeed achieve radical innovations (as well as some rather trivial ones), we’re in danger of failing to support innovation in the material and biological domains, which needs the long-term, well-resourced development efforts that only big organisations can mobilise. The outcome will be a further slowing of economic growth in the developed world, as innovation slows down and productivity growth stalls.

So what were the innovations that the sluggish big corporations of the post-war world delivered? Jet aircraft, antibiotics, oral contraceptives, transistors, microprocessors, Unix, optical fibre communications and mobile phones are just a few examples. Continue reading “Does radical innovation best get done by big firms or little ones?”

Growth, technological innovation, and the British productivity crisis

The biggest current issue in the UK’s economic situation is the continuing slump in productivity. It’s this poor productivity performance that underlies slow or no real wage growth, and that also contributes to disappointing government revenues and consequent slow progress in reducing the government deficit. Yet the causes of this poor productivity performance are barely discussed, let alone understood. In the long term, productivity growth is associated with innovation and technological progress – have we stopped being able to innovate? The ONS has recently released a set of statistics which potentially throw some light on the issue. These estimates of total factor productivity – productivity controlled for inputs of labour and capital – make clear the seriousness of the problem.

Figure: Total factor productivity relative to 1994, whole economy, ONS estimates.

Here are the figures for the whole economy. They show that, up to 2008, total factor productivity grew steadily at around 1% a year. Then it fell precipitously, losing more than a decade’s worth of growth, and it continues to fall. This means that each year since the financial crisis, on average, we have had to work harder or put in more capital to achieve the same level of economic output. A simple-minded interpretation would be that, rather than seeing technological progress reflected in economic growth, we’re going backwards – technologically regressing – and the only economic growth we’re seeing comes from a larger population working longer hours.
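To make “controlled for inputs of labour and capital” concrete, here is a minimal sketch of how total factor productivity growth is conventionally estimated as a Solow residual. The Cobb-Douglas production function, the 0.3 capital share and all the growth figures below are standard textbook assumptions chosen for illustration – they are not the ONS data or the ONS methodology.

```python
# Minimal sketch: TFP growth estimated as a Solow residual.
# Assumes a Cobb-Douglas production function Y = A * K^alpha * L^(1 - alpha),
# so that  dlnA = dlnY - alpha * dlnK - (1 - alpha) * dlnL.
# All numbers below are illustrative, not ONS data.

import math

alpha = 0.3             # assumed capital share of income (textbook value)

# Hypothetical year-on-year changes: output, capital services, labour input
growth_output = 0.020   # GDP up 2.0%
growth_capital = 0.025  # capital services up 2.5%
growth_labour = 0.015   # hours worked up 1.5%

# Growth accounting is usually done in log differences
dlnY = math.log(1 + growth_output)
dlnK = math.log(1 + growth_capital)
dlnL = math.log(1 + growth_labour)

dlnA = dlnY - alpha * dlnK - (1 - alpha) * dlnL
print(f"Implied TFP growth: {100 * dlnA:.2f}% per year")
# With these illustrative inputs, TFP growth comes out at roughly 0.2% a year:
# most of the measured output growth is accounted for by extra inputs rather
# than by producing more from the same inputs.
```

On this accounting, the post-2008 pattern in the ONS series corresponds to a negative residual: inputs of labour and capital growing faster than output.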

Of course, things are more complicated than this. Many different sectors contribute to the economy – in some, we see substantial innovation and technological progress, while in others the situation is not so good. It’s the overall shape of the economy, the balance between growing and stagnating sectors, that contributes to the whole picture. The ONS figures do begin to break down total factor productivity growth into different sectors, and this begins to give some real insight into what’s wrong with the UK’s economy and what needs to be done to right it. Before I come to those details, I need to say something more about what’s being estimated here.

Where does sustainable, long-term economic growth come from? Continue reading “Growth, technological innovation, and the British productivity crisis”

Science, Politics, and the Haldane Principle

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has, not just a right, but a duty, to set strategic directions and priorities for the science it funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, about what are by definition the uncertain outcomes of research.

The key point to recognize about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition. Continue reading “Science, Politics, and the Haldane Principle”

Responsible innovation and irresponsible stagnation

This long blogpost is based on a lecture I gave at UCL a couple of weeks ago, for which you can download the overheads here. It’s a bit of a rough cut but I wanted to write it down while it was fresh in my mind.

People talk about innovation now in two contradictory ways. The prevailing view is that innovation is accelerating. In everyday life, the speed with which our electronic gadgets become outdated seems to provide supporting evidence for this view, which, taken to the extreme, leads to the claim of Kurzweil and his followers that we are approaching a technological singularity. Rapid technological change always brings losers, as well as unanticipated and unwelcome consequences. The question then is whether it is possible to innovate in a way that minimises these downsides – in a way that’s responsible. But there’s another narrative about innovation that’s gaining traction, prompted by the dismally poor economic growth performance of the developed economies since the 2008 financial crisis. In this view – perhaps most cogently expressed by the economist Tyler Cowen – slow economic growth reflects a slow-down in technological innovation: a Great Stagnation. A slow-down in the rate of technological change may reassure conservatives worried about the downsides of rapid innovation. But we need technological innovation to help us overcome our many problems, many of them caused in the first place by the unforeseen consequences of earlier waves of innovation. So our failure to innovate may itself be irresponsible.

What irresponsible innovation looks like

What could we mean by irresponsible innovation? We all have our abiding cultural image of a mad scientist in a dungeon laboratory recklessly pursuing some demonic experiment with a world-consuming outcome. In nanotechnology, the idea of grey goo undoubtedly plays into this archetype. What if a scientist were to succeed in making self-replicating nanobots, which, on escaping the confines of the laboratory, proceeded to consume the entire substance of the earth’s biosphere as they reproduced, ending human and all other life on earth for ever? I think we can all agree that this outcome would be not wholly desirable, and that its perpetrators might fairly be accused of irresponsibility. But we should also ask ourselves how likely such a scenario is. I think it is very unlikely in the coming decades, which leaves me with questions about whose purposes are served by this kind of existential risk discourse.

We should worry about the more immediate implications of genetic modification and synthetic biology, for example in their potential to make existing pathogens more dangerous, to recreate historical pathogenic strains, or even to create entirely new ones. Continue reading “Responsible innovation and irresponsible stagnation”

Lecture on responsible innovation and the irresponsibility of not innovating

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation within a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost but in the meantime, here is a PDF of my slides.

Your mind will not be uploaded

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme that, in the future, could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh-and-blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges of mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the level of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike an artificial computer, the brain has no clean digital abstraction layer; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher-level structure like a neuron or a synapse; molecular-level information processing evolved very early in the history of life. Living organisms sense their environment, and they react to what they are sensing by changing the way they behave and, if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally, I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have for the simulation of consciousness.
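To give a sense of the scale involved, here is a rough back-of-the-envelope estimate. The neuron and synapse counts are standard order-of-magnitude figures; the per-synapse molecular state, the update rate and the cost per update are my own illustrative assumptions, chosen only to show how quickly a molecular-level simulation outruns even exascale computing.

```python
# Back-of-the-envelope scale estimate for a molecular-level brain simulation.
# Every figure here is an order-of-magnitude assumption for illustration only.

NEURONS = 1e11                # ~100 billion neurons (standard estimate)
SYNAPSES_PER_NEURON = 1e4     # ~10,000 synapses per neuron (standard estimate)

# Assumed extra detail needed if the fundamental unit is the molecule rather
# than the synapse: molecular state variables tracked per synapse, how often
# they must be updated, and the arithmetic cost of each update.
STATE_VARS_PER_SYNAPSE = 1e4  # assumption: receptors, kinases, scaffold proteins...
UPDATES_PER_SECOND = 1e3      # assumption: millisecond-scale biochemistry
FLOPS_PER_UPDATE = 10         # assumption: a few operations per state update

synapses = NEURONS * SYNAPSES_PER_NEURON
flops_needed = synapses * STATE_VARS_PER_SYNAPSE * UPDATES_PER_SECOND * FLOPS_PER_UPDATE

EXASCALE = 1e18               # an exascale machine: 10^18 floating point ops per second

print(f"Synapses: {synapses:.0e}")
print(f"Throughput for real-time molecular-level simulation: {flops_needed:.0e} FLOP/s")
print(f"Equivalent to {flops_needed / EXASCALE:.0e} exascale machines")
# With these (deliberately modest) assumptions the requirement is ~1e23 FLOP/s,
# roughly 100,000 exascale machines running flat out just to keep pace with real time.
```

Even on assumptions that are arguably generous to the would-be uploader, a real-time molecular-level simulation comes out several orders of magnitude beyond an exascale machine – which is the sense in which the necessary scale looks out of reach of any currently foreseeable advance in computing power.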

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it. Continue reading “Your mind will not be uploaded”

Transhumanism has never been modern

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and in the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the Middle Ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature. Continue reading “Transhumanism has never been modern”

Rebuilding the UK’s innovation economy

The UK’s innovation system is currently under-performing; the amount of resource devoted to private sector R&D has been too low compared to our competitors for many years, and the situation shows no sign of improving. My last post discussed the changes in the UK economy that have led us to this situation, which contributes to its deep-seated problems of very poor productivity performance and persistent current account deficits. What can we do to improve things? Here I suggest three steps.

1. Stop making things worse.
Firstly, we should recognise the damage that has been done to the country’s innovative capacity by the structural shortcomings of our economy and stop making things worse. R&D capacity – including private sector R&D – is a national asset, and we should try to correct the perverse incentives that lead to its destruction. Continue reading “Rebuilding the UK’s innovation economy”