On Singularities, mathematical and metaphorical

June 20th, 2015

Transhumanists look forward to a technological singularity, which we should expect to take place on or around 2045, if Ray Kurzweil is to be relied on. The technological singularity is described as something akin to an event horizon, a date beyond which technological growth becomes so rapid that what lies on the other side is quite unknowable to us mere cis-humans. In some versions this is correlated with the time when, due to the inexorable advance of Moore’s Law, machine intelligence surpasses human intelligence and enters a recursive cycle of self-improvement.

The original idea of the technological singularity is usually credited to the science fiction writer Vernor Vinge, though earlier antecedents can be found, for example in the writing of the British Marxist scientist J.D. Bernal. Even amongst transhumanists and singularitarians there are different views about what might be meant by the singularity, but I don’t want to explore those here. Instead, I note this – when we talk of the technological singularity we’re using a metaphor, a metaphor borrowed from mathematics and physics. It’s the Singularity as a metaphor that I want to probe in this post.

A real singularity occurs in a mathematical function when, for some value of the argument, the result of the function is undefined. A function like 1/(t-t0) takes ever larger values as t approaches t0, diverging at t=t0 itself, where it is undefined. Kurzweil’s thinking about technological advance revolves around the idea of exponential growth, as exemplified by Moore’s Law, so it’s worth making the obvious point that an exponential function doesn’t have a singularity. An exponentially growing function – exp(t/T) – certainly gets larger as t gets larger, and indeed the absolute rate of increase goes up too, but this function never becomes infinite for any finite t.
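The distinction is easy to check numerically. A minimal sketch (the function names and parameter values are mine, purely for illustration):

```python
import math

t0, T = 10.0, 2.0

def reciprocal(t):
    """1/(t0 - t): diverges as t approaches t0, undefined at t = t0."""
    return 1.0 / (t0 - t)

def exponential(t):
    """exp(t/T): grows fast, but stays finite for every finite t."""
    return math.exp(t / T)

for t in [9.0, 9.9, 9.99, 9.999]:
    print(f"t={t}: 1/(t0-t) = {reciprocal(t):8.1f}   exp(t/T) = {exponential(t):6.1f}")
# The reciprocal shoots from 1 to 1000 as t closes in on t0 = 10,
# while the exponential only creeps from about 90 to about 148:
# rapid growth, but no singularity at any finite t.
```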

An exponential function is, of course, what you get when you have a constant fractional growth rate – if you charge your engineers to make your machine or device 20% better every year, for as long as they are successful in meeting their annual target you will get exponential growth. To get a technological singularity from a Moore’s law-like acceleration of technology, the fractional rate of technological improvement must itself be increasing in time (let me leave aside for the moment my often expressed conviction that technology isn’t a single thing, and that it makes no sense at all to imagine that there’s some simple scalar variable that can be used to describe “technological progress” in general).
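To make that concrete: a toy growth law whose fractional rate rises with the level already achieved, such as dx/dt = a·x², does blow up at a finite time, unlike any exponential. A minimal sketch (the function name and parameter values are mine, purely illustrative):

```python
import math

def hyperbolic(t, x0=1.0, a=1.0):
    """Exact solution of dx/dt = a*x**2.  The fractional growth rate,
    (dx/dt)/x = a*x, rises with x itself, so growth outruns any
    exponential and x diverges at the finite time t* = 1/(a*x0)."""
    return x0 / (1.0 - a * x0 * t)

# Compare with constant fractional growth (plain exponential, rate 100% per unit time):
for t in [0.9, 0.99, 0.999]:
    print(f"t={t}: exponential = {math.exp(t):5.2f}   hyperbolic = {hyperbolic(t):8.1f}")
# The exponential creeps from 2.46 to 2.72; the hyperbolic shoots
# 10 -> 100 -> 1000, diverging as t approaches t* = 1.
```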

It isn’t totally implausible that something like this should happen – after all, we use technology to develop more technology. Faster computers should help us design more powerful microprocessors. On the other hand, as the components of our microprocessors shrink, the technical problems we have to overcome to develop the technology themselves grow more intractable. The question is, do our more powerful tools outstrip the greater difficulty of our outstanding tasks? The past has certainly seen periods in which the rate of technological progress accelerated, due to the recursive, self-reinforcing effects of technological and social innovation. This is one way of reading the history of the first industrial revolution, of course – but the industrial revolution wasn’t a singularity, because the increase in the rate of change wasn’t sustained; it merely settled at a higher value. What isn’t at all clear is whether what is happening now corresponds even to a one-off increase in the rate of change, let alone the sustained and limitless increase in rate of change that is needed to produce a mathematical singularity. The hope or fear of singularitarians is that this is about to change through the development of true artificial intelligence. We shall see.

Singularities occur in physics too. Or, to be more precise, they occur in the theories that physicists use. When we ask physics to calculate the self-energy of an electron, say, or the structure of space-time at the centre of a black hole, we end up with mathematical bad behaviour, singularities in the mathematics of the theories we are using. Does this mathematical bad behaviour correspond to bad behaviour in the physical world, or is it simply alerting us to the shortcomings of our understanding of that physical world? Do we really see infinity in the singularity – or is it just a signal to say we need different physics? Some argue it’s the latter, and here’s an example from my own field to illustrate why one might think that.

The great physicist Sam Edwards (who died a month ago) made his name and founded the branch of physics I’ve worked in, by realising that you could describe the statistical mechanics of polymer molecules with a theory that had the formal structure of the quantum field theories he himself learnt as a postdoc with Julian Schwinger.

Like those quantum field theories, Edwards’s theories of macromolecules produce some inconvenient, and unphysical, infinities that one has to work around. To Edwards, this was not a worry at all – as he was quoted as saying, “I know there are atoms down there, but I don’t care”. Edwards’s theories treated polymer molecules as wiggly worms that are wiggly on all scales, no matter how small. This works brilliantly if you want to know what’s happening on scales larger than the size of individual atoms, but it’s the existence of those very atoms that means the polymer isn’t wiggly all the way down, as it were. So we don’t worry that the theory doesn’t work at scales smaller than atoms, and we know what the different physics is that we’d need to use to understand behaviour on those scales. In the quantum field theories that describe electrons and other sub-atomic particles, one might suspect that there’s some similar graininess that intervenes to save us from the bad mathematical behaviour of our theories, but we don’t yet know what new kind of theory might be needed below the Planck scale, where we think the graininess might set in.

The most notorious singularities in physics are the ones that are predicted to occur in the middle of black holes – here it is the equations of general relativity that predict divergent behaviour in the structure of space-time itself. But like other singularities in physics, what the mathematical singularity is signalling to us is that near the singularity we have different physics, physics that we don’t yet understand. In this case the unknown is the physics of quantum gravity, where quantum mechanics meets general relativity. The singularity at the centre of a black hole is a double mystery: not only do we not understand what the new physics might be, but the phenomena of this physical singularity are literally unobservable, hidden by the event horizon which prevents us from seeing inside the black hole. The new physics beyond the Planck scale is unobservable too, but for a different, less fundamental reason – the particle accelerators we’d need to probe it would have to be unfeasibly huge in scale and energy, far beyond anything attainable under our current earth-bound constraints. Is it always a given that physical singularities are unobservable? Naked singularities are difficult to imagine, but don’t seem to be completely ruled out.

The biggest singularity in physics of all is the one where we think it all began – the Big Bang, a singularity in time that we cannot see through, just as the end of the universe in a big crunch would provide a singularity in time that we cannot see beyond. Now we enter the territory of thinking about the creation of the universe and the ultimate end of the world, which of course have long been rich themes for religious speculation. This connects us back to the conception of a technologically driven singularity in human history, as a discontinuity in the quality of human experience and the character of human nature. I’ve already argued at length that this conception of the technological singularity is a metaphor that owes a great deal to these religious forebears.

So here we’re back at the metaphorical singularity – and perhaps metaphors are best left to creative writers. If we want a profound treatment of the metaphors of singularity, we should look, not to futurists, but to science fiction. I know of no more thought-provoking treatment of singularities and the singularity than that of M. John Harrison in his brilliant trilogy, “Light”, “Nova Swing” and “Empty Space”.

At the astrophysical centre of the trilogy is a vast, naked singularity. Bits of this drop off onto nearby planets, leading to ragged borders beyond which things are familiar but weirdly distorted, a ragged edge across which one can with some risk move back and forth, and which is crossed and recrossed by herds of inscrutable cats. The human narrative crosses back and forth between a near-present and a further future which feels very much post-singularity. This future is characterised by routine faster-than-light travel; “shadow operators” – disembodied pieces of code which find unexplained, nanobot-like substrates to run on; and radical and cheap genetic engineering leading to widespread, wholesale (and indeed retail) human modification. There is a fully realised nano-medicine, and widely available direct brain interfaces, one application of which turns humans into the cyborg controllers of the highest performing faster-than-light spaceships. And yet the motivations that persuade a young girl to sign up to this irreversible transformation seem all too recognisable, and indeed the familiarity of this post-singularity world seems all too plausible.

Beyond the singularities, beyond the space opera setting and Harrison’s brilliant and stylish writing, the core of the trilogy concerns the ways people construct, and reconstruct, and sometimes fabricate, their own identities. It’s this theme that is claimed by transhumanism, but it’s one that seems to me to be very much more universal than that.

Does transhumanism matter?

April 7th, 2015

The political scientist Francis Fukuyama once identified transhumanism as the “world’s most dangerous idea”. Perhaps a handful of bioconservatives share this view, but I suspect few others do. After all, transhumanism is hardly part of the mainstream. It has a few high profile spokesmen, and it has its vociferous adherents on the internet, but that’s not unusual. The wealth, prominence, and technical credibility of some of its sympathisers – drawn from the elite of Silicon Valley – does, though, differentiate transhumanism from the general run of fringe movements. My own criticisms of transhumanism have focused on the technical shortcomings of some of the key elements of the belief package – especially molecular nanotechnology, and most recently the idea of mind uploading. I fear that my critique hasn’t achieved much purchase. To many observers with some sort of scientific background, even those who share some of my scepticism of the specifics, the worst one might say about transhumanism is that it is mostly harmless, perhaps over-exuberant in its claims and ambitions, but beneficial in that it promotes a positive image of science and technology.

But there is another critique of transhumanism, which emphasises not the distance between transhumanism’s claims and what is technologically plausible, as I have done, but the continuity between the way transhumanists talk about technology and the future and the way these issues are talked about in the mainstream. In this view, transhumanism matters, not so much for its strange ideological roots and shaky technical foundations, but because it illuminates some much more widely held, but pathological, beliefs about technology. The most persistent proponent of this critique is Dale Carrico, whose arguments are summarised in a recent article, Futurological Discourses and Posthuman Terrains (PDF). Although Carrico looks at transhumanism from a different perspective from me, the perspective of a rhetorician rather than an experimental scientist, I find his critique deserving of serious attention. For Carrico, transhumanism distorts the way we think about technology, it contaminates the way we consider possible futures, and rather than being radical it is actually profoundly conservative in the way in which it buttresses existing power structures.

Carrico’s starting point is to emphasise that there is no such thing as technology, and as such it makes no sense to talk about whether one is “for” or “against” technology. On this point, he is surely correct; as I’ve frequently written before, technology is not a single thing that is advancing at a single rate. There are many technologies, some are advancing fast, some are neglected and stagnating, some are going backwards. Nor does it make sense to say that technology is by itself good or bad; of the many technologies that exist or are possible, some are useful, some not. Or to be more precise, some technologies may be useful to some groups of people, they may be unhelpful to other groups of people, or their potential to be helpful to some people may not be realised because of the political and social circumstances we find ourselves in.

Does radical innovation best get done by big firms or little ones?

March 5th, 2015

A recent blogpost by the economist Diane Coyle quoted JK Galbraith as saying in 1952: “The modern industry of a few large firms is an excellent instrument for inducing technical change. It is admirably equipped for financing technical development and for putting it into use. The competition of the competitive world, by contrast, almost completely precludes technical development.” Coyle describes this as “complete nonsense” – “big firms tend to do incremental innovation, while radical innovation tends to come from small entrants.” This is certainly conventional wisdom now – but it needs to be challenged.

As a point of historical fact, what Galbraith wrote in 1952 was correct – the great, world-changing innovations of the postwar years were indeed the products, not of lone entrepreneurs, but of the giant R&D departments of big corporations. What is true is that in recent years we’ve seen radical innovations in IT which have arisen from small entrants, of which Google’s search algorithm is the best known example. But we must remember two things. Digital innovations like these don’t exist in isolation – they only have an impact because they can operate on a technological substrate which isn’t digital, but physical. The fast, small and powerful computers and the world-wide communications infrastructure that digital innovations rely on were developed, not in small start-ups, but in large, capital intensive firms. And many of the innovations we urgently need – in areas like affordable low carbon energy, grid-scale energy storage, and healthcare for ageing populations – will not be wholly digital in character. Technologies don’t all proceed at the same pace (as I discussed in an earlier post – Accelerating change or innovation stagnation). In focusing on the digital domain, in which small entrants can indeed achieve radical innovations (as well as some rather trivial ones), we’re in danger of failing to support the innovation in the material and biological domains, which needs the long-term, well-resourced development efforts that only big organisations can mobilise. The outcome will be a further slowing of economic growth in the developed world, as innovation slows down and productivity growth stalls.

So what were the innovations that the sluggish big corporations of the post-war world delivered? Jet aircraft, antibiotics, oral contraceptives, transistors, microprocessors, Unix, optical fibre communications and mobile phones are just a few examples.

Growth, technological innovation, and the British productivity crisis

January 28th, 2015

The biggest current issue in the UK’s economic situation is the continuing slump in productivity. It’s this poor productivity performance that underlies slow or no real wage growth, and that also contributes to disappointing government revenues and consequent slow progress reducing the government deficit. Yet the causes of this poor productivity performance are barely discussed, let alone understood. In the long-term, productivity growth is associated with innovation and technological progress – have we stopped being able to innovate? The ONS has recently released a set of statistics which potentially throw some light on the issue. These estimates of total factor productivity – productivity controlled for inputs of labour and capital – make clear the seriousness of the problem.

Total factor productivity relative to 1994, whole economy, ONS estimates

Here are the figures for the whole economy. They show that, up to 2008, total factor productivity grew steadily at around 1% a year. Then it fell precipitously, losing more than a decade’s worth of growth, and it continues to fall. This means that each year since the financial crisis, on average we have had to work harder or put in more capital to achieve the same level of economic output. A simple-minded interpretation of this would be that, rather than seeing technological progress reflected in economic growth, we’re going backwards, technologically regressing, and the only economic growth we’re seeing comes from a larger population working longer hours.

Of course, things are more complicated than this. Many different sectors contribute to the economy – in some, we see substantial innovation and technological progress, while in others the situation is not so good. It’s the overall shape of the economy, the balance between growing and stagnating sectors, that contributes to the whole picture. The ONS figures do begin to break down total factor productivity growth into different sectors, and this begins to give some real insight into what’s wrong with the UK’s economy and what needs to be done to right it. Before I come to those details, I need to say something more about what’s being estimated here.
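“Controlled for inputs of labour and capital” means, in practice, a growth-accounting residual: TFP growth is what remains of output growth after subtracting the share-weighted growth of capital and labour inputs. A minimal sketch (the growth rates and the 0.33 capital share are illustrative conventions of this approach, not the ONS’s actual figures):

```python
def tfp_growth(output_growth, capital_growth, labour_growth, capital_share=0.33):
    """Solow residual: the part of output growth not explained by
    input growth.  All growth rates are fractions (0.02 means 2%)."""
    return (output_growth
            - capital_share * capital_growth
            - (1.0 - capital_share) * labour_growth)

# Illustrative numbers: 2% output growth, 1.5% capital growth, 1% labour growth
g = tfp_growth(0.02, 0.015, 0.01)
print(f"implied TFP growth: {g:.4%}")  # roughly 0.8% a year
```

A negative residual, as in the post-2008 UK figures, means output grew more slowly than the inputs alone would predict.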

Where does sustainable, long term economic growth come from?

Science, Politics, and the Haldane Principle

January 5th, 2015

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has, not just a right, but a duty, to set strategic directions and priorities for the science the government funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, for what by definition are the uncertain outcomes of research.

The key point to recognize about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition.

Responsible innovation and irresponsible stagnation

November 16th, 2014

This long blogpost is based on a lecture I gave at UCL a couple of weeks ago, for which you can download the overheads here. It’s a bit of a rough cut but I wanted to write it down while it was fresh in my mind.

People talk about innovation now in two, contradictory, ways. The prevailing view is that innovation is accelerating. In everyday life, the speed with which our electronic gadgets become outdated seems to provide supporting evidence for this view, which, taken to the extreme, leads to the view of Kurzweil and his followers that we are approaching a technological singularity. Rapid technological change always brings losers as well as unanticipated and unwelcome consequences. The question then is whether it is possible to innovate in a way that minimises these downsides, in a way that’s responsible. But there’s another narrative about innovation that’s gaining traction, prompted by the dismally poor economic growth performance of the developed economies since the 2008 financial crisis. In this view – perhaps most cogently expressed by the economist Tyler Cowen – slow economic growth reflects a slow-down in technological innovation – a Great Stagnation. A slow-down in the rate of technological change may reassure conservatives worried about the downsides of rapid innovation. But we need technological innovation to help us overcome our many problems, many of them caused in the first place by the unforeseen consequences of earlier waves of innovation. So our failure to innovate may itself be irresponsible.

What irresponsible innovation looks like

What could we mean by irresponsible innovation? We all have our abiding cultural image of a mad scientist in a dungeon laboratory recklessly pursuing some demonic experiment with a world-consuming outcome. In nanotechnology, the idea of grey goo undoubtedly plays into this archetype. What if a scientist were to succeed in making self-replicating nanobots, which on escaping the confines of the laboratory proceeded to consume the entire substance of the earth’s biosphere as they reproduced, ending human and all other life on earth for ever? I think we can all agree that this outcome would be not wholly desirable, and that its perpetrators might fairly be accused of irresponsibility. But we should also ask ourselves how likely such a scenario is. I think it is very unlikely in the coming decades, which leaves for me questions about whose purposes are served by this kind of existential risk discourse.

We should worry about the more immediate implications of genetic modification and synthetic biology, for example in their potential to make existing pathogens more dangerous, to recreate historical pathogenic strains, or even to create entirely new ones.

What the UK government should do about science and innovation

November 12th, 2014

I have a new post up at the Sheffield Political Economy Research Institute’s blog – Rebuilding the UK’s innovation economy. It’s a more tightly edited version of my earlier post on Soft Machines with the same title.

Lecture on responsible innovation and the irresponsibility of not innovating

November 4th, 2014

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation within a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost but in the meantime, here is a PDF of my slides.

Your mind will not be uploaded

September 14th, 2014

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme, that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have for the simulation of consciousness.
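The scale argument can be put in rough numbers. A back-of-envelope sketch (every count below is an order-of-magnitude guess for illustration, not a measurement):

```python
neurons = 1e11                 # the "100 billion or so" neurons mentioned above
synapses_per_neuron = 1e4      # a commonly quoted order of magnitude
synapses = neurons * synapses_per_neuron          # ~1e15 connections to map

# A cellular-level model tracks a handful of state variables per synapse.
# A molecular-level one must track the signalling molecules at each
# synapse; take thousands per synapse as a conservative guess.
molecules_per_synapse = 1e4
molecular_states = synapses * molecules_per_synapse   # ~1e19 state variables

print(f"synapses to map: {synapses:.0e}")
print(f"molecular-scale state variables: {molecular_states:.0e}")
```

The point is not the exact numbers but the gap: moving from the cellular to the molecular scale multiplies the state space by orders of magnitude, before the dynamics of any of those states have been simulated at all.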

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it.

Transhumanism has never been modern

August 24th, 2014

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet, their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and in the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the middle ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature.