Where the randomness comes from

For perhaps 200 years it was possible to believe that physics gave a picture of the world with no place for randomness. Newton’s laws prescribe a picture of nature that is completely deterministic – at any time, the future is completely specified by the present. For anyone attached to the idea that they have some control over their destiny, that the choices they make have any influence on what happens to them, this seems problematic. Yet the idea of strict physical determinism, the idea that free will is an illusion in a world in which the future is completely predestined by the laws of physics, remains strangely persistent, despite the fact that it isn’t (I believe) supported by our current scientific understanding.

The mechanistic picture of a deterministic universe received a blow with the advent of quantum mechanics, which seems to introduce an element of randomness to the picture – in the act of “measurement”, the state function of a quantum system discontinuously changes according to a law which is probabilistic rather than deterministic. And when we look at the nanoscale world, at least at the level of phenomenology, randomness is ever-present, summed up in the phenomenon of Brownian motion, and leading inescapably to the second law of thermodynamics. And, of course, if we are talking about human decisions (should we go outside in the rain, or have another cup of tea?) the physical events in the brain that initiate the process of us opening the door or putting the kettle on again are strongly subject to this randomness; those physical events, molecules diffusing across synapses, receptor molecules changing shape in response to interactions with signalling molecules, shock waves of potential running up membranes as voltage-gated pores in the membrane open and close, all take place in that warm, wet, nanoscale domain in which Brownian motion dominates and the dynamics is described by Langevin equations, complete with their built-in fluctuating forces. Is this randomness real, or just an appearance? Where does it come from?
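To be concrete about what those Langevin equations look like: in their simplest, textbook one-dimensional form (not specific to any particular system; the symbols are the standard ones, with γ a friction coefficient and ξ(t) the random force), they are just Newton’s second law supplemented by a viscous drag and a fluctuating force whose strength is set by the temperature through the fluctuation–dissipation theorem:

```latex
% Minimal textbook form of the Langevin equation: viscous drag plus a fluctuating
% force whose correlations are fixed by temperature (fluctuation-dissipation theorem).
m\frac{d^{2}x}{dt^{2}} = -\gamma\frac{dx}{dt} + \xi(t),
\qquad
\langle \xi(t) \rangle = 0,
\qquad
\langle \xi(t)\,\xi(t') \rangle = 2\gamma k_{B}T\,\delta(t-t')
```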

I suspect the answer to this question, although well-understood, is not necessarily widely appreciated. It is real randomness – not just the appearance of randomness that follows from the application of deterministic laws in circumstances too complex to model – and its ultimate origin is indeed in the indeterminism of quantum mechanics. To understand how the randomness of the quantum realm gets transmitted into the Brownian world, we need to remember first that the laws of classical, Newtonian physics are deterministic, but only just. If we imagine a set of particles interacting with each other through well-known forces, defined through potentials of the kind you might use in a molecular dynamics simulation, the way in which the system evolves in time is in principle completely determined, but in practice any small perturbation to the deterministic laws (such as a rounding error in a computer simulation) will have an effect which grows with time to widen the range of possible outcomes that the system will explore, a widening that macroscopically we’d interpret as an increase in the entropy of the system.
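To illustrate how quickly such a perturbation swamps the deterministic prediction, here is a toy sketch. It uses the chaotic logistic map as a stand-in for a real molecular dynamics simulation, purely because it fits in a few lines; two trajectories that start one part in a trillion apart end up with nothing in common after a few dozen steps:

```python
# Toy illustration (not a molecular dynamics simulation): two trajectories of the
# deterministic logistic map, started a tiny perturbation apart, separate rapidly.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12   # identical apart from a perturbation of one part in ~10^12
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: separation = {abs(x - y):.3e}")
# The separation grows roughly exponentially until it is of order one, at which
# point the two 'deterministic' histories have nothing in common.
```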

To understand where, physically, this perturbation might come from we have to ask where the forces between molecules originate, as they interact and bounce off each other. One ubiquitous force in the nanoscale world is known to chemists as the Van der Waals force. In elementary physics and chemistry, this is explained as a force that arises between two neutral objects when a randomly arising dipole in one object induces an opposite dipole in the other object, and the two dipoles then attract each other. Another, perhaps deeper, way of thinking about this force is due to the physicists Casimir and Lifshitz, who showed that it arises from the way objects modify the quantum fluctuations that are always present in the vacuum – the photons that come in and out of existence even in the emptiest of empty spaces. This way of thinking about the Van der Waals force makes clear that because the force arises from the quantum fluctuations of the vacuum, the force must itself be fluctuating – it has an intrinsic randomness that is sufficient to explain the randomness we observe in the nanoscale world.
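For reference, the standard textbook expressions (not specific to the argument here) show both faces of this fluctuation-induced interaction: the non-retarded van der Waals attraction between two neutral molecules falls off as the sixth power of their separation, while Casimir’s result for the force per unit area between two parallel conducting plates a distance d apart is:

```latex
% Standard textbook results: van der Waals pair interaction between neutral
% molecules, and the Casimir force per unit area between parallel conducting plates.
U_{\mathrm{vdW}}(r) = -\frac{C}{r^{6}},
\qquad
\frac{F_{\mathrm{Casimir}}}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}}
```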

So, to return to the question of whether free will is compatible with physical determinism, we can now see that this is not an interesting question, because the rules that govern the operation of the brain are fundamentally not deterministic. The question of how free will might emerge from a non-deterministic, stochastic system isn’t a trivial one either, but at least it starts from the right premise – we can say categorically that strict physical determinism, as applied to the operation of the brain, is false. The brain is not a deterministic system, but one in whose operation randomness is central and inescapable.

One might go on to ask why some people are so keen to hold on to the idea of strict physical determinism, more than a hundred years after the discoveries of quantum mechanics and statistical mechanics that make determinism untenable. This is too big a question for me to even attempt to answer here, but maybe it’s worth pointing out that there seems to be quite a lot of determinism around – in addition to physical determinism, genetic determinism and technological determinism seem to be attractive to many people at the moment. Of course, the rise of the Newtonian mechanistic world-view occurred at a time when a discussion about the relationship between free will and a theological kind of determinism was very current in Christian Europe, and I’m tempted to wonder whether the appeal of these modern determinisms might be part of the lingering legacy of Augustine of Hippo and Calvin to the modern age.

Science in hard times

How should the hard economic times we’re going through affect the amount of money governments spend on scientific and technological research? The answer depends on your starting point – if you think that science is an optional extra that we do if we’re prosperous, then decreasing prosperity must inevitably mean we can afford to do less science. But if you think that our prosperity depends on the science we do, then if growth is starting to stall, that’s a signal telling you to devote more resources to research. This is a huge oversimplification, of course; the link between science and prosperity can never be automatic. How effective that link will be will depend on the type of science and technology you support, and on the nature of the wider economic system that translates innovations into economic growth. It’s worth taking a look at recent economic history to see some of the issues at play.

[Figure: UK government spending on research and development compared with real growth in per capita GDP. R&D data (red) from the Royal Society report The Scientific Century, adjusted to constant 2005 £s; GDP per person data (blue) from Measuring Worth; dotted blue line shows projections from the November 2011 forecast of the UK Office for Budget Responsibility (uncorrected for population changes).]

The graph shows the real GDP per person in the UK from 1946 up to the present, together with the amount of money, again in real terms, spent by the government on research and development. The GDP graph tells an interesting story in itself, making very clear the discontinuity in economic policy that happened in 1979. In that year Margaret Thatcher’s new Conservative government overthrew a thirty-year broad consensus, shared by both parties, on how the economy should be managed. Before 1979, we had a mixed economy, with substantial industrial sectors under state control, highly regulated financial markets, including controls on the flow of capital in and out of the country, and the macro-economy governed by the principles of Keynesian demand management. After 1979, it was not Keynes, but Hayek, who supplied the intellectual underpinning, and we saw progressive privatisation of those parts of the economy under state control, the abolition of controls on capital movements and deregulation of financial markets. In terms of economic growth, measured in real GDP per person, the period between 1946 and 1979 was remarkable, with a steady increase of 2.26% per year – this is, I think, the longest sustained period of high growth in the modern era. Since 1979, we’ve seen a succession of deep recessions, followed by periods of rapid, and evidently unsustainable, growth driven by asset price bubbles. The peaks of these periods of growth have barely attained the pre-1979 trend line, while in our current economic travails we find ourselves about 9% below trend. Not only does there seem no imminent prospect of the rapid growth we’d need to return to that trend line, but there now seems to be a likelihood of another recession.
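For a sense of scale, here is a back-of-the-envelope sketch of what those growth figures imply. Only the 2.26% per year trend rate and the “about 9% below trend” figure come from the data above; everything else is simple compounding, with no real data attached:

```python
import math

# Rough arithmetic behind the growth figures quoted above. Only the 2.26% per year
# trend rate and the ~9% shortfall come from the text; the rest is pure compounding.
rate = 0.0226                                   # pre-1979 trend growth of real GDP per person

# Steady 2.26% growth over the 33 years from 1946 to 1979 compounds to roughly a doubling.
factor_1946_1979 = (1 + rate) ** (1979 - 1946)
print(f"1946-1979 compound growth factor: {factor_1946_1979:.2f}x")

# Being ~9% below an extrapolated trend line is equivalent to having lost
# about this many years of trend growth.
years_lost = math.log(1 / (1 - 0.09)) / math.log(1 + rate)
print(f"A 9% shortfall corresponds to roughly {years_lost:.1f} years of trend growth")
```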

The plot for public R&D spending tells its own story, which also shows a turning point with the Thatcher government. From 1980 until 1998, we see a substantial long-term decline in research spending, not just as a fraction of GDP, but in absolute terms; since 1998 research spending has increased again in real terms, though not substantially faster than the rise in GDP over the same period. Underlying the decline were a number of factors. There was a real squeeze on spending in research in Universities, well remembered by those who were working in them at the time. Meanwhile the research spending in those industries that were being privatised – such as telecommunications and energy – was removed from the government spending figures. And the activities of government research laboratories – particularly those associated with defence and the nuclear industry – were significantly wound down. Underlying this winding down of research were both political and ideological motives. Big government spending on high technology was associated with the corporate politics of the 1960s, subscribed to by both parties but particularly associated with Labour, and the memorable slogan “The White Heat of Technology”. To its detractors this summoned up associations with projects like the supersonic passenger aircraft Concorde, a technological triumph but a commercial disaster. To the adherents of the Hayekian free market ideology that underpinned the Thatcher government, the state had no business doing any research but the most basic and far from market. On this view, state-supported research was likely to be not only less efficient and less effectively directed than research in the private sector, but by “squeezing out” such private sector research it would actually make the economy less efficient.

The idea that state support of research “squeezes out” research spending by the private sector remains attractive to free market ideologues, but the empirical evidence points to the opposite conclusion – state spending and private sector spending on research support each other, with increases in state R&D spending leading to increases in R&D by business (see for example Falk M (2006). What drives business research and development intensity across OECD countries? (PDF), Applied Economics 38 p 533). Certainly, in the UK, the near-halving of government R&D spend between 1980 and 1999 did not lead to an increase in R&D by business; instead, this also fell from about 1.4% of GDP to 1.2%. Not only did those companies that had been privatised substantially reduce their R&D spending, but other major players in industrial R&D – such as the chemical company ICI and the electronics company GEC – substantially cut back their activities. At the time many rationalised this as the inevitable result of the UK economy changing its mix of sectors, away from manufacturing towards service sectors such as the financial services industry.

None of this answers the questions: how much should one spend on R&D, and what difference do changes in R&D spend make to economic performance? It is certainly clear that the decline in R&D spending in the UK isn’t correlated with any improvement in its economic performance. International comparisons show that the proportion of GDP spent on R&D in the UK is significantly lower than in most of its major competitors, and within this the proportion of R&D supported by business is itself unusually low. On the other hand, the performance of the UK science base, as measured by academic measures rather than economic ones, is strikingly good. Updating a much-quoted formula: the UK accounts for 3% of total world R&D spend and has 4.3% of the world’s researchers, who produce 6.4% of the world’s scientific articles, which attract 10.9% of the world’s citations and include 13.8% of the world’s top 1% of highly cited papers (these figures come from the analysis in the recent report The International Comparative Performance of the UK Research Base).

This formula is usually quoted to argue for the productivity and effectiveness of the UK research base, and it clearly tells a powerful story about its strength as measured in purely academic terms. But does this mean we get the best out of our research in economic terms? The partial recovery in government R&D spending that we saw from 1998 until last year brought real-terms increases in science budgets (though without significantly increasing the fraction of GDP spent on science). These increases were focused on basic research, whose share of total government science spending doubled between 1986 and 2005. This has allowed us to preserve the strength of our academic research base, but the decline in more applied R&D in both government and industrial laboratories has weakened our capacity to convert this strength into economic growth.

Our national economic experiment in deregulated capitalism ended in failure, as the 2008 banking collapse and subsequent economic slump have made clear. I don’t know how much the systematic running down of our national research and development capability in the 1980s and 1990s contributed to this failure, but I suspect that it’s a significant part of the bigger picture of misallocation of resources associated with the booms and the busts, and the associated disappointingly slow growth in economic productivity.

What should we do now? Everyone talks about the need to “rebalance the economy”, and the government has just released an “Innovation and Research Strategy for Growth”, which claims that “The Government is putting innovation and research at the heart of its growth agenda”. The contents of this strategy – in truth largely a compendium of small-scale interventions that have already been announced, which together still don’t fully reverse last year’s cuts in research capital spending – are of a scale that doesn’t begin to meet this challenge. What we should have seen is not just a commitment to maintain the strength of the fundamental science base, important though that is, but a real will to reverse the national decline in applied research.

Why has the UK given up on nanotechnology?

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no actual ongoing nanotechnology program. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011–2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience through Engineering to Application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano-Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which was subsequently intended to rise to £200 million. But last July, the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – it seems that the MNT program has been regarded as a cautionary tale of how not to do things, rather than an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few for whom the idea of nanotechnology was their primary loyalty. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped down a gap.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology isn’t an important category by which science is classified, this doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But there will be opportunities missed to promote interdisciplinary science, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But I worry that it very much does matter that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem. The UK has, for the last thirty years, not only not had an industrial policy to speak of, it has had a policy of not having an industrial policy. But the last three years have revealed the shortcomings of this, as we realise that we aren’t going to be able to rely any more on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.

What would a truly synthetic biology look like?

This is the pre-edited version of an article first published in Physics World in July 2010. The published version can be found here (subscription required). Some of the ideas here were developed in a little more technical detail in an article published in the journal Faraday Discussions, Challenges in Soft Nanotechnology (subscription required). This can be found in a preprint version here. See also my earlier piece Will nanotechnology lead to a truly synthetic biology?.

On the corner of Richard Feynman’s blackboard, at his death, was the sentence “What I cannot create, I do not understand”. This slogan has been taken as the inspiration for the emerging field of synthetic biology. Biologists are now unravelling the intricate and complex mechanisms that underlie life, even in its simplest forms. But, can we be said truly to understand biology, until it proves possible to create a synthetic life-form?

Craig Venter’s well-publicised program to replace the DNA in a simple microorganism with a new, synthetic genome has been widely reported as the moment when humans have created a new, synthetic living organism. This achievement was certainly a technical tour-de-force, but many would argue that just replacing the genome of an existing organism isn’t the same as creating a complete organism from the bottom up. Making a truly synthetic biology, in which all the components and mechanisms are designed and made without the use of existing biological materials or parts, is a much more distant and challenging prospect. But it is this, hugely more ambitious, act of creation that would fulfil Feynman’s criterion for truly understanding even the simplest forms of life.

What we have learnt from biology is how similar all life is – when we study biology, we are studying the many diverse branches from a single trunk, huge and baroque variety on one hand, but all variants on a single basic theme based on DNA, RNA and proteins. We’d like to find some general rules, not just about the one particular biology we know about, but about all possible biologies. It is this more general understanding that will help us with one of science’s deepest questions – was the origin of life on earth a random and improbable event, or should we expect to find life all over the universe, perhaps on many of the exo-planets we’re now discovering? Exo-biology has a practical difficulty, though – even if we can detect the signatures of alien life-forms, distance will make it difficult to study them in detail. So what better way of understanding alien life than trying to build it ourselves?

But we can’t start building life without having an understanding of what life is. The history of attempts to provide a succinct, water-tight definition of life is very long and rather inconclusive. There are some recurring themes, though. Many definitions focus on life’s ability to self-replicate and evolve and the ability of living organisms to maintain themselves by transforming external matter and free energy into their own components. The principle of living things as being autonomous agents – able to sense their environment and choose between actions on the basis of this information – is appealing. But while people may agree on the ingredients of a definition, putting these together to make one which is neither too exclusive nor too inclusive is difficult. (I very much like the discussion of this issue in Pier Luigi Luisi’s excellent book The emergence of life).

An experimental approach to the problem might change the question – instead of asking “what life is” we could ask “what life does”. Rather than asking for a waterproof definition of life itself, we can make progress by asking what sort of things living things do, and then consider how we might execute these functions experimentally. Here we’re thinking explicitly of biology as a series of engineering problems. Given the scale of the basic unit of biology – the cell – what we’re considering is essentially a form of nanotechnology.

But not all nanotechnologies are the same; we’re asking how to make functional machines and devices in an environment dominated by the presence of water, the effects of Brownian motion, and some subtle but important interactions between surfaces. This nanoscale physics – very different to the rules that govern macroscopic engineering – gives rise to some new design principles, much exploited in biological systems. These principles include the idea of self-assembly – molecules that put themselves together under the influence of Brownian motion and surface forces, constructing complex structures whose design is entirely encoded within the molecules themselves. This is one example of the mutability that is so characteristic of soft and biological matter – a shifting balance between weak interactions means that subtle changes in external conditions cause changes in the organisation and shape of molecules and assemblies of molecules.

It’s quite difficult to imagine a living organism that doesn’t have some kind of closed compartment to separate the organism from its environment. Cells have membranes and walls of greater or lesser complexity, but at their simplest these are bags made from a double layer of phospholipid molecules, arranged so their hydrophobic tails are sandwiched between two layers of hydrophilic head groups. The synthetic analogues of these membranes are called liposomes; they are easily made and commonly used in cosmetics and drug delivery systems. Polymer chemists make analogues of phospholipids – amphiphilic block copolymers – which form bags called polymersomes which, in some respects, offer much more flexibility of design, often being more robust and allowing precise control of wall thickness. From such synthetic bags, it is a short step to encapsulating systems of chemicals and biochemicals to mimic some kind of metabolism, and in some cases even some level of self-replication. What is more difficult is to be able to control the traffic in and out of the compartment; ideally this would require pores which only allowed certain types of molecules in and out, or that could be opened and closed in response to certain triggers.

It is this sensitivity to the environment that proves more complex to mimic synthetically. It’s still not generally appreciated how much information processing power is possessed even by the most apparently simple single-celled organisms. This is because biological computing is carried out, not by electrons within transistors, but by molecules acting on other molecules. (Dennis Bray’s book Wetware is well worth reading on this subject). The key elements of this chemical logic are enzymes that perform logical operations, reacting to the presence or absence of input molecules by synthesising, or not synthesising, output molecules.

Efforts to make synthetic analogues of this molecular logic are only at the earliest stages. What is needed is a molecule that changes shape in the presence of an input molecule, and for this shape change to turn on or off some catalytic activity. In biology, it is proteins that carry out this function; the only synthetic analogues made so far are built from DNA (see my earlier essay Molecular Computing for more details and references).

Given molecular logic elements whose outputs are other molecules, one can start to build networks linking many logic gates. In biology these networks integrate information about the cell’s environment and make decisions about different courses of action the cell can take – to swim towards food, or away from danger, for example.
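To make the idea of such a network a little more concrete, here is a deliberately crude sketch in which each “gate” is a rule that releases an output molecule only for the right combination of inputs, and the output of one gate is itself the input to the next. None of the species names or gates below correspond to a real signalling pathway; they are invented purely for illustration:

```python
# A deliberately crude sketch of molecular logic gates wired into a network.
# Species names and gates are invented for illustration; they do not correspond
# to any real biochemical signalling pathway.

state = {"food_signal": True, "toxin_signal": False}   # which molecules are present

def gate_and(a, b):
    return a and b

def gate_not(a):
    return not a

def update(state):
    """One round of the network: each gate's output is itself a molecular species
    that downstream gates (or the motor) can read."""
    new = dict(state)
    # Gate 1: release a 'go' molecule only if food is sensed and no toxin is.
    new["go_signal"] = gate_and(state["food_signal"], gate_not(state["toxin_signal"]))
    # Gate 2: the motor-control molecule here simply follows the go signal.
    new["run_motor"] = new["go_signal"]
    return new

state = update(state)
print("swim towards food" if state["run_motor"] else "tumble and change direction")
```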

In order for a bacterium-sized object to be able to move – to swim through a fluid or crawl along a surface – it needs to solve some very interesting physics problems. For such a small object, it’s the viscosity of the fluid that dominates resistance to motion, in contrast to the situation at human scales, where it’s the inertia of the fluid that needs to be overcome. In these situations of very low Reynolds number new swimming strategies need to be found. Bacteria often use the beating motion of tiny threads – flagella or cilia – to push themselves forward. At Sheffield we’ve been exploring another way of making microscopic swimmers – catalysing a chemical reaction on one half of the particle, producing an asymmetric cloud of reaction products that pushes the particle forward by osmotic pressure (more details here). But even though we can make artificial swimmers, we still don’t know how to control and steer them.
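To put a rough number on this (the figures below are order-of-magnitude estimates for a typical swimming bacterium – a micron-sized body moving at a few tens of microns per second in water – not measurements of any particular organism), the Reynolds number comes out many orders of magnitude below one, which is why inertia is essentially irrelevant at this scale:

```latex
% Reynolds number with rough order-of-magnitude values for a swimming bacterium
% (~1 micron body, ~30 micron/s speed, the density and viscosity of water).
\mathrm{Re} = \frac{\rho v L}{\eta}
\approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}})\,(3\times 10^{-5}\,\mathrm{m\,s^{-1}})\,(10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}}
\approx 3\times 10^{-5}
```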

By now it should be obvious that the task of creating a truly synthetic biology remains a very distant goal. The more that biologists discover – particularly now they can use the tools of single molecule biophysics to unravel the mechanisms of the sophisticated molecular machines within even the simplest types of organism – the cruder our efforts to mimic some of the features of cell biology seem. We do have a reasonable understanding of some important principles of nano-scale design – how to design macromolecules to make self-assembled structures resembling cell membranes, for example. But other areas are still wide open, from the fundamental theoretical issues around how to understand small systems driven far from equilibrium, through the intricacies of mechanisms to achieve accurate self-replication, to the challenge of designing chemical computers. On a practical level, to cope with this level of complexity we’re probably going to have to do what Nature does, and use evolutionary design methods. But if the goal is distant, we’ll learn a great deal from trying. Even to speculate about what a truly synthetic life-form might look like is itself helpful in sharpening our notions of what we might consider to be alive. It is this kind of experimental approach that will help us to find out the physical principles that underlie biology – not just the biology we know about, but all possible biologies.

Accelerating change or innovation stagnation?

It’s conventional wisdom that the pace of innovation has never been faster. The signs of this seem to be all around us, as we rush to upgrade our smartphones and adopt yet another social media innovation. And yet, there’s another view emerging too, that all the easy gains of technological innovation have happened already and that we’re entering a period, if not of technological stasis, then of maturity and slow growth. This argument has been made most recently by the economist Tyler Cowen, for example in this recent NY Times article, but it’s prefigured in the work of technology historians David Edgerton and Vaclav Smil. Smil, in particular, points to the period 1870 – 1920 as the time of a great technological saltation, in which inventions such as electricity, telephones, internal combustion engines and the Haber-Bosch process transformed the world. Compared to this, he is rather scornful of the relative impact of our current wave of IT-based innovation. Tyler Cowen puts essentially the same argument in an engagingly personal way, asking whether the changes seen in his grandmother’s lifetime were greater than those he has seen in his own.

Put in this personal way, I can see the resonance of this argument. My grandmother was born in the first decade of the 20th century in rural North Wales. The world she was born into has quite disappeared – literally, in the case of the hill-farms she used to walk out to as a child, to do a day’s chores in return for as much buttermilk as she could drink. Many of these are now marked only by heaps of stones and nettle patches. In her childhood, medical care consisted of an itinerant doctor coming one week to the neighbouring village and setting up an impromptu surgery in someone’s front room; she vividly recalled all her village’s children being crammed into the back of a pony trap and taken to that room, where they all had their tonsils taken out, while they had the chance. It was a world without cars or lorries, without telephones, without electricity, without television, without antibiotics, without air travel. My grandmother never in her life flew anywhere, but by the time she died in 1994, she’d come to enjoy and depend on all the other things. Compare this with my own life. In my childhood in the 1960s we did without mobile phones, video games and the internet, and I watched a bit less television than my children do, but there’s nowhere near the discontinuity, the great saltation that my grandmother saw.

How can we square this perspective against the prevailing view that technological innovation is happening at an ever increasing pace? At its limit, this gives us the position of Ray Kurzweil, who identifies exponential or faster growth rates in technology and extrapolates these to predict a technological singularity.

The key mistake here is to think that “Technology” is a single thing, that by itself can have a rate of change, whether that’s fast or slow. There are many technologies, and at any given time some will be advancing fast, some will be in a state of stasis, and some may even be regressing. It’s very common for technologies to have a period of rapid development, with a roughly constant fractional rate of improvement, until physical or economic constraints cause progress to level off. Moore’s “law”, in the semiconductor industry, is a very famous example of a long period of constant fractional growth, but the increase in efficiency of steam engines in the 19th century followed a similar exponential path, until a point of diminishing returns was inevitably reached.
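A minimal numerical sketch of this pattern (the parameters are arbitrary, chosen only to make the shape visible): a technology improving at a constant fractional rate looks exponential until it nears some physical or economic ceiling, after which the same growth process bends over into an S-curve:

```python
# Illustrative only: constant fractional improvement versus the same growth
# capped by a ceiling (logistic / S-curve). Parameters are arbitrary.
CEILING = 1000.0   # some physical or economic limit on the performance metric
RATE = 0.4         # 40% improvement per period while far from the limit

exp_value, capped_value = 1.0, 1.0
for period in range(25):
    exp_value *= (1 + RATE)
    # Logistic growth: the fractional improvement shrinks as the ceiling nears.
    capped_value += RATE * capped_value * (1 - capped_value / CEILING)
    if period % 6 == 5:
        print(f"period {period + 1:2d}: unconstrained {exp_value:10.1f}   "
              f"constrained {capped_value:7.1f}")
```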

To make sense of the current situation, it’s perhaps helpful to think of three separate realms of innovation. We have the realm of information, the material realm, and the realm of biology. In these three different realms, technological innovation is subject to quite different constraints, and has quite different requirements.

It is in the realm of information that innovation is currently taking place very fast. This innovation is, of course, being driven by a single technology from the material realm – the microprocessor. The characteristic of innovation in the information world is that the infrastructure required to enable it is very small – a few bright people in a loft or garage with a great idea genuinely can build a world-changing business in a few years. But the apparent weightlessness of this kind of innovation is of course underpinned by the massive capital expenditures and the focused, long-term research and development of the global semiconductor industry.

In the material world, things take longer and cost more. The scale-up of promising ideas from the laboratory needs attention to detail and the continuous, sequential solution of many engineering problems. This is expensive and time-consuming, and demands a degree of institutional scale in the organisations that do it. A few people in a loft might be able to develop a new social media site, but to build a nuclear power station or a solar cell factory needs something a bit bigger. The material world is also subject to some hard constraints, particularly in terms of energy. And mistakes in a chemical plant, a nuclear reactor or a passenger aircraft have consequences of a seriousness rarely seen in the information realm.

Technological innovation in the biological realm, as demanded by biomedicine and biotechnology, presents a new set of problems. The sheer complexity of biology makes a full mechanistic understanding hard to achieve; there’s more trial and error and less rational design than one would like. And living things and living systems are different and fundamentally more difficult to engineer than the non-living world; they have agency of their own and their own priorities. So they can fight back, whether that’s pathogens evolving responses to new antibiotics or organisms reacting to genetic engineering in ways that thwart the designs of their engineers. Technological innovation in the biological realm carries high costs and very substantial risks of failure, and it’s not obvious that we have the right institutions to handle this. One manifestation of these issues is the slowness of new technologies like stem cells and tissue engineering to deliver, and we’re now seeing the economic and business consequences in an unfolding crisis of innovation in the pharmaceutical sector.

Can one transfer the advantages of innovation in the information realm to the material realm and the biological realm? Interestingly, that’s exactly the rhetorical claim made by the new disciplines of nanotechnology and synthetic biology. The claim of nanotechnology is that by achieving atom-by-atom control, we can essentially reduce the material world to the digital. Likewise, the power of synthetic biology is claimed to be that it can reduce biotechnology to software engineering. These are powerful and seductive claims, but wishing it to be so doesn’t make it happen, and the rhetoric has yet to be fully matched by achievement. Instead, we’ve seen some disappointment – some nanotechnology companies have disappointed investors, who hadn’t realised that, in order to turn clever nanoscale designs into real products, the constraints of the material realm still apply. A nanoparticle may be designed digitally, but it’s still a speciality chemical company that has to make it.

Our problem is that we need innovation in all three realms; we can’t escape the fact that we live in the material world and depend on our access to energy, for example, and fast progress in one realm can’t fully compensate for slower progress in the others. We still need technological innovation in the material and biological realms – we must develop better technologies in areas like energy, because the technologies we have are not sustainable and not good enough. So even if accelerating change does prove to be a mirage, we still can’t afford innovation stagnation.

The next twenty-five years

The Observer ran a feature today collecting predictions for the next twenty five years from commentators about politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, to have computers so powerful that they will surpass human intelligence, and to lead to a new kind of medicine on a sub-cellular level that will allow us to abolish aging and death.

I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

If the technology we’ve got isn’t sustainable, doesn’t that mean we need better technology?

Friends of the Earth have published a new report called “Nanotechnology, climate and energy: over-heated promises and hot air?” (here but the website was down when I last looked). As its title suggests, it expresses scepticism about the idea that nanotechnology can make a significant contribution to making our economy more sustainable. It does make some fair points about the distance between rhetoric and reality when it comes to claims that nano-manufacturing can be intrinsically cleaner and more precise than conventional processing (the reality being, of course, that the manufacturing processes used to make nanomaterials are not currently very much different to processes to make existing materials). It also expresses scepticism about ideas such as the hydrogen economy, which I to some extent share. But I think its position betrays one fundamental and very serious error. That is the comforting, but quite wrong, belief that there is any possibility of moving our current economy to a sustainable basis with existing technology in the short term (i.e. in the next ten years).

Take, for example, solar energy. I’m extremely positive about its long term prospects. At the moment, the world uses energy at a rate of about 16 Terawatts (a TW is one thousand Gigawatts; one GW is about the scale of a medium-sized power station). The total power arriving at the earth from the sun is 162,000 TW – so there is, in principle, an abundance of solar energy. But the total world amount of installed solar capacity is just over 2 GW (the nominal world installed capacity was, in 2008, 13.8 GW, which represents a real output of around 2 GW, having accounted for the lack of 24 hour sunshine and system losses. These numbers come from NREL’s 2008 Solar Technologies Market Report). This is four orders of magnitude less than the energy we need. It’s true that the solar energy industry is growing very fast – at annual rates of 40-50% at the moment. But even if this rate of increase went on for another 10 years, we would only have achieved a solar contribution of around 200 GW by 2020. Meanwhile, on even the most optimistic assumption, the IEA predicts that our total energy needs would have increased by 1400 GW in this period, so this isn’t enough even to halt the increase in our rate of burning fossil fuels, let alone reverse it. And, without falls in cost from the current values of around $5 per installed Watt, by 2020 we’d need to be spending about $2.5 trillion a year to achieve this rate of growth, at which point solar would still only be supplying around 1% of world energy demand.
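A couple of quick sanity checks on those figures, using only the rough numbers quoted above (so the outputs are equally rough):

```python
import math

# Quick sanity checks on the rough figures quoted above; no data beyond those
# approximate numbers is used, so treat the outputs as equally approximate.
WORLD_DEMAND_GW = 16_000      # ~16 TW of world energy use
solar_now_gw = 2.0            # ~2 GW of real (capacity-factor-corrected) solar output
solar_2020_gw = 200.0         # the ~200 GW figure suggested for 2020

# How far short of demand is today's solar output, in orders of magnitude?
orders = math.log10(WORLD_DEMAND_GW / solar_now_gw)
print(f"Shortfall today: about {orders:.1f} orders of magnitude")

# What share of today's ~16 TW demand would ~200 GW represent?
print(f"200 GW as a share of ~16 TW: {solar_2020_gw / WORLD_DEMAND_GW:.1%}")
```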

What this tells us is that though our existing technology for harvesting solar energy may be good in many ways – it’s efficient and long-lasting – it’s too expensive and in need of a step-change in the areas in which it can be produced. That’s why new solar cell technology is needed – and why those candidates which use nanotechnologies to enable large scale, roll to roll processing are potentially attractive. We know that currently these technologies aren’t ready for the mass market – their efficiencies and lifetimes aren’t good enough yet. And incremental developments of conventional silicon solar cells may yet surprise us and bring their costs down dramatically, and that would be a very good outcome too. But this is why research is needed. For perspective, look at this helpful graphic to see how the efficiencies of all solar cells have evolved with time. Naturally, the most recently invented technologies – such as the polymer solar cells – have progressed less far than the more mature technologies that are at market.

A similar story could be told about batteries. It’s clear that the use of renewables on a large scale will need large scale energy storage methods to overcome problems of intermittency, and the electrification of transport will need batteries with high specific energy (for a recent review of the requirements for plug-in hybrids see here). Currently available lithium ion batteries have a specific energy of about half a megajoule per kilogram, a fraction of the energy density of petrol (44 MJ/kg). They’re also too expensive and their lifetime is too short – they deteriorate at a rate of about 2% a year. Once again, current technology is simply not good enough, and it’s not getting better fast enough; new technology is needed, and this will almost certainly require better control of nanostructure.

Could we, alternatively, get by using less energy? Improving energy efficiency is certainly worth doing, and new technology can help here too. But substantial reductions in energy use will be associated with drops in living standards which, in rich countries, are going to be a hard sell politically. The politics of persuading poorer countries that they should forgo economic growth will be even trickier, given that, unlike the rich countries, they haven’t accumulated the benefit of centuries of economic growth fueled by cheap fossil-fuel based energy, and they don’t feel responsible for the resulting accumulation of atmospheric carbon dioxide. Above all, we mustn’t underestimate the degree to which, not just our comfort, but our very existence depends on cheap energy – notably in the high energy inputs needed to feed the world’s population. This is the hard fact that we have to face – we are existentially dependent on the fossil-fuel based technology we have now, but we know this technology isn’t sustainable and we don’t yet have viable replacements. In these circumstances we simply don’t have a choice but to try and find better, more sustainable energy technologies.

Yes, of course we have to assess the risks of these new technologies, of course we need to do the life-cycle analyses. And while Friends of the Earth may say they’re shocked (shocked!) that nanotechnology is being used by the oil industry, this seems to me to be either a rather disingenuous piece of rhetoric, or an expression of supreme naivety about the nature of capitalism. Naturally, the oil industry will be looking at new technology such as nanotechnology to help their business; they’ve got lots of money and some pressing needs. And for all I know, there may be jungle labs in Colombia looking for applications of nanotechnology in the recreational pharmaceuticals sector right now. I can agree with FoE that it was unconvincing to suggest that there was something inherently environmentally benign about nanotechnology, but it’s equally foolish to imply that, because the technology can be used in industries that you disapprove of, that makes it intrinsically bad. What’s needed instead is a realistic and hard-headed assessment of the shortcomings of current technologies, and an attempt to steer potentially helpful emerging new technologies in beneficial directions.

Feynman, Waldo and the Wickedest Man in the World

It’s been more than fifty years since Richard Feynman delivered his lecture “Plenty of Room at the Bottom”, regarded by many as the founding vision statement of nanotechnology. That foundational status has been questioned, most notably by Chris Toumey in his article Apostolic Succession (PDF). In another line of attack, Colin Milburn, in his book Nanovision, argues against the idea that the ideas of nanotechnology emerged from Feynman’s lecture as the original products of his genius; instead, according to Milburn, Feynman articulated and developed a set of ideas that were already current in science fiction. And, as I briefly mentioned in my report from September’s SNET meeting, according to Milburn, the intellectual milieu from which these ideas emerged had some very weird aspects.

Milburn describes some of the science fiction antecedents of the ideas in “Plenty of Room” in his book. Perhaps the most direct link can be traced for Feynman’s notion of remote control robot hands, which make smaller sets of hands, which can be used to make yet smaller ones, and so on. The immediate source of this idea is Robert Heinlein’s 1942 novella “Waldo”, in which the eponymous hero devises just such an arrangement to carry out surgery on the sub-cellular level. There’s no evidence that Feynman had read “Waldo” himself, but Feynman’s friend Al Hibbs certainly had. Hibbs worked at Caltech’s Jet Propulsion Laboratory, and he had been so taken by Heinlein’s idea of robot hands as a tool for space exploration that he wrote up a patent application for it (dated 8 February 1958). Ed Regis, in his book “Nano”, tells the story, and makes the connection to Feynman, quoting Hibbs as follows: “It was in this period, December 1958 to January 1959, that I talked it over with Feynman. Our conversations went beyond my “remote manipulator” into the notion of making things smaller … I suggested a miniature surgeon robot…. He was delighted with the notion.”

“Waldo” is set in a near future, where nuclear derived energy is abundant, and people and goods fly around in vessels powered by energy beams. The protagonist, Waldo Jones, is a severely disabled mechanical genius (“Fat, ugly and hopelessly crippled” as it says on the back of my 1970 paperback edition) who lives permanently in an orbiting satellite, sustained by the technologies he’s developed to overcome his bodily weaknesses. The most effective of these technologies are the remote controlled robot arms, named “waldos” after their inventor. The plot revolves around a mysterious breakdown of the energy transmission system, which Waldo Jones solves, assisted by the sub-cellular surgery he carries out with his miniaturised waldos.

The novella is dressed up in the apparatus of hard science fiction – long didactic digressions, complete with plausible-sounding technical details and references to the most up-to-date science, creating the impression that its predictions of future technologies are based on science. But, to my surprise, the plot revolves around, not science, but magic. The fault in the flying machines is diagnosed by a back-country witch-doctor, and involves a failure of will by the operators (itself a consequence of the amount of energy being beamed about the world). And the fault can itself be fixed by an act of will, by which energy in a parallel, shadow universe can be directed into our own world. Waldo Jones himself learns how to access the energy of this unseen world, and in this way overcomes his disabilities and fulfils his full potential as a brain surgeon, dancer and all round, truly human genius.

Heinlein’s background as a radio engineer explains where his science came from, but what was the source of this magical thinking? The answer seems to be the strange figure of Jack Parsons. Parsons was a self-taught rocket scientist, one of the founders of the Jet Propulsion Laboratory and a key figure in the early days of the USA’s rocket program (his story is told in George Pendle’s biography “Strange Angel”). But he was also deeply interested in magic, and was a devotee of the English occultist Aleister Crowley. Crowley, aka The Great Beast, was notorious for his transgressive interest in ritual magic – particularly sexual magic – and attracted the title “the wickedest man in the world” from the English newspapers between the wars. He had founded a religion of his own, whose organisation, the Ordo Templi Orientis, promulgated his creed, summarised as “Do what thou wilt shall be the whole of the Law”. Parsons was initiated into the Hollywood branch of the OTO in 1941; in 1942 Parsons, now a leading figure in the OTO, moved the whole commune into a large house in Pasadena, where they lived according to Crowley’s transgressive law. Also in 1942, Parsons met Robert Heinlein at the Los Angeles Science Fiction Society, and the two men became good friends. Waldo was published that year.

The subsequent history of Jack Parsons was colourful, but deeply unhappy. He became close to another member of the circle of LA science fiction writers, L. Ron Hubbard, who moved into the Pasadena house in 1945 with catastrophic effects for Parsons. In 1952, Parsons died in a mysterious explosives accident in his basement. Hubbard, of course, went on to found a religion of his own, Scientology.

This is a fascinating story, but I’m not sure what it signifies, if anything. Colin Milburn wonders whether “it is tempting to see nanotech’s aura of the magical, the impossible made real, as carried through the Parsons-Heinlein-Hibbs-Feynman genealogy”. Sober scientists working in nanotechnology would argue that their work is as far away from magical thinking as one can get. But amongst those groups on the fringes of the science that cheer nanotechnology on – the singularitarians and transhumanists – I’m not sure that magic is so distant. Universal abundance through nanotechnology, universal wisdom through artificial intelligence, and immortal life through the defeat of ageing – these sound very much like the traditional aims of magic – these are parallels that Dale Carrico has repeatedly drawn attention to. And in place of Crowley’s Ordo Templi Orientis (and no doubt without some of the OTO’s more colourful practices), transhumanists have their very own Order of Cosmic Engineers, to “engineer ‘magic’ into a universe presently devoid of God(s).”

Computing with molecules

This is a pre-edited version of an essay that was first published in April 2009 issue of Nature Nanotechnology – Nature Nanotechnology 4, 207 (2009) (subscription required for full online text).

The association of nanotechnology with electronics and computers is a long and deep one, so it’s not surprising that a central part of the vision of nanotechnology has been the idea of computers whose basic elements are individual molecules. The individual transistors of conventional integrated circuits are at the nanoscale already, of course, but they’re made top-down by carving them out from layer-cakes of semiconductors, metals and insulators – what if one could make the transistors by joining together individual molecules? This idea – of molecular electronics – is an old one, which actually predates the widespread use of the term nanotechnology. As described in an excellent history of the field by Hyungsub Choi and Cyrus Mody (The Long History of Molecular Electronics, PDF) its origin can be securely dated at least as early as 1973; since then it has had a colourful history of big promises, together with waves of enthusiasm and disillusionment.

Molecular electronics, though, is not the only way of using molecules to compute, as biology shows us. In an influential 1995 review, Protein molecules as computational elements in living cells (PDF), Dennis Bray pointed out that the fundamental purpose of many proteins in cells seems to be more to process information than to effect chemical transformations or make materials. Mechanisms such as allostery permit individual protein molecules to behave as logic gates: one or more regulatory molecules bind to the protein, and thereby turn on or off its ability to catalyse a reaction. If the product of that reaction itself regulates the activity of another protein, one can think of the result as an operation which converts an input signal conveyed by one molecule into an output conveyed by another; by linking many such reactions into a network, one builds a chemical “circuit” which can, in effect, carry out computational tasks of greater or lesser complexity. The classical example of such a network is the one underlying the ability of bacteria to swim towards food or away from toxins. In bacterial chemotaxis, information from sensors for many different chemical species in the environment is integrated to produce the signals that control a bacterium’s motors, resulting in apparently purposeful behaviour.
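To make the picture of proteins as logic gates a little more concrete, here is a minimal toy sketch in Python. It treats each allosteric protein as a boolean gate whose activity depends on which regulatory molecules are present, and chains two “receptor” gates into a crude chemotaxis-style decision. All of the molecule names, and the wiring itself, are invented for illustration and do not correspond to any real signalling pathway.

```python
# Toy model only: each "protein" is a boolean gate whose activity depends on
# which regulatory molecules are present. All names are invented.

def allosteric_gate(required, inhibitors=()):
    """Return a 'protein' that is active only when all required regulators
    are bound and none of the inhibitors are present."""
    def protein(molecules):
        return all(m in molecules for m in required) and not any(
            m in molecules for m in inhibitors
        )
    return protein

# Two 'receptor proteins' sense molecules in the environment.
senses_food = allosteric_gate(required=["attractant"])
senses_toxin = allosteric_gate(required=["repellent"])

# A downstream gate integrates the two signals: drive the motor only if
# food is sensed and no toxin signal is present.
run_motor = allosteric_gate(required=["signal_A"], inhibitors=["signal_B"])

def network(environment):
    """Chain the gates: the 'product' of each upstream reaction becomes a
    regulatory input for the downstream protein."""
    molecules = set(environment)
    if senses_food(molecules):
        molecules.add("signal_A")   # product of the first reaction
    if senses_toxin(molecules):
        molecules.add("signal_B")   # product of the second reaction
    return "run" if run_motor(molecules) else "tumble"

print(network({"attractant"}))                # -> run
print(network({"attractant", "repellent"}))   # -> tumble
```

The point of the sketch is simply that, because each gate’s output is itself a molecule, the gates compose naturally into a network – which is exactly the property exploited in the chemotaxis circuit.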

The broader notion that much cellular activity can be thought of in terms of the processing of information by the complex networks involved in gene regulation and cell signalling has had a far-reaching impact in biology. The unravelling of these networks is the major concern of systems biology, while synthetic biology seeks to re-engineer them to make desired products. The analogies with electronics and systems engineering are made very explicit in much writing about synthetic biology, with its discussion of molecular network diagrams, engineered gene circuits and interchangeable modules.

And yet, this alternative view of molecular computing has yet to make much impact in nanotechnology. Molecular logic gates have been demonstrated in a number of organic compounds, for example by the Belfast-based chemist Prasanna de Silva; here, ingenious molecular design allows several input signals, represented by the presence or absence of different ions or other species, to be logically combined to produce outputs represented by optical fluorescence signals at different wavelengths. In one approach, a molecule consists of a fluorescent group attached by a spacer unit to receptor groups; in the absence of bound species at the receptors, electron transfer from the receptor group to the fluorophore suppresses its fluorescence. Other approaches employ molecular shuttles – rotaxanes – in which physically linked but mobile molecular components move to different positions in response to changes in their chemical environment. These molecular engineering approaches are leading to sensors of increasing sophistication. But because the output takes the form of fluorescence, rather than a molecule, it is not possible to link many such logic gates into a network.
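As a sketch of the logic (though not, of course, of the photophysics), a two-input gate of this kind can be caricatured as an AND function: the fluorophore lights up only when both receptor sites are occupied, because any empty receptor quenches it by electron transfer. The ion names below are just plausible examples of inputs; the code illustrates the truth table, not any specific molecule.

```python
# Caricature of a fluorophore-spacer-receptor AND gate: fluorescence (output)
# appears only when every receptor site has its target species bound; an empty
# receptor quenches the fluorophore by photoinduced electron transfer.
# The choice of H+ and Na+ as inputs is purely illustrative.

def pet_and_gate(bound, receptors=("H+", "Na+")):
    """Return True ('fluorescent') only if all receptor sites are occupied."""
    return all(ion in bound for ion in receptors)

for inputs in [set(), {"H+"}, {"Na+"}, {"H+", "Na+"}]:
    print(sorted(inputs), "->", "fluorescent" if pet_and_gate(inputs) else "dark")
```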

At the moment, it seems that the most likely avenue for developing complex information-processing networks based on synthetic components will use nucleic acids, particularly DNA. As in other branches of the field of DNA nanotechnology, progress here is being driven by the growing ease and cheapness with which specified sequences of DNA can be synthesised, together with the relative tractability of designing and modelling molecular interactions based on base pairing. One demonstration from Erik Winfree’s group at Caltech uses base pairing to design logic gates from DNA molecules. These accept inputs in the form of short RNA strands, and output DNA strands according to the logical operations OR, AND or NOT. The output strands can themselves be used as inputs for further logical operations, and it is this that would make it possible, in principle, to develop complex information-processing networks.
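The crucial feature – the reason these gates, unlike the fluorescence-output ones, can be wired into networks – is that the output of one gate is a strand that can serve as the input of another. The sketch below illustrates only that composability; the strand names are invented, and it makes no attempt to model the underlying strand-displacement chemistry of the Winfree gates.

```python
# Illustration of composability: each gate consumes named input strands and,
# if its logical condition is met, releases a named output strand that can in
# turn feed a downstream gate. Strand names are invented for the sketch.

def and_gate(strands, a, b, output):
    return {output} if {a, b} <= strands else set()

def or_gate(strands, a, b, output):
    return {output} if a in strands or b in strands else set()

def network(rna_inputs):
    """Two-layer network: (rna1 OR rna2) AND rna3 -> final output strand."""
    strands = set(rna_inputs)
    strands |= or_gate(strands, "rna1", "rna2", output="dnaX")
    strands |= and_gate(strands, "rna3", "dnaX", output="dnaY")
    return "dnaY" in strands

print(network({"rna1", "rna3"}))   # True
print(network({"rna2"}))           # False
```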

What should we think about using molecular computing for? The molecular electronics approach has a very definite target: to complement or replace conventional CMOS-based electronics, ensuring the continuation of Moore’s law beyond the point at which physical limitations prevent any further miniaturisation of silicon-based devices. The inclusion of molecular electronics in the latest International Technology Roadmap for Semiconductors indicates the seriousness of this challenge, and molecular electronics and other related approaches, such as graphene-based electronics, will undoubtedly continue to be enthusiastically pursued. But these are probably not appropriate goals for molecular computing with chemical inputs and outputs. Instead, the uses of these technologies are likely to be driven by their most compelling unique selling point – the ability to interface directly with the biochemical processes of the cell. It has been suggested, for example, that such molecular logic could be used to control the actions of a sophisticated drug delivery device. An even more powerful possibility is suggested by another paper (abstract, subscription required for full paper) from Christina Smolke (now at Stanford), in which an RNA construct controls the in-vivo expression of a particular gene in response to this kind of molecular logic. This suggests the creation of what could be called molecular cyborgs – the result of a direct merging of synthetic molecular logic with the cell’s own control systems.

Society for the Study of Nanoscience and Emerging Technologies

Last week I spent a couple of days in Darmstadt, at the second meeting of the Society for the Study of Nanoscience and Emerging Technologies (S.NET). This is a relatively informal group of scholars in the field of Science and Technology Studies, drawn from Europe, the USA and other countries such as Brazil and India, and coming together from disciplines such as philosophy, political science, law, innovation studies and sociology.

Arie Rip (president of the society, and to many the doyen of European science and technology studies) kicked things off with the assertion that nanotechnology is, above all, a socio-political project, and the warning that this object of study was in the process of disappearing (a theme that recurred throughout the conference). Undaunted by this prospect, Arie observed that the society could keep its acronym and simply rename itself the Society for the Study of Newly Emerging Technologies.

The first plenary lecture was from the French philosopher Bernard Stiegler, on Knowledge, Industry and Distrust at the Time of Hyperminiaturisation. I have to say I found this hard going; the presentation was dense with technical terms and delivered by reading a prepared text. But I’m wiser about it now than I was, thanks to a very clear and patient explanation over dinner that evening from Colin Milburn, who filled us in on the necessary background about Derrida’s interpretation of Plato’s pharmakon and Simondon’s notion of disindividuation.

One highlight for me was a talk by Michael Bennett about changes in the intellectual property regime in the USA during the 1980s and 1990s. He made a really convincing case that the growth of nanotechnology went in parallel with a series of legal and administrative changes that amounted to a substantial intensification of the intellectual property regime in the USA. While some people think that developments in law struggle to keep up with science and technology, he argued instead that law bookends the development of technoscience, both shaping the emergence of the science and dominating the way it is applied. This growing influence, though, doesn’t help innovation. Recent trends, such as the tendency of research universities to patent early with very wide claims and to seek exclusive licenses, aren’t helpful; we’re seeing the creation of “patent thickets”, such as the one that surrounds carbon nanotubes, which substantially add to the costs and increase the uncertainty for those trying to commercialise technologies in this area. And there is evidence of an “anti-commons” effect, in which other scientists are inhibited from working on systems for which patents have been issued.

A round-table discussion on the influence of Feynman’s lecture “Plenty of Room at the Bottom” on the emergence of nanotechnology as a field produced some surprises too. I’m already familiar with Chris Toumey’s careful demonstration that Plenty of Room’s status as the foundation of nanotechnology was largely granted retrospectively (see, for example, his article Apostolic Succession, PDF); Cyrus Mody’s account of the influence it had on the then-emerging field of microelectronics adds some shading to this picture. Colin Milburn made some comments that put Feynman’s lecture into the cultural context of its time, particularly in the debt it owed to science fiction stories like Robert Heinlein’s “Waldo”. And, to my great surprise, he reminded us just how weird the milieu of post-war Pasadena was: the very odd figure of Jack Parsons helping to create the Jet Propulsion Laboratory while at the same time conducting a programme of magic inspired by Aleister Crowley and involving a young L. Ron Hubbard. At this point I felt I’d stumbled out of an interesting discussion of a by-way of the history of science into the plot of an unfinished Thomas Pynchon novel.

The philosopher Andrew Light talked about how deep disagreements and culture wars arise, and about the distinction between intrinsic and extrinsic objections to new technologies. This was an interesting analysis, though I didn’t entirely agree with his prescriptions, and a number of other participants showed some unease at the idea that the role of philosophers is to create a positive environment for innovation. My own talk was a bit of a retrospective, with the title “What has nanotechnology taught us about contemporary technoscience?” The organisers will be trying to persuade me to write this up for the proceedings volume, so I’ll say no more about it for the moment.