Accelerating change or innovation stagnation?

It’s conventional wisdom that the pace of innovation has never been faster. The signs of this seem to be all around us, as we rush to upgrade our smartphones and adopt yet another social media innovation. And yet, there’s another view emerging too, that all the easy gains of technological innovation have happened already and that we’re entering a period, if not of technological stasis, then at least of maturity and slow growth. This argument has been made most recently by the economist Tyler Cowen, for example in this recent NY Times article, but it’s prefigured in the work of technology historians David Edgerton and Vaclav Smil. Smil, in particular, points to the period 1870 – 1920 as the time of a great technological saltation, in which inventions such as electricity, telephones, internal combustion engines and the Haber-Bosch process transformed the world. Compared to this, he is rather scornful of the relative impact of our current wave of IT-based innovation. Tyler Cowen puts essentially the same argument in an engagingly personal way, asking whether the changes seen in his grandmother’s lifetime were greater than those he has seen in his own.

Put in this personal way, I can see the resonance of this argument. My grandmother was born in the first decade of the 20th century in rural North Wales. The world she was born into has quite disappeared – literally, in the case of the hill-farms she used to walk out to as a child, to do a day’s chores in return for as much buttermilk as she could drink. Many of these are now marked only by heaps of stones and nettle patches. In her childhood, medical care consisted of an itinerant doctor coming one week to the neighbouring village and setting up an impromptu surgery in someone’s front room; she vividly recalled all her village’s children being crammed into the back of a pony trap and taken to that room, where they all had their tonsils taken out, while they had the chance. It was a world without cars or lorries, without telephones, without electricity, without television, without antibiotics, without air travel. My grandmother never in her life flew anywhere, but by the time she died in 1994, she’d come to enjoy and depend on all the other things. Compare this with my own life. In my childhood in the 1960s we did without mobile phones, video games and the internet, and I watched a bit less television than my children do, but there’s nowhere near the discontinuity, the great saltation that my grandmother saw.

How can we square this perspective against the prevailing view that technological innovation is happening at an ever increasing pace? At its limit, this gives us the position of Ray Kurzweil, who identifies exponential or faster growth rates in technology and extrapolates these to predict a technological singularity.

The key mistake here is to think that “Technology” is a single thing, that by itself can have a rate of change, whether that’s fast or slow. There are many technologies, and at any given time some will be advancing fast, some will be in a state of stasis, and some may even be regressing. It’s very common for technologies to have a period of rapid development, with a roughly constant fractional rate of improvement, until physical or economic constraints cause progress to level off. Moore’s “law”, in the semiconductor industry, is a very famous example of a long period of constant fractional growth, but the increase in efficiency of steam engines in the 19th century followed a similar exponential path, until a point of diminishing returns was inevitably reached.
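To make that pattern concrete, here is a minimal numerical sketch (the parameters are arbitrary illustrations, not data for any real technology): both curves share the same constant fractional improvement at first, but the logistic one levels off as it approaches its ceiling.

```python
import math

def exponential(t, x0=1.0, r=0.4):
    """Constant fractional improvement: x grows by a fixed percentage each period."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.4, K=100.0):
    """Same early growth rate, but levelling off as a ceiling K is approached."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```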

To make sense of the current situation, it’s perhaps helpful to think of three separate realms of innovation. We have the realm of information, the material realm, and the realm of biology. In these three different realms, technological innovation is subject to quite different constraints, and has quite different requirements.

It is in the realm of information that innovation is currently taking place very fast. This innovation is, of course, being driven by a single technology from the material realm – the microprocessor. The characteristic of innovation in the information world is that the infrastructure required to enable it is very small: a few bright people in a loft or garage with a great idea genuinely can build a world-changing business in a few years. But the apparent weightlessness of this kind of innovation is of course underpinned by the massive capital expenditures and the focused, long-term research and development of the global semiconductor industry.

In the material world, things take longer and cost more. The scale-up of promising ideas from the laboratory needs attention to detail and the continuous, sequential solution of many engineering problems. This is expensive and time-consuming, and demands a degree of institutional scale in the organisations that do it. A few people in a loft might be able to develop a new social media site, but to build a nuclear power station or a solar cell factory needs something a bit bigger. The material world is also subject to some hard constraints, particularly in terms of energy. And mistakes in a chemical plant, a nuclear reactor or a passenger aircraft have consequences of a seriousness rarely seen in the information realm.

Technological innovation in the biological realm, as demanded by biomedicine and biotechnology, presents a new set of problems. The sheer complexity of biology makes a full mechanistic understanding hard to achieve; there’s more trial and error and less rational design than one would like. And living things and living systems are different and fundamentally more difficult to engineer than the non-living world; they have agency of their own and their own priorities. So they can fight back, whether that’s pathogens evolving responses to new antibiotics or organisms reacting to genetic engineering in ways that thwart the designs of their engineers. Technological innovation in the biological realm carries high costs and very substantial risks of failure, and it’s not obvious that we have the right institutions to handle this. One manifestation of these issues is the slowness of new technologies like stem cells and tissue engineering to deliver, and we’re now seeing the economic and business consequences in an unfolding crisis of innovation in the pharmaceutical sector.

Can one transfer the advantages of innovation in the information realm to the material realm and the biological realm? Interestingly, that’s exactly the rhetorical claim made by the new disciplines of nanotechnology and synthetic biology. The claim of nanotechnology is that by achieving atom-by-atom control, we can essentially reduce the material world to the digital. Likewise, the power of synthetic biology is claimed to be that it can reduce biotechnology to software engineering. These are powerful and seductive claims, but wishing it to be so doesn’t make it happen, and so far the rhetoric has yet to be fully matched by achievement. Instead, we’ve seen some disappointment – some nanotechnology companies have disappointed investors, who hadn’t realised that, in order to materialise the clever nanoscale design of the products, the constraints of the material realm still apply. A nanoparticle may be designed digitally, but it’s still a speciality chemical company that has to make it.

Our problem is that we need innovation in all three realms; we can’t escape the fact that we live in the material world, we depend on our access to energy, for example, and fast progress in one realm can’t fully compensate for slower progress in the other areas. We still need technological innovation in the material and biological realms – we must develop better technologies in areas like energy, because the technologies we have are not sustainable and not good enough. So even if accelerating change does prove to be a mirage, we still can’t afford innovation stagnation.

The next twenty-five years

The Observer ran a feature today collecting predictions for the next twenty five years from commentators about politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, to have computers so powerful that they will surpass human intelligence, and to lead to a new kind of medicine on a sub-cellular level that will allow us to abolish aging and death.

I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

If the technology we’ve got isn’t sustainable, doesn’t that mean we need better technology?

Friends of the Earth have published a new report called “Nanotechnology, climate and energy: over-heated promises and hot air?” (here but the website was down when I last looked). As its title suggests, it expresses scepticism about the idea that nanotechnology can make a significant contribution to making our economy more sustainable. It does make some fair points about the distance between rhetoric and reality when it comes to claims that nano-manufacturing can be intrinsically cleaner and more precise than conventional processing (the reality being, of course, that the manufacturing processes used to make nanomaterials are not currently very much different to processes to make existing materials). It also expresses scepticism about ideas such as the hydrogen economy, which I to some extent share. But I think its position betrays one fundamental and very serious error. That is the comforting, but quite wrong, belief that there is any possibility of moving our current economy to a sustainable basis with existing technology in the short term (i.e. in the next ten years).

Take, for example, solar energy. I’m extremely positive about its long term prospects. At the moment, the world uses energy at a rate of about 16 Terawatts (a TW is one thousand Gigawatts; one GW is about the scale of a medium-sized power station). The total solar power arriving at the earth is 162,000 TW – so there is, in principle, an abundance of solar energy. But the world’s total installed solar capacity is just over 2 GW (the nominal world installed capacity was, in 2008, 13.8 GW, which represents a real output of around 2 GW, having accounted for the lack of 24 hour sunshine and system losses. These numbers come from NREL’s 2008 Solar Technologies Market Report). This is four orders of magnitude less than the rate at which we use energy. It’s true that the solar energy industry is growing very fast – at annual rates of 40-50% at the moment. But even if this rate of increase went on for another 10 years, we would only have achieved a solar contribution of around 200 GW by 2020. Meanwhile, on even the most optimistic assumption, the IEA predicts that our total energy needs will have increased by 1400 GW in this period, so this isn’t enough even to halt the increase in our rate of burning fossil fuels, let alone reverse it. And, without falls in cost from the current values of around $5 per installed Watt, by 2020 we’d need to be spending about $2.5 trillion a year to achieve this rate of growth, at which point solar would still only be supplying around 1% of world energy demand.
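Here is a rough sketch of the compounding arithmetic, using the figures quoted above (about 2 GW of real solar output in 2008, 40-50% annual growth, roughly $5 per installed watt, about 16 TW of world demand). The exact decade-end figure depends on the base and growth rate assumed, but the conclusion – that solar remains a small fraction of demand – is robust to those choices.

```python
# Back-of-the-envelope check of the solar numbers above (a sketch using the
# figures quoted in the text, not independent data).

world_demand_GW = 16_000          # ~16 TW of total world energy use
real_output_GW = 2.0              # capacity-factor-adjusted solar output, 2008
nominal_per_real = 13.8 / 2.0     # nominal installed capacity per unit of real output

for growth in (0.40, 0.50):
    out = real_output_GW * (1 + growth) ** 10
    print(f"{growth:.0%} growth for 10 years -> ~{out:.0f} GW of real output, "
          f"{out / world_demand_GW:.1%} of current demand, "
          f"{out / 1400:.0%} of the projected 1400 GW demand growth")

# Spending needed for one further year of 50% growth at that point, at $5 per
# nominal installed watt
out = real_output_GW * 1.5 ** 10
added_nominal_W = out * 0.5 * nominal_per_real * 1e9      # GW -> W
print(f"Spend in the final year at $5/W: ~${added_nominal_W * 5 / 1e12:.1f} trillion")
```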

What this tells us is that though our existing technology for harvesting solar energy may be good in many ways – it’s efficient and long-lasting – it’s too expensive and in need of a step-change in the scale on which it can be produced. That’s why new solar cell technology is needed – and why those candidates which use nanotechnologies to enable large scale, roll to roll processing are potentially attractive. We know that currently these technologies aren’t ready for the mass market – their efficiencies and lifetimes aren’t good enough yet. And incremental developments of conventional silicon solar cells may yet surprise us and bring their costs down dramatically, and that would be a very good outcome too. But this is why research is needed. For perspective, look at this helpful graphic to see how the efficiencies of all solar cells have evolved with time. Naturally, the most recently invented technologies – such as the polymer solar cells – have progressed less far than the more mature technologies that are at market.

A similar story could be told about batteries. It’s clear that the use of renewables on a large scale will need large scale energy storage methods to overcome problems of intermittency, and the electrification of transport will need batteries with high specific energy (for a recent review of the requirements for plug-in hybrids see here). Currently available lithium ion batteries have a specific energy of about half a megajoule per kilogram, roughly one hundredth of the specific energy of petrol (44 MJ/kg). They’re also too expensive and their lifetime is too short – they deteriorate at a rate of about 2% a year. Once again, current technology is simply not good enough, and it’s not getting better fast enough; new technology is needed, and this will almost certainly require better control of nanostructure.
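To put that specific-energy gap in concrete terms, here is a quick sketch; the tank size and the engine and battery-to-wheel efficiencies are my own illustrative assumptions, not figures from the text.

```python
# A quick sketch of the specific-energy gap described above. The 25% engine and
# 90% motor/battery efficiencies are illustrative assumptions.

petrol_MJ_per_kg = 44.0
li_ion_MJ_per_kg = 0.5

tank_kg = 40.0                       # roughly 50 litres of petrol
engine_eff, motor_eff = 0.25, 0.90   # assumed tank-to-wheel efficiencies

useful_MJ = tank_kg * petrol_MJ_per_kg * engine_eff
battery_kg = useful_MJ / (li_ion_MJ_per_kg * motor_eff)

print(f"Raw specific-energy ratio: ~{petrol_MJ_per_kg / li_ion_MJ_per_kg:.0f}x")
print(f"Battery mass for the same useful energy as a {tank_kg:.0f} kg tank: "
      f"~{battery_kg:.0f} kg")
```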

Could we, alternatively, get by using less energy? Improving energy efficiency is certainly worth doing, and new technology can help here too. But substantial reductions in energy use will be associated with drops in living standards which, in rich countries, are going to be a hard sell politically. The politics of persuading poorer countries that they should forgo economic growth will be even trickier, given that, unlike the rich countries, they haven’t accumulated the benefit of centuries of economic growth fueled by cheap fossil-fuel based energy, and they don’t feel responsible for the resulting accumulation of atmospheric carbon dioxide. Above all, we mustn’t underestimate the degree to which, not just our comfort, but our very existence depends on cheap energy – notably in the high energy inputs needed to feed the world’s population. This is the hard fact that we have to face – we are existentially dependent on the fossil-fuel based technology we have now, but we know this technology isn’t sustainable and we don’t yet have viable replacements. In these circumstances we simply don’t have a choice but to try and find better, more sustainable energy technologies.

Yes, of course we have to assess the risks of these new technologies, of course we need to do the life-cycle analyses. And while Friends of the Earth may say they’re shocked (shocked!) that nanotechnology is being used by the oil industry, this seems to me to be either a rather disingenuous piece of rhetoric, or an expression of supreme naivety about the nature of capitalism. Naturally, the oil industry will be looking at new technology such as nanotechnology to help their business; they’ve got lots of money and some pressing needs. And for all I know, there may be jungle labs in Colombia looking for applications of nanotechnology in the recreational pharmaceuticals sector right now. I can agree with FoE that it was unconvincing to suggest that there was something inherently environmentally benign about nanotechnology, but it’s equally foolish to imply that, because the technology can be used in industries that you disapprove of, it is intrinsically bad. What’s needed instead is a realistic and hard-headed assessment of the shortcomings of current technologies, and an attempt to steer potentially helpful emerging new technologies in beneficial directions.

Feynman, Waldo and the Wickedest Man in the World

It’s been more than fifty years since Richard Feynman delivered his lecture “Plenty of Room at the Bottom”, regarded by many as the founding vision statement of nanotechnology. That foundational status has been questioned, most notably by Chris Toumey in his article Apostolic Succession (PDF). In another line of attack, Colin Milburn, in his book Nanovision, argues against the idea that the ideas of nanotechnology emerged from Feynman’s lecture as the original products of his genius; instead, according to Milburn, Feynman articulated and developed a set of ideas that were already current in science fiction. And, as I briefly mentioned in my report from September’s SNET meeting, according to Milburn, the intellectual milieu from which these ideas emerged had some very weird aspects.

Milburn describes some of the science fiction antecedents of the ideas in “Plenty of Room” in his book. Perhaps the most direct link can be traced for Feynman’s notion of remote control robot hands, which make smaller sets of hands, which can in turn be used to make yet smaller ones, and so on. The immediate source of this idea is Robert Heinlein’s 1942 novella “Waldo”, in which the eponymous hero devises just such an arrangement to carry out surgery on the sub-cellular level. There’s no evidence that Feynman had read “Waldo” himself, but Feynman’s friend Al Hibbs certainly had. Hibbs worked at Caltech’s Jet Propulsion Laboratory, and he had been so taken by Heinlein’s idea of robot hands as a tool for space exploration that he wrote up a patent application for it (dated 8 February 1958). Ed Regis, in his book “Nano”, tells the story, and makes the connection to Feynman, quoting Hibbs as follows: “It was in this period, December 1958 to January 1959, that I talked it over with Feynman. Our conversations went beyond my “remote manipulator” into the notion of making things smaller … I suggested a miniature surgeon robot…. He was delighted with the notion.”

“Waldo” is set in a near future, where nuclear derived energy is abundant, and people and goods fly around in vessels powered by energy beams. The protagonist, Waldo Jones, is a severely disabled mechanical genius (“Fat, ugly and hopelessly crippled” as it says on the back of my 1970 paperback edition) who lives permanently in an orbiting satellite, sustained by the technologies he’s developed to overcome his bodily weaknesses. The most effective of these technologies are the remote controlled robot arms, named “waldos” after their inventor. The plot revolves around a mysterious breakdown of the energy transmission system, which Waldo Jones solves, assisted by the sub-cellular surgery he carries out with his miniaturised waldos.

The novella is dressed up in the apparatus of hard science fiction – long didactic digressions, complete with plausible-sounding technical details and references to the most up-to-date science, creating the impression that its predictions of future technologies are based on science. But, to my surprise, the plot revolves around, not science, but magic. The fault in the flying machines is diagnosed by a back-country witch-doctor, and involves a failure of will by the operators (itself a consequence of the amount of energy being beamed about the world). And the fault can itself be fixed by an act of will, by which energy in a parallel, shadow universe can be directed into our own world. Waldo Jones himself learns how to access the energy of this unseen world, and in this way overcomes his disabilities and fulfils his potential as a brain surgeon, dancer and all-round, truly human genius.

Heinlein’s background as a radio engineer explains where his science came from, but what was the source of this magical thinking? The answer seems to be the strange figure of Jack Parsons. Parsons was a self-taught rocket scientist, one of the founders of the Jet Propulsion Laboratory and a key figure in the early days of the USA’s rocket program (his story is told in George Pendle’s biography “Strange Angel”). But he was also deeply interested in magic, and was a devotee of the English occultist Aleister Crowley. Crowley, aka The Great Beast, was notorious for his transgressive interest in ritual magic – particularly sexual magic – and attracted the title “the wickedest man in the world” from the English newspapers between the wars. He had founded a religion of his own, whose organisation, the Ordo Templi Orientis, promulgated his creed, summarised as “Do what thou wilt shall be the whole of the Law”. Parsons was initiated into the Hollywood branch of the OTO in 1941; in 1942 Parsons, now a leading figure in the OTO, moved the whole commune into a large house in Pasadena, where they lived according to Crowley’s transgressive law. Also in 1942, Parsons met Robert Heinlein at the Los Angeles Science Fiction Society, and the two men became good friends. Waldo was published that year.

The subsequent history of Jack Parsons was colourful, but deeply unhappy. He became close to another member of the circle of LA science fiction writers, L. Ron Hubbard, who moved into the Pasadena house in 1945 with catastrophic effects for Parsons. In 1952, Parsons died in a mysterious explosives accident in his basement. Hubbard, of course, went on to found a religion of his own, Scientology.

This is a fascinating story, but I’m not sure what it signifies, if anything. Colin Milburn suggests that “it is tempting to see nanotech’s aura of the magical, the impossible made real, as carried through the Parsons-Heinlein-Hibbs-Feynman genealogy”. Sober scientists working in nanotechnology would argue that their work is as far away from magical thinking as one can get. But amongst those groups on the fringes of the science that cheer nanotechnology on – the singularitarians and transhumanists – I’m not sure that magic is so distant. Universal abundance through nanotechnology, universal wisdom through artificial intelligence, and immortal life through the defeat of ageing – these sound very much like the traditional aims of magic – these are parallels that Dale Carrico has repeatedly drawn attention to. And in place of Crowley’s Ordo Templi Orientis (and no doubt without some of the OTO’s more colourful practices), transhumanists have their very own Order of Cosmic Engineers, to “engineer ‘magic’ into a universe presently devoid of God(s).”

Computing with molecules

This is a pre-edited version of an essay that was first published in the April 2009 issue of Nature Nanotechnology – Nature Nanotechnology 4, 207 (2009) (subscription required for full online text).

The association of nanotechnology with electronics and computers is a long and deep one, so it’s not surprising that a central part of the vision of nanotechnology has been the idea of computers whose basic elements are individual molecules. The individual transistors of conventional integrated circuits are at the nanoscale already, of course, but they’re made top-down by carving them out from layer-cakes of semiconductors, metals and insulators – what if one could make the transistors by joining together individual molecules? This idea – of molecular electronics – is an old one, which actually predates the widespread use of the term nanotechnology. As described in an excellent history of the field by Hyungsub Choi and Cyrus Mody (The Long History of Molecular Electronics, PDF) its origin can be securely dated at least as early as 1973; since then it has had a colourful history of big promises, together with waves of enthusiasm and disillusionment.

Molecular electronics, though, is not the only way of using molecules to compute, as biology shows us. In an influential 1995 review, Protein molecules as computational elements in living cells (PDF), Dennis Bray pointed out that the fundamental purpose of many proteins in cells seems to be more to process information than to effect chemical transformations or make materials. Mechanisms such as allostery permit individual protein molecules to behave as individual logic gates; one or more regulatory molecules bind to the protein, and thereby turn on or off its ability to catalyse a reaction. If the product of that reaction itself regulates the activity of another protein, one can think of the result as an operation which converts an input signal conveyed by one molecule into an output conveyed by another, and by linking together many such reactions into a network one builds a chemical “circuit” which in effect can carry out computational tasks of more or less complexity. The classical example of such a network is the one underlying the ability of bacteria to swim towards food or away from toxins. In bacterial chemotaxis, information from sensors about many different chemical species in the environment is integrated to produce the signals that control a bacterium’s motors, resulting in apparently purposeful behaviour.
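As a crude caricature of how such a network computes (a toy boolean abstraction, not a model of real allosteric kinetics or of real chemotaxis), one can think of each protein as a gate whose output species becomes the input to the next:

```python
# Toy boolean abstraction of protein logic gates chained into a small network.
# The names and the decision rule are illustrative inventions, not biology.

def allosteric_gate(activator: bool, inhibitor: bool) -> bool:
    """Active only when its activator is bound and its inhibitor is absent."""
    return activator and not inhibitor

def network(food: bool, toxin: bool) -> bool:
    """Integrate two environmental signals into one motor command."""
    kinase_active = allosteric_gate(activator=toxin, inhibitor=food)
    # the kinase's product acts as the input to the downstream motor switch
    return not kinase_active   # True -> keep swimming, False -> tumble and reorient

for food in (False, True):
    for toxin in (False, True):
        print(f"food={food!s:5} toxin={toxin!s:5} -> run={network(food, toxin)}")
```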

The broader notion that much cellular activity can be thought of in terms of the processing of information by the complex networks involved in gene regulation and cell signalling has had a far-reaching impact in biology. The unravelling of these networks is the major concern of systems biology, while synthetic biology seeks to re-engineer them to make desired products. The analogies between electronics and systems thinking and biological systems are made very explicit in much writing about synthetic biology, with its discussion of molecular network diagrams, engineered gene circuits and interchangeable modules.

And yet, this alternative view of molecular computing has yet to make much impact in nanotechnology. Molecular logic gates have been demonstrated in a number of organic compounds, for example by the Belfast based chemist Prasanna de Silva; here ingenious molecular design can allow several input signals, represented by the presence or absence of different ions or other species, to be logically combined to produce outputs represented by optical fluorescence signals at different wavelengths. In one approach, a molecule consists of a fluorescent group attached by a spacer unit to receptor groups; in the absence of bound species at the receptors, electron transfer from the receptor group to the fluorophore suppresses its fluorescence. Other approaches employ molecular shuttles – rotaxanes – in which physically linked but mobile molecular components move to different positions in response to changes in their chemical environment. These molecular engineering approaches are leading to sensors of increasing sophistication. But because the output is in the form of fluorescence, rather than a molecule, it is not possible to link many such logic gates into a network.

At the moment, it seems the most likely avenue for developing complex networks for information processing based on synthetic components will use nucleic acids, particularly DNA. Like other branches of the field of DNA nanotechnology, progress here is being driven by the growing ease and cheapness with which it is possible to synthesise specified sequences of DNA, together with the relative tractability of design and modelling of molecular interactions based on the base pair interaction. One demonstration from Erik Winfree’s group at Caltech uses this base pair interaction to design logic gates based on DNA molecules. These accept inputs in the form of short RNA strands, and output DNA strands according to the logical operations OR, AND or NOT. The output strands can themselves be used as inputs for further logical operations, and it is this that would make it possible in principle to develop complex information processing networks.
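The crucial property is that the outputs are molecules of the same kind as the inputs, so gates compose. Here is a toy sketch of that composability (strands are treated as mere labels, the strand names are made up, and the real strand-displacement chemistry and kinetics are ignored):

```python
# A toy illustration of composability, not a simulation of DNA strand displacement:
# a gate releases its output strand when its input strands are present in the pool,
# so outputs can feed further gates.

def and_gate(pool: set, a: str, b: str, out: str) -> None:
    """Release `out` only if both input strands are in the pool."""
    if a in pool and b in pool:
        pool.add(out)

def or_gate(pool: set, a: str, b: str, out: str) -> None:
    """Release `out` if either input strand is in the pool."""
    if a in pool or b in pool:
        pool.add(out)

pool = {"miRNA-1", "miRNA-2"}                          # inputs present in the "test tube"

or_gate(pool, "miRNA-1", "miRNA-3", out="strand-X")    # first layer
and_gate(pool, "strand-X", "miRNA-2", out="strand-Y")  # second layer uses the
                                                       # first layer's output
print("final output strand released:", "strand-Y" in pool)
```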

What should we think about using molecular computing for? The molecular electronics approach has a very definite target; to complement or replace conventional CMOS-based electronics, to ensure the continuation of Moore’s law beyond the point when physical limitations prevent any further miniaturisation of silicon-based devices. The inclusion of molecular electronics in the latest International Technology Roadmap for Semiconductors indicates the seriousness of this challenge, and molecular electronics and other related approaches, such as graphene-based electronics, will undoubtedly continue to be enthusiastically pursued. But these are probably not appropriate goals for molecular computing with chemical inputs and outputs. Instead, the uses of these technologies are likely to be driven by their most compelling unique selling point – the ability to interface directly with the biochemical processes of the cell. It’s been suggested that such molecular logic could be used to control the actions of a sophisticated drug delivery device, for example. An even more powerful possibility is suggested by another paper (abstract, subscription required for full paper) from Christina Smolke (now at Stanford). In this work an RNA construct controls the in-vivo expression of a particular gene in response to this kind of molecular logic. This suggests the creation of what could be called molecular cyborgs – the result of a direct merging of synthetic molecular logic with the cell’s own control systems.

Society for the study of nanoscience and emerging technologies

Last week I spent a couple of days in Darmstadt, at the second meeting of the Society for the Study of Nanoscience and Emerging Technologies (S.NET). This is a relatively informal group of scholars in the field of Science and Technology Studies from Europe, the USA and some other countries like Brazil and India, coming together from disciplines like philosophy, political science, law, innovation studies and sociology.

Arie Rip (president of the society, and to many the doyen of European science and technology studies) kicked things off with the assertion that nanotechnology is, above all, a socio-political project, and the warning that this object of study was in the process of disappearing (a theme that recurred throughout the conference). Undaunted by this prospect, Arie observed that their society could keep its acronym and rename itself the Society for the Study of Newly Emerging Technologies.

The first plenary lecture was from the French philosopher Bernard Stiegler, on Knowledge, Industry and Distrust at the Time of Hyperminiaturisation. I have to say I found this hard going; the presentation was dense with technical terms and delivered by reading a prepared text. But I’m wiser about it now than I was, thanks to a very clear and patient explanation from Colin Milburn over dinner that evening, who filled us in with the necessary background about Derrida’s interpretation of Plato’s pharmakon, and Simondon’s notion of disindividuation.

One highlight for me was a talk by Michael Bennett about changes in the intellectual property regime in the USA during the 1980s and 1990s. He made a really convincing case that the growth of nanotechnology went in parallel with a series of legal and administrative changes that amounted to a substantial intensification of the intellectual property regime in the USA. While some people think that developments in law struggle to keep up with science and technology, he argued instead that law bookends the development of technoscience, both shaping the emergence of the science and dominating the way it is applied. This growing influence, though, doesn’t help innovation. Recent trends, such as the tendency of research universities to patent early with very wide claims, and to seek exclusive licenses, aren’t helpful; we’re seeing the creation of “patent thickets”, such as the one that surrounds carbon nanotubes, which substantially add to the cost and increase uncertainty for those trying to commercialise technologies in this area. And there is evidence of an “anti-commons” effect, where other scientists are inhibited from working on systems when patents have been issued.

A round-table discussion on the influence of Feynman’s lecture “Plenty of Room at the Bottom” on the emergence of nanotechnology as a field produced some surprises too. I’m already familiar with Chris Toumey’s careful demonstration that Plenty of Room’s status as the foundation of nanotechnology was largely granted retrospectively (see, for example, his article Apostolic Succession, PDF); Cyrus Mody’s account of the influence it had on the then emerging field of microelectronics adds some shade to this picture. Colin Milburn made some comments that put Feynman’s lecture into the cultural context of its time; particularly in the debt it owed to science fiction stories like Robert Heinlein’s “Waldo”. And, to my great surprise, he reminded us just how weird the milieu of post-war Pasadena was; the very odd figure of Jack Parsons helping to create the Jet Propulsion Laboratory while at the same time conducting a programme of magic inspired by Aleister Crowley and involving a young L. Ron Hubbard. At this point I felt I’d stumbled out of an interesting discussion of a by-way of the history of science into the plot of an unfinished Thomas Pynchon novel.

The philosopher Andrew Light talked about how deep disagreements and culture wars arise, and the distinction between intrinsic and extrinsic objections to new technologies. This was an interesting analysis, though I didn’t entirely agree with his prescriptions, and a number of other participants showed some unease at the idea that the role of philosophers is to create a positive environment for innovation. My own talk was a bit of a retrospective, with the title “What has nanotechnology taught us about contemporary technoscience?” The organisers will be trying to persuade me to write this up for the proceedings volume, so I’ll say no more about this for the moment.

On pure science, applied science, and technology

It’s conventional wisdom that science is very different from technology, and that it makes sense to distinguish between pure science and applied science. Largely as a result of thinking about nanotechnology (as I discussed a few years ago here and here), I’m no longer so confident that there’s such a clean break between science and technology, or, for that matter, between pure and applied science.

Historians of science tell us that the origin of the distinction goes back to the ancient Greeks, who distinguished between episteme, which is probably best translated as natural philosophy, and techne, translated as craft. Our word technology derives from techne, but careful scholars remind us that technology actually refers to writing about craft, rather than doing the craft itself. They would prefer to call the actual business of making machines and gadgets technique (in the same way as the Germans call it technik), rather than technology. Of course, for a long time nobody wrote about technique at all, so there was in this literal sense no technology. Craft skills were regarded as secrets, to be handed down in person from master to apprentice, who were from a lower social class than the literate philosophers considering more weighty questions about the nature of reality.

The sixteenth century saw some light being thrown on the mysteries of technique with books (often beautifully illustrated) being published about topics like machines and metal mining. But one could argue that the biggest change came with the development of what was called then experimental philosophy, which we see now as being the beginnings of modern science. The experimental philosophers certainly had to engage with craftsmen and instrument makers to do their experiments, but what was perhaps more important was the need to commit the experimental details to writing so that their counterparts and correspondents elsewhere in the country or elsewhere in Europe could reliably replicate the experiments. Complex pieces of scientific apparatus, like Robert Boyle’s airpump, certainly were some of the most advanced (and expensive) pieces of technology of the day. And, conversely, it’s no accident that James Watt, who more than anyone else made the industrial revolution possible with his improved steam engine, learned his engineering as an instrument maker at the University of Glasgow.

But surely there’s a difference between making a piece of experimental apparatus to help unravel the ultimate nature of reality, and making an engine to pump a mine out? In this view, the aim of science is to understand the ultimate fundamental nature of reality, while technology seeks merely to alter the world in some way, with its success being judged simply by whether it does its intended job. In actuality, the aspect of science as natural philosophy, with its claims to deep understanding of reality, has always coexisted with a much more instrumental type of science whose success is judged by the power over nature it gives us (Peter Dear’s book The Intelligibility of Nature is a fascinating reflection on the history of this dual character of science). Even the keenest defenders of science’s claim to make reliable truth-claims about the ultimate nature of reality often resort to entirely instrumental arguments – “if you’re so sceptical about science”, they’ll ask a relativist or social constructionist, “why do you fly in airplanes or use antibiotics?”

It’s certainly true that different branches of science are, to a different degree, applicable to practical problems. But which science is an applied science and which is a pure science depends as much on what problems society, at a particular time and in a particular place, needs solving, as on the character of the science itself. In the sixteenth and seventeenth centuries astronomy was a strategic subject of huge importance to the growing naval powers of the time, and was one of the first recipients of large scale state funding. The late nineteenth and early twentieth centuries were the heyday of chemistry, with new discoveries in explosives, dyes and fertilizers making fortunes and transforming the world only a few years after their discoveries in the laboratory. A contrarian might even be tempted to say “a pure science is an applied science that has outlived its usefulness”.

Another way of seeing the problems of a supposed divide between pure science, applied science and technology is to ask what it is that scientists actually do in their working lives. A scientist building a detector for CERN or writing an image analysis program for some radio astronomy data may be doing the purest of pure science in terms of their goals – understanding particle physics or the distant universe – but what they’re actually doing day to day will look very similar indeed to their applied scientist counterparts designing medical imaging hardware or software for interpreting CCTV footage for the police. Of course, this is the origin of the argument that we should support pure science for the spin-offs it produces (such as the World Wide Web, as the particle physicists continually remind us). A counter-argument would say, why not simply get these scientists to work on medical imaging (say) in the first place, rather than trying to look for practical applications for the technologies they develop in support of their “pure” science? Possible answers to this might point to the fact that the brightest people are motivated to solve deep problems in a way that might not apply to more immediately practical issues, or that our economic system doesn’t provide reliable returns for the most advanced technology developed on a speculative basis.

If it was ever possible to think that pure science could exist as a separate province from the grubby world of application, like Hesse’s “The Glass Bead Game”, that illusion was shattered in the second world war. The purest of physicists delivered radar and the fission bomb, and in the cold war that we emerged into, it seemed that the final destiny of the world was going to be decided by the atomic physicists. In the west, the implications of this for science policy were set out by Vannevar Bush. Bush, an engineer and perhaps the pre-eminent science administrator of the war, set out the framework for government funding of science in the USA in his report “Science: the endless frontier”.

Bush’s report emphasised, not “pure” research, but “basic” research. The distinction between basic research and applied research was not to be understood in terms of whether it was useful or not, but in terms of the motivations of the people doing it. “Basic research is performed without thought of practical ends” – but those practical ends do, nonetheless, follow (albeit unpredictably), and it’s the job of applied research to fill in the gaps. It had in the past been possible for a country to make technological progress without generating its own basic science (as the USA did in the 19th century) but, Bush asserted, the modern situation was different, and “A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade”.

Bush thus left us with three ideas that form the core of the postwar consensus on science policy. The first was that basic research should be carried out in isolation from thoughts of potential use – that it should result from "the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown". The second was that, even though the scientists who produced this basic knowledge weren’t motivated by practical applications, these applications would follow, by a process in which potential applications were picked out and developed by applied scientists, and then converted into new products and processes by engineers and technologists. This one-way flow of ideas from science into application is what innovation theorists call the linear model of innovation. Bush’s third assertion was that a country that invested in basic science would recoup that investment through capturing the rewards from new technologies.

All three of these assertions have subsequently been extensively criticised, though the basic picture has a persistent hold on our thinking about science. Perhaps the most influential critique, from the science policy point of view, came in a book by Donald Stokes called Pasteur’s quadrant. Stokes argued from history that the separation of basic research from thoughts of potential use often didn’t happen; his key example was Louis Pasteur, who created a new field of microbiology in his quest to understand the spoilage of milk and the fermentation of wine. Rather than thinking about a linear continuum between pure and applied research, he thought in terms of two dimensions – the degree to which research was motivated by a quest for fundamental understanding, and the degree to which it was motivated by applications. Some research was driven solely by the quest for understanding, typified by Bohr, while an engineer like Edison typified a search for practical results untainted by any deeper curiosity. But, the example of Pasteur showed us that the two motivations could coexist. He suggested that research in this “Pasteur’s quadrant” – use-inspired basic research – should be a priority for public support.

Where are we now? The idea of Pasteur’s quadrant underlies the idea of “Grand Challenges” inspired by societal goals as an organising principle for publicly supported science. From innovation theory and science and technology studies come new terms and concepts, like technoscience, and Mode 2 knowledge production. One might imagine that nobody believes in the linear model anymore; it’s widely accepted that technology drives science as often as science drives technology. As David Willetts, the UK’s Science Minister, put it in a speech in July this year, “A very important stimulus for scientific advance is, quite simply, technology. We talk of scientific discovery enabling technical advance, but the process is much more inter-dependent than that.” But the linear model is still deeply ingrained in the way policy makers talk – in phrases like “technology readiness levels” and “pull-through to application”. From a more fundamental point of view, though, there is still a real difference between finding evidence to support a hypothesis and demonstrating that a gadget works. Intervening in nature is a different goal to understanding nature, even though the processes by which we achieve these goals are very much mixed up.

Energy, carbon, money – floating rates of exchange

When one starts reading about the future of the world’s energy economy, one needs to get used to making conversions amongst a zoo of energy units – exajoules, millions of tonnes of oil equivalent, quadrillions of British thermal units and the rest. But these conversions are trivial in comparison to a couple of other rates of exchange – the relationship between energy and carbon emissions (using this term as a shorthand for the effect of energy use on the global climate), and the conversion between energy and money.
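For reference, here is a small sketch of the standard conversion factors involved (1 toe = 41.868 GJ, 1 BTU = 1055.06 J, 1 quad = 10^15 BTU), applied for illustration to the roughly 16 TW of continuous world energy use mentioned earlier in this collection.

```python
# The conversion factors behind the "zoo" of energy units (standard definitions).

J_PER_EJ = 1e18                       # exajoule
J_PER_MTOE = 1e6 * 41.868e9           # million tonnes of oil equivalent
J_PER_QUAD = 1e15 * 1055.06           # quadrillion British thermal units
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# ~16 TW of continuous world energy use, expressed per year in each unit
world_J_per_year = 16e12 * SECONDS_PER_YEAR
print(f"{world_J_per_year / J_PER_EJ:.0f} EJ/yr")
print(f"{world_J_per_year / J_PER_MTOE:.0f} Mtoe/yr")
print(f"{world_J_per_year / J_PER_QUAD:.0f} quads/yr")
```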

On the face of it, it’s easy to see the link between emissions and energy. You burn a tonne of coal, you get 29 GJ of energy out and you emit 2.6 tonnes of carbon dioxide. But, if we step back to the level of a national or global economy, the emissions per unit of energy used depend on the form in which the energy is used (directly burning natural gas vs using electricity, for example) and, for the case of electricity, on the mix of generation being used. But if we want an accurate picture of the impact of our energy use on climate change, we need to look at more than just carbon dioxide emissions. CO2 is not the only greenhouse gas; methane, for example, despite being emitted in much smaller quantities than CO2, is still a significant contributor to climate change as it is a considerably more potent greenhouse gas than CO2. So if you’re considering the total contribution to global warming of electricity derived from a gas power station you need to account, not just for the CO2 produced by direct burning, but also for the effect of any methane emitted from leaks in the pipes bringing gas to the power station. Likewise, the effect on climate of the high altitude emissions from aircraft is substantially greater than that from the carbon dioxide alone, for example due to the production of high altitude ozone from NOx emissions. All of these factors can be wrapped up by expressing the effect of emissions on the climate through a measure of “mass of carbon dioxide equivalent”. It’s important to take these additional factors into account, or you end up significantly underestimating the climate impact of much energy use, but this accounting embodies more theory and more assumptions.
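As a concrete illustration, here is a back-of-the-envelope sketch. The coal figures are the ones quoted above; the power-station efficiency, the methane leak rate and the 100-year global warming potential of about 25 for methane are my own illustrative assumptions, not figures from the text.

```python
# A sketch using the coal figures quoted above (29 GJ and 2.6 t CO2 per tonne of
# coal). The 38% plant efficiency, the gas-plant figure, the 2% methane leak and
# the GWP of 25 are illustrative assumptions.

coal_GJ_per_t = 29.0
coal_tCO2_per_t = 2.6

kWh_per_GJ = 1e9 / 3.6e6      # 1 kWh = 3.6 MJ

# direct emissions per kWh of heat, and per kWh of electricity at 38% efficiency
kg_per_kWh_heat = coal_tCO2_per_t * 1000 / (coal_GJ_per_t * kWh_per_GJ)
kg_per_kWh_elec = kg_per_kWh_heat / 0.38
print(f"coal: {kg_per_kWh_heat:.2f} kgCO2/kWh(heat), "
      f"{kg_per_kWh_elec:.2f} kgCO2/kWh(electricity)")

# CO2-equivalent example for gas: add upstream methane leakage weighted by its GWP
gas_kgCO2_per_kWh_elec = 0.40        # assumed direct figure for a gas plant
leak_kgCH4_per_kWh = 0.005           # assumed upstream leak
GWP_methane = 25                     # approximate 100-year global warming potential
print(f"gas:  {gas_kgCO2_per_kWh_elec + leak_kgCH4_per_kWh * GWP_methane:.2f} "
      f"kgCO2e/kWh including methane leaks")
```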

For a highly accessible and readable account of the complexities of assigning carbon footprints to all sorts of goods and activities, I recommend Mike Berners-Lee’s new book How Bad Are Bananas?: The carbon footprint of everything. This has some interesting conclusions – his insistence on full accounting leads to surprisingly high carbon footprints for rice and cheese, for example (as the title hints, he recommends you eat more bananas). But carbon accounting is in its infancy; what’s arguably most important now is money.

At first sight, the conversion between energy and money is completely straightforward; we have well-functioning markets for common energy carriers like oil and gas, and everyone’s electricity bill makes it clear how much we’re paying individually. The problem is that it isn’t enough to know what the cost of energy is now; if you’re deciding whether to build a nuclear power station or to install photovoltaic panels on your roof, to make a rational economic decision you need to know what the price of energy is going to be over a twenty to thirty year timescale, at least (the oldest running nuclear power reactor in the UK was opened in 1968).
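A toy net-present-value calculation shows why this matters. Every number here (capital cost, output, lifetime, discount rate, price paths) is a hypothetical placeholder rather than data from the text, but the sign of the answer flips depending on which price trajectory you assume.

```python
# A sketch of why long-range price forecasts matter for investment decisions.
# All numbers are hypothetical placeholders.

def npv(capital, annual_output_MWh, price_path, discount=0.07):
    """Net present value of a plant: upfront capital against discounted revenues."""
    revenues = sum(annual_output_MWh * p / (1 + discount) ** (t + 1)
                   for t, p in enumerate(price_path))
    return revenues - capital

capital = 4e9                     # $ for a ~1 GW plant
output = 7_000_000                # MWh/yr at ~80% capacity factor
lifetime = 30                     # years of operation

flat = [50] * lifetime                               # $/MWh stays at today's level
rising = [50 * 1.03 ** t for t in range(lifetime)]   # prices rise 3% a year
falling = [50 * 0.98 ** t for t in range(lifetime)]  # prices fall 2% a year

for name, path in [("flat", flat), ("rising 3%/yr", rising), ("falling 2%/yr", falling)]:
    print(f"{name:14s} NPV = ${npv(capital, output, path) / 1e9:+.1f} bn")
```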

The record of forecasting energy prices and demand is frankly dismal. Vaclav Smil devotes a whole chapter of his book Energy at the Crossroads: Global Perspectives and Uncertainties to this problem – the chapter is called, simply, “Against Forecasting”. Here are a few graphs of my own to make the point – these are taken from the US Energy Information Administration’s predictions of future oil prices.

In 2000 the USA’s Energy Information Administration produced this forecast for oil prices (from the International Energy Outlook 2000):

Historical oil prices up to 2000 in 2008 US dollars, with high, low and reference predictions made by the EIA in 2000

After a decade of relatively stable oil prices (solid black line), the EIA has relatively tight bounds between its high (blue line), low (red line) and reference (green line) predictions. Let’s see how this compared with what happened as the decade unfolded:

High, low and reference predictions for oil prices made by the EIA in 2000, compared with the actual outcome from 2000-2010

The EIA, having been mugged by reality in its 2000 forecasts, seems to have learnt from its experience, if the range of the predictions made in 2010 is anything to go by:

Successive predictions for future oil prices made by the USA's EIA in 2000 and 2010, compared to the actual outcome up to 2010

This forecast may be more prudent than the 2000 forecast, but with a variation of nearly a factor of four between the high and low scenarios, it’s also pretty much completely useless. Conventional wisdom in recent years argues that we should arrange our energy needs through a deregulated market. It’s difficult to see how this can work when the information on the timescale needed to make sensible investment decisions is so poor.

What does it mean to be a responsible nanoscientist?

This is the pre-edited version of an article first published in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be found here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, the European Commission recommended a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists themselves are happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. The uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence on those applications that researchers often feel, can limit the usefulness of this approach. Another recently issued code – the UK government’s Universal Ethical Code for Scientists (PDF) – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is in applications of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals that do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it, and such scientists will be in a good position to have a positive influence on those institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

David Willetts on Science and Society

The UK’s Minister for Universities and Science, David Willetts, made his first official speech about science at the RI on 9 July 2010. What everyone is desperate to know is how big a cut the science budget will take. Willetts can’t answer this yet, but the background position isn’t good. We know that the budget of his department – Business, Innovation and Skills – will be cut by somewhere between 25% and 33%. Science accounts for about 15% of this budget, with Universities accounting for another 29% (not counting the cost of student loans and grants, which accounts for a further 27%). So there’s not going to be a lot of room to protect spending on science and on research in Universities.
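
To see just how tight that squeeze is, here is a minimal back-of-envelope sketch (in Python) using only the rounded percentages quoted above – the figures are illustrative assumptions, not official budget lines. If science and university funding were fully protected, the rest of the department would have to absorb the whole cut.

```python
# Rough illustration of the budget squeeze described above.
# All shares are the rounded, assumed percentages quoted in the text,
# not official BIS budget figures.

science_share = 0.15       # science as a share of the BIS budget
universities_share = 0.29  # university funding (excluding student loans and grants)
protected_share = science_share + universities_share   # ~44% of the budget
unprotected_share = 1.0 - protected_share              # ~56% of the budget

for overall_cut in (0.25, 0.33):
    # If the protected 44% is left untouched, the remaining 56% must
    # absorb the entire departmental cut.
    implied_cut_on_rest = overall_cut / unprotected_share
    print(f"{overall_cut:.0%} overall cut -> {implied_cut_on_rest:.0%} cut "
          f"on the unprotected {unprotected_share:.0%} of the budget")
```

On these rounded assumptions, fully shielding science and university funding would imply cuts of very roughly 45–59% to everything else in the department – which is why some squeeze on the science budget looks hard to avoid.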

Having said this, it is a very interesting speech, in that Willetts takes some very clear positions on a number of issues related to science and innovation and their relationship to society, some of which are rather different from the views of previous governments. I met Willetts earlier in the year, and he said a couple of things then that struck me. He said that there was nothing in science policy that couldn’t be illuminated by looking at history. He mentioned in particular “The Shock of the Old”, by David Edgerton (which I’ve previously discussed here), and I noticed that at the RS meeting after the election he referred very approvingly to David Landes’s book “The Wealth and Poverty of Nations”. More personally, he referred with pride to his own family origins as Birmingham craftsmen, and he clearly knows the story of the Lunar Society well. His own academic background is as a social scientist, so it is to be expected that he’d have some well-developed views about science and society. Here’s how I gloss the relevant parts of his speech.

More broadly, as society becomes more diverse and cultural traditions increasingly fractured, I see the scientific way of thinking – empiricism – becoming more and more important for binding us together. Increasingly, we have to abide by John Rawls’s standard for public reason – justifying a particular position by arguments that people from different moral or political backgrounds can accept. And coalition, I believe, is good for government and for science, given the premium now attached to reason and evidence.

The American political philosopher John Rawls was very concerned with how, in a pluralistic society, one could agree on a common set of moral norms. He rejected the idea that you could construct morality on entirely scientific grounds, as consequentialist ethical systems like utilitarianism try to, looking instead for a principles-based morality; but he recognised that this was problematic in a society where Catholics, Methodists, Atheists and Muslims all had their different sets of principles. Hence the idea of trying to find moral principles that everyone in society can agree on, even though the grounds on which they approve of those principles may differ from group to group. In a coalition uniting parties that include people as different as Evan Harris and Philippa Stroud, one can see why Willetts might want to call in Rawls for help.

The connection to science is an interesting one, drawing on a particular reading of the development of the empirical tradition. According, for example, to Shapin and Schaffer (in their book “Leviathan and the Air-Pump”), one of the main aims of the Royal Society in its early days was to develop a way of talking about philosophy – based on experiment and empiricism, rather than doctrine – that didn’t evoke the clashing religious ideologies that had caused the bloody religious wars of the seventeenth century. On this view (championed by Robert Boyle), in experimental philosophy one should refrain entirely from talking about contentious issues like religion, restricting oneself to discussion of what one measures in experiments that are open to be observed and reproduced by anyone.

You might say that science is doing so well in the public sphere that the greatest risks it faces are complacency and arrogance. Crude reductionism puts people off.

I wonder if he’s thinking of the current breed of scientific atheists like Richard Dawkins?

Scientists can morph from admired public luminaries into public enemies, as debates over nuclear power and GM made clear. And yet I remain optimistic here too. The UK Research Councils had the foresight to hold a public dialogue about ramifications of synthetic biology ahead of Craig Venter developing the first cell controlled by synthetic DNA. This dialogue showed that there is conditional public support for synthetic biology. There is great enthusiasm for the possibilities associated with this field, but also fears about controlling it and the potential for misuse; there are concerns about impacts on health and the environment. We would do well to remember this comment from a participant: “Why do they want to do it? … Is it because they will be the first person to do it? Is it because they just can’t wait? What are they going to gain from it? … [T]he fact that you can take something that’s natural and produce fuel, great – but what is the bad side of it? What else is it going to do?” Synthetic biology must not go the way of GM. It must retain public trust. That means understanding that fellow citizens have their worries and concerns which cannot just be dismissed.

This is a significant passage, which seems to accept two important features of some current thinking about public engagement with science. Firstly, that it should be “upstream” – addressing areas of science, like synthetic biology, for which concrete applications have yet to emerge, and indeed in advance of significant scientific breakthroughs like Venter’s “synthetic cell”. Secondly, it accepts that the engagement should be two-way, that the concerns of the public may well be legitimate and should be taken seriously, and that these concerns go beyond simple calculations of risk.

The other significant aspect of Willetts’s speech was a wholesale rejection of the “linear model” of science and innovation, but this needs another post to discuss in detail.