Five years on from the Royal Society report

Five years ago this summer, the UK’s Royal Society and the Royal Academy of Engineering issued a report on Nanoscience and nanotechnologies: opportunities and uncertainties. The report had been commissioned by the Government, and it has been widely praised and widely cited. Five years on, it’s worth asking what difference it has made, and what is left to be done. The Responsible Nanoforum has gathered a fascinating set of answers to these questions – A beacon or just a landmark? Reactions come from scientists, people in industry and representatives of NGOs, and the collection is introduced by the Woodrow Wilson Center’s Andrew Maynard. His piece is also to be found on his blog. Here’s what I wrote.

The Royal Society/Royal Academy of Engineering report was important in a number of respects. It signalled a new openness from the science community, a new willingness by scientists and engineers to engage more widely with society. This was reflected in the composition of the group itself, with its inclusion of representatives from philosophy, social science and NGOs in addition to distinguished scientists, as well as in its recommendations. It accepted the growing argument that the place for public engagement was “upstream” – ahead of any major impacts on society; in the words of the report, “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.” Among its specific recommendations, its highlighting of the potential toxicity and environmental impacts of some classes of free, engineered nanoparticles has shaped much of the debate around nanotechnologies in the subsequent five years.

The impact in the UK has been substantial. We have seen a serious effort to engage the public in a genuinely open way; the recent EPSRC public dialogue on nanotechnology in healthcare demonstrates that these ideas have gone beyond public relations and have begun to make a real difference to the direction of science funding. The research the report called for on nanoparticle toxicity and ecotoxicity has been slower to get going. The opportunity to make a relatively small, focused investment in this area, as recommended by the report, was not taken, and this is to be regretted. Despite the slow start caused by this failure to act decisively, however, there is now in the UK a useful portfolio of research in toxicology and ecotoxicology.

One of the consequences of the late start in dealing with the nanoparticle toxicity issue is that it has dominated the public dialogue about nanotechnology, crowding out discussion of the potentially far-reaching consequences of these technologies in the longer term. We now need to learn the lessons of the Royal Society report and apply them to the new generations of nanotechnology now emerging from laboratories around the world, as well as to other, potentially transformative, technologies. Synthetic biology, which has strong overlaps with bionanotechnology, is now receiving similar scrutiny, and we can expect the debates surrounding subjects such as neurotechnology, pervasive information technology and geoengineering to grow in intensity. These discussions may be fraught and controversial, but the example of the Royal Society nanotechnology report, as a model for how to set the scene for a constructive debate about contentious scientific issues, will prove enduring.

Soft machines and robots

Robots is a website featuring regular podcasts about various aspects of robotics; it is currently running a podcast of an interview with me by Sabine Hauert, from EPFL’s Laboratory of Intelligent Systems. This was prompted by my talk at the IEEE Congress on Evolutionary Computing, which was essentially about how to build a nanobot. Regular readers of this blog will not be surprised to hear that a strong theme of both the interview and the talk is the need to take inspiration from biology when designing “soft machines”, which need to be optimised for the special, and to us very unfamiliar, physics of the nanoworld, rather than using inappropriate design principles derived from macroscopic engineering. For more on this, the interested reader might like to take a look at my earlier essay, “Right and wrong lessons from biology”.

Food nanotechnology – their Lordships deliberate

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the two chambers of Parliament, the UK legislature, with powers to revise and scrutinise legislation and, through its select committees, to hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently, as part of a slightly ramshackle programme of constitutional reform, the influence of the hereditaries has been much reduced, with the majority of the chamber now made up of members appointed for life by the government. These are drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from a democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on its website; this includes submissions from NGOs, industry organisations, scientific organisations and individual scientists. There’s a lot of material there, but taken together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.

Environmentally beneficial nanotechnology

Today I’ve been at Parliament in London, at an event sponsored by the Parliamentary Office of Science and Technology to launch the second phase of the Environmental Nanoscience Initiative, a joint UK–USA research programme led by the UK’s Natural Environment Research Council and the USA’s Environmental Protection Agency. It is a very welcome initiative, giving more focus to existing efforts to quantify possible detrimental effects of engineered nanoparticles on the environment. It’s important to put more effort into filling gaps in our knowledge about what happens to nanoparticles when they enter the environment and start entering ecosystems, but it’s equally important not to forget that a major motivation for doing research in nanotechnology in the first place is its potential to ameliorate the very serious environmental problems the world now faces. So I was very pleased to be asked to give a talk at the event highlighting some of the positive ways nanotechnology could benefit the environment. Here are some of the key points I tried to make.

Firstly, we should ask why we need new technology at all. There is a view (eloquently expressed, for example, in Bill McKibben’s book “Enough”) that our lives in the West are comfortable enough, that the technology we have now is enough to satisfy our needs without any more gadgets, and that the new technologies coming along – such as biotechnology, nanotechnology, robotics and neuro-technology – are so powerful, and have such potential to cause harm, that we should consciously relinquish them.

This argument is seductive to some, but it’s profoundly wrong. Currently the world supports more than six billion people; by the middle of the century that number may be starting to plateau out, perhaps at between 8 and 10 billion. It is technology that allows the planet to support these numbers; to give just one instance, our food supplies depend on the Haber-Bosch process, which uses fossil fuel energy to fix nitrogen for use in artificial fertilisers. It’s estimated that without Haber-Bosch nitrogen, more than half the world’s population would starve, even if everyone adopted a minimal, vegetarian diet. So we are existentially dependent on technology – but the technology we depend on isn’t sustainable. To escape from this bind, we must develop new, and more sustainable, technologies.

Energy is at the heart of all these issues; the availability of cheap and concentrated energy is what underlies our prosperity, and as the world’s population grows and becomes more prosperous, demand for energy will grow. It is important to appreciate the scale of these needs, which are measured in tens of terawatts (remember that a terawatt is a thousand gigawatts, a gigawatt being the scale of a large coal-fired or nuclear power station). Currently the sources of this energy are dominated by fossil fuels, and it is the relentless growth of fossil fuel use since the late 18th century that has directly led to the rise in atmospheric carbon dioxide concentrations. This rise, together with that of other greenhouse gases, is leading to climate change, which in turn will lead to other problems, such as pressure on clean water supplies and growing insecurity of food supplies. It is this background that sets the agenda for the new technologies we need.
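To get a feel for these magnitudes, here is a back-of-the-envelope sketch; the figure of roughly 15 TW for current world primary energy demand is my own approximate assumption, used only to illustrate the scale.

```python
# Back-of-the-envelope sense of scale for world energy demand.
world_demand_tw = 15          # assumed: world primary energy demand, very roughly, in terawatts
gw_per_tw = 1000              # 1 TW = 1000 GW
station_size_gw = 1           # a large coal-fired or nuclear power station is of order 1 GW

equivalent_stations = world_demand_tw * gw_per_tw / station_size_gw
print(f"Equivalent to roughly {equivalent_stations:,.0f} gigawatt-scale power stations")
# -> Equivalent to roughly 15,000 gigawatt-scale power stations
```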

At the moment we don’t know for certain which of the many new technologies being developed to address these problems will work, either technically or socio-economically, so we need to pursue many different avenues, rather than imagining that some single solution will deliver us. Nanotechnology is at the heart of many of these potential solutions, in the broad areas of sustainable energy production, storage and distribution, in energy conservation, clean water, and environmental remediation. Let me focus on a couple of examples.

It’s well known that the energy we use is a small fraction of the total amount of energy arriving on the earth from the sun; in principle, solar energy could provide for all our energy needs. The problems are ones of cost and scale. Even in cloudy Britain, if we could cover every roof with solar cells we’d generate a significant fraction of the 42.5 GW that represents the average rate of electricity use in the UK. We don’t do this, firstly because it would be too expensive, and secondly because the total world output of solar cells, at about 2 GW a year, is a couple of orders of magnitude too small. A variety of potential nanotechnology-enabled solutions exist; for example, plastic solar cells offer the possibility of using ultra-cheap, large-area processing technologies to make solar cells on a very large scale. This is the area supported by EPSRC’s first nanotechnology grand challenge.
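To make the scale argument concrete, here is a rough sketch of the arithmetic; the 42.5 GW and 2 GW per year figures are those quoted above, while the 10% capacity factor for solar in cloudy Britain is my own assumption, used only for illustration.

```python
# Rough scale comparison: UK average electricity demand vs world solar cell production.
uk_average_demand_gw = 42.5            # average rate of UK electricity use (figure quoted above)
world_cell_output_gw_per_year = 2.0    # approximate annual world output of solar cells (quoted above)
assumed_capacity_factor = 0.10         # assumed: UK solar delivers ~10% of its peak rating on average

# Peak capacity needed to supply the average demand from solar alone,
# and how many years of current world cell production that represents.
required_peak_capacity_gw = uk_average_demand_gw / assumed_capacity_factor
years_of_world_output = required_peak_capacity_gw / world_cell_output_gw_per_year

print(f"Required peak capacity: ~{required_peak_capacity_gw:.0f} GW")
print(f"Years of current world cell output: ~{years_of_world_output:.0f}")
# -> ~425 GW of installed cells, or roughly two centuries of current world production –
#    the 'couple of orders of magnitude' gap referred to above.
```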

It’s important to recognise, though, that all these technologies still have major technical barriers to overcome; they are not going to come to market tomorrow. In the meantime, the continued large-scale use of fossil fuels looks inevitable, so the need to mitigate their impact by carbon capture and storage is becoming increasingly compelling to politicians and policy-makers. This technology is do-able today, but the costs are frightening: carbon capture and storage increases the price of coal-derived electricity by between 43% and 91%, and this is pure overhead. Nanotechnologies, in the form of new membranes and sorbents, could reduce this. Another contribution would be finding a use for the carbon dioxide, perhaps using photocatalytic reduction to convert water and CO2 into hydrocarbons and methanol, which could be used as transport fuels or chemical feedstocks. Carbon capture and utilisation is the general area of the third nanotechnology grand challenge, whose call for proposals is open now.

How can we make sure that our proposed innovations are responsible? The “precautionary principle” is often invoked in discussions of nanotechnology, but there are aspects of this notion that make me very uncomfortable. Certainly, we can all agree that we don’t want to implement “solutions” that bring their own, worse, problems, and the potential impacts of any new technology are necessarily uncertain. But on the other hand, we know that there are near-certain negative consequences of failing to act. Not to actively seek new technologies is itself a decision with impacts and consequences of its own, and in the situation we now find ourselves in, those consequences are likely to be very bad ones.

Responsible innovation, then, means that we must speed up research to fill the knowledge gaps and reduce uncertainty; this is the role of the Environmental Nanoscience Initiative. We need to direct our search for new technologies towards areas of societal need, where public support is assured by a broad consensus about the desirability of the goals. This means increasing our efforts in the area of public engagement, and ensuring a direct connection between that public engagement and decisions about research priorities. We need to recognise that there will always be uncertainty about the actual impacts of new technologies, but we should do our best to choose directions that we won’t regret, even if things don’t turn out the way we first imagined.

To sum up, nanotechnologies, responsibly implemented, are part of the solution for our environmental difficulties.

Moving on

For the last two years, I’ve been the Senior Strategic Advisor for Nanotechnology for the UK’s Engineering and Physical Sciences Research Council (EPSRC), the government agency that has the lead responsibility for funding nanotechnology in the UK. I’m now stepping down from this position to return to the University of Sheffield in a new, full-time role; EPSRC is currently in the process of appointing my successor.

In these two years, a substantial part of a new strategy for nanotechnology in the UK has been implemented. We’ve seen new Grand Challenge programmes targeting nanotechnology for harvesting solar energy and nanotechnology for medicine and healthcare, with a third programme, looking for new ways of using nanotechnology to capture and utilise carbon dioxide, shortly to be launched. At the more speculative end of nanotechnology, the “Software Control of Matter” programme received supplementary funding. Some excellent individual scientists have been supported through personal fellowships, and, looking to the future, the three new Doctoral Training Centres in nanotechnology will produce, over the next five years, up to 150 additional PhDs over and above EPSRC’s existing substantial support for graduate students. After a slow response to the 2004 Royal Society report on nanotechnology, I think we now find ourselves in a somewhat more defensible position with respect to the funding of nanotoxicology and ecotoxicology studies, with some useful projects in these areas being funded by the Medical Research Council and the Natural Environment Research Council respectively, and a joint programme with the USA’s Environmental Protection Agency about to be launched. With the public engagement exercise that was run in conjunction with the Grand Challenge on nanotechnology in medicine and healthcare, I think EPSRC has gone substantially further than any other funding agency in opening up decision-making about nanotechnology funding. I’ve found this experience fascinating and rewarding; my colleagues in the EPSRC nanotechnology team, led by John Wand, have been a pleasure to work with, and I’ve had a huge amount of encouragement and support from many scientists across the UK academic community.

In the process, I’ve learned a great deal; nanotechnology of course takes in physics, chemistry and biology, as well as elements of engineering and medicine. I’ve also come into contact with philosophers and sociologists, as well as artists and designers, from all of whom I’ve gained new insights. This education will stand me in good stead in my new role at Sheffield – as the Pro-Vice-Chancellor for Research and Innovation I’ll be responsible for the health of research right across the University.

Accelerating evolution in real and virtual worlds

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computing. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draw inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specifies appropriate behaviours for all these possibilities, one could use evolution to select a robot controller that works best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution; something like a computer game environment is used to simulate a robot doing a simple task like picking up an object or recognising a shape, with success or failure being used as the input to a fitness function, through which the robot controller is allowed to evolve.
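In outline, the loop described in these talks looks something like the sketch below; the controller representation and the stand-in fitness function are my own placeholders, not anyone’s actual system.

```python
import random

# Minimal evolutionary loop: evolve a robot controller (here just a vector of
# parameters) against a simulated task. The "simulation" below is a stand-in;
# in the talks described above it would be a physics or game-style robot simulator.
GENOME_LENGTH = 10
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.1

def simulate_fitness(controller):
    # Placeholder fitness: reward controllers whose parameters approach a target.
    # A real version would run the controller in a robot simulator and score
    # success or failure at the task (picking up an object, recognising a shape...).
    target = [0.5] * GENOME_LENGTH
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

def mutate(controller):
    return [c + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else c
            for c in controller]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Evaluate every controller in simulation, keep the better half,
    # and refill the population with mutated copies of the survivors.
    scored = sorted(population, key=simulate_fitness, reverse=True)
    survivors = scored[:POPULATION_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION_SIZE - len(survivors))]

best = max(population, key=simulate_fitness)
print("best fitness:", simulate_fitness(best))
```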

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grand master, humans still have a big edge over computers in arcade games. The human high score for Ms Pac-Man is 921,360; in a competition at the 2008 IEEE CEC meeting, the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge to computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than virtual, world – this came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. This was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work that is due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the pigment that makes tomatoes red (and probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen these variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial-scale production of materials like spider silk, and, in medicine, the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of them. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology to the constructs of human engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and of positive and negative feedback.

There is one metaphor used a lot in the report that seems to me potentially problematic – the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image of the box into which one slots circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism will ever provide such a neutral, predictable substrate for human engineering – these are complex systems with their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed both at buyers and at manufacturers. The attractions seem obvious – clean, emission-free transport, seemingly resolving effortlessly the conflict between people’s desire for personal mobility and our need to move to a lower-carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and the exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike of the UK’s Royal Society of Chemistry, puts numbers on the problem.

The first question to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this the transmission loss in getting from the power station to the plug, and a further loss from the charging/discharging cycle of the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied by different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations the situation would be substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction in emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
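The chain of losses is easy to reproduce; here is a sketch of the arithmetic, in which the 32%, 45% and “a bit more than 40%” figures are those cited above, while the transmission and battery round-trip efficiencies are my own assumed values, chosen to be plausible and to reproduce the roughly 31% overall figure.

```python
# Well-to-wheel efficiency comparison, following the chain of losses described above.
petrol_engine = 0.32        # average overall efficiency of a petrol engine (figure cited above)
diesel_engine = 0.45        # overall efficiency of a diesel engine (figure cited above)

power_station = 0.41        # fossil-fuelled power station, "a bit more than 40%" (cited above)
grid_transmission = 0.92    # assumed: ~8% loss getting from power station to plug
battery_round_trip = 0.82   # assumed: ~18% loss in the battery charge/discharge cycle

electric_car = power_station * grid_transmission * battery_round_trip
print(f"electric: {electric_car:.0%}, petrol: {petrol_engine:.0%}, diesel: {diesel_engine:.0%}")
# -> electric: 31%, petrol: 32%, diesel: 45% – the "surprisingly close" comparison,
#    before the carbon intensity of the electricity mix is taken into account.
```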

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D on renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but is necessary to begin the process of changing that. In the meantime, one should be pursuing low-carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract; subscription required for the full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagan Bayley at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one through changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole, he uses an enzyme to chop a single base off the end of the DNA; as each base passes through the pore, the change in current is characteristic enough to identify which base it is. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
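The read-out step is essentially a classification of current levels; the sketch below is purely illustrative, with invented current values and a naive nearest-level classifier, and bears no relation to the actual measurements or analysis in the paper.

```python
# Illustrative only: identify bases from the current blockade they produce in a pore.
# The reference levels below are invented numbers, not data from the paper.
REFERENCE_LEVELS_PA = {"A": 50.0, "C": 44.0, "G": 38.0, "T": 32.0}  # hypothetical picoamp levels

def call_base(measured_current_pa):
    # Assign the base whose (hypothetical) reference level is closest to the measurement.
    return min(REFERENCE_LEVELS_PA,
               key=lambda base: abs(REFERENCE_LEVELS_PA[base] - measured_current_pa))

# A hypothetical series of readings as successive cleaved bases transit the pore:
readings = [49.1, 37.6, 37.9, 44.5, 31.8]
print("".join(call_base(r) for r in readings))   # -> AGGCT
```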

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – itself an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract; subscription required for the full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realisation that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery, by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule, mean that proteins can form the components of logic gates, which can themselves be linked together to form biochemical circuits. These information-processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely how significant the information-processing capacity of a single cell can be.

The paper – Emergent decision-making in biological signal transduction networks (abstract; subscription required for the full article in PNAS) – comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is construct a large-scale, realistic model of a cell signalling network in a generic eukaryotic cell. To do this, they mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification, they reduce the complexities of the biochemistry to simple Boolean logic – each node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing its interactions with the other nodes. In some more complicated cases, a single protein may be represented by more than one node, reflecting the fact that it can exist in a number of different modified states.

This model of the cell takes in information from the outside world; sensors at the cell membrane measure the external concentrations of growth factors, extracellular matrix proteins and calcium. These are the inputs to the cell’s information-processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”
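A toy version of this kind of Boolean, truth-table model is easy to write down; the three-node network below is my own generic illustration of the approach, not any part of the authors’ 130-node model.

```python
# Toy Boolean network in the spirit of the model described above: each node is on/off,
# and each node's next state is a Boolean function (a truth table) of the current
# states of its inputs. The wiring here is illustrative, not the real network.
def update(state):
    receptor, kinase, tf = state["R"], state["K"], state["TF"]
    return {
        "R": state["signal"],                # receptor follows the external signal
        "K": receptor and not tf,            # kinase activated by receptor, inhibited by the TF
        "TF": kinase or tf,                  # transcription factor switched on by the kinase, then self-sustaining
        "signal": state["signal"],           # external input, held fixed
    }

def run(signal, steps=10):
    state = {"R": False, "K": False, "TF": False, "signal": signal}
    for _ in range(steps):
        state = update(state)
    return state

# Different inputs settle into a small number of distinct, stable outcomes:
print(run(signal=False)["TF"])   # -> False: no signal, the cell stays quiescent
print(run(signal=True)["TF"])    # -> True: signal present, the 'decision' latches on
```

The point of the full model, of course, is that with 130 such nodes wired up according to the biochemical literature, a huge space of noisy inputs still collapses onto a handful of stable, biologically sensible outcomes.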