Archive for May, 2009

Moving on

Friday, May 29th, 2009

For the last two years, I’ve been the Senior Strategic Advisor for Nanotechnology for the UK’s Engineering and Physical Sciences Research Council (EPSRC), the government agency that has the lead responsibility for funding nanotechnology in the UK. I’m now stepping down from this position to return to a new, full-time role at the University of Sheffield; EPSRC is currently in the process of appointing my successor.

In these two years, a substantial part of a new strategy for nanotechnology in the UK has been implemented. We’ve seen new Grand Challenge programmes targeting nanotechnology for harvesting solar energy and nanotechnology for medicine and healthcare, with a third programme, on using nanotechnology to capture and utilise carbon dioxide, shortly to be launched. At the more speculative end of nanotechnology, the “Software Control of Matter” programme received supplementary funding. Some excellent individual scientists have been supported through personal fellowships, and, looking to the future, the three new Doctoral Training Centres in nanotechnology will produce, over the next five years, up to 150 additional PhDs in nanotechnology over and above EPSRC’s existing substantial support for graduate students.

After a slow response to the 2004 Royal Society report on nanotechnology, I think we now find ourselves in a somewhat more defensible position with respect to the funding of nanotoxicology and ecotoxicology studies, with some useful projects in these areas being funded by the Medical Research Council and the Natural Environment Research Council respectively, and a joint programme with the USA’s Environmental Protection Agency about to be launched. With the public engagement exercise that was run in conjunction with the Grand Challenge on nanotechnology in medicine and healthcare, I think EPSRC has gone substantially further than any other funding agency in opening up decision making about nanotechnology funding.

I’ve found this experience fascinating and rewarding; my colleagues in the EPSRC nanotechnology team, led by John Wand, have been a pleasure to work with. I’ve also had a huge amount of encouragement and support from many scientists across the UK academic community.

In the process, I’ve learned a great deal; nanotechnology of course takes in physics, chemistry, and biology, as well as elements from engineering and medicine. I’ve also come into contact with philosophers and sociologists, as well as artists and designers, from all of whom I’ve gained new insights. This education will stand me in good stead in my new role at Sheffield – as the Pro-Vice-Chancellor for Research and Innovation I’ll be responsible for the health of research right across the University.

Accelerating evolution in real and virtual worlds

Friday, May 22nd, 2009

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computation. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draws inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specified appropriate behaviours for all these possibilities, one could use evolution to select a robot controller that worked best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution; something like a computer game environment is used to simulate a robot doing a simple task, like picking up an object or recognising a shape, with success or failure used as input to a fitness function, through which the robot controller is allowed to evolve.
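The evolve–evaluate–select loop being described can be sketched in a few lines of Python. Everything here is a toy illustration, not anything from the talks: the fitness function simply counts matching bits, standing in for the score a simulated robot would earn at its task, and the population size, mutation rate and truncation-selection scheme are arbitrary choices.

```python
import random

random.seed(42)

GENOME_LEN = 20
TARGET = [1] * GENOME_LEN  # toy "task": in a real setup, fitness would come from a robot simulation


def fitness(genome):
    # Stand-in for running the simulation and scoring task success.
    return sum(g == t for g, t in zip(genome, TARGET))


def mutate(genome, rate=0.05):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]


def evolve(pop_size=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection: keep the best half
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


best = evolve()
print(fitness(best))  # a perfect controller would score 20 on this toy task
```

Because the parents survive each generation unchanged, the best fitness never decreases; the mutation step supplies the variation on which selection acts.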

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grand master, humans still have a big edge on computers in arcade games. The human high score for Ms Pac-Man is 921,360; in a competition at the 2008 IEEE CEC meeting, the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge to computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than virtual, world – this came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. This was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work that is due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the dye that makes tomatoes red (and probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen these variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.
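As a rough feel for that diversify-then-screen strategy, here is a toy version in code. Each variant simply toggles 24 candidate modifications on or off and a hidden scoring function stands in for the lycopene yield measurement; the scoring rule and numbers below (other than the 24 modifications and 100,000 screened variants from the talk) are invented for illustration. Note that even the binary on/off space is only 2^24 ≈ 16.8 million combinations; the real search space was far larger, presumably because each site admits more than two sequence variants.

```python
import random

random.seed(0)

N_SITES = 24       # candidate genetic modifications, as in the lycopene work
SAMPLE = 100_000   # variants actually screened, a tiny fraction of the space

# Toy stand-in for the yield measurement: a hidden subset of the
# modifications helps production, the rest carry a small cost.
GOOD = set(random.sample(range(N_SITES), 8))


def yield_score(variant):
    return sum(1.0 if i in GOOD else -0.2
               for i in range(N_SITES) if variant[i])


# Generate random variants and keep the best one found in the screen.
best = max(
    (tuple(random.randint(0, 1) for _ in range(N_SITES)) for _ in range(SAMPLE)),
    key=yield_score,
)

print(f"binary space: 2**{N_SITES} = {2 ** N_SITES:,} combinations; screened {SAMPLE:,}")
print(f"best sampled score: {yield_score(best):.1f} (maximum possible: 8.0)")
```

The point the toy makes is the one in the talk: with enough cheaply generated diversity, even screening a small random fraction of an astronomical space reliably turns up strong variants.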

How to engineer a system that fights back

Sunday, May 10th, 2009

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial scale production of materials like spider silk, and in medicine the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of these. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology to constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.
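To see why the feedback analogy, at least, is well grounded, it helps to look at the kind of model behind it. Here is a minimal simulation of negative autoregulation – a gene whose product represses its own production – using the standard Hill-function form found in textbook treatments; the parameter values are illustrative, not drawn from the report.

```python
# Negative autoregulation: production of protein x is repressed by x itself
# (a Hill function), while degradation is first-order. Integrated with a
# simple Euler scheme; parameters are illustrative, not measured values.

def simulate(alpha=10.0, K=1.0, n=2, gamma=1.0, x0=0.0, dt=0.01, steps=2000):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        production = alpha / (1.0 + (x / K) ** n)  # repression by own product
        x += dt * (production - gamma * x)         # Euler step
        trajectory.append(x)
    return trajectory


traj = simulate()
steady = traj[-1]

# At steady state, production balances degradation:
#   alpha / (1 + (x/K)^n) = gamma * x
# which for these parameters (alpha=10, K=1, n=2, gamma=1) has the
# solution x = 2, since 10 / (1 + 4) = 2.
residual = 10.0 / (1.0 + steady ** 2) - steady
print(f"steady-state level ≈ {steady:.3f}, balance residual ≈ {residual:.2e}")
```

This kind of circuit reliably settles to a stable set point – exactly the engineer’s sense of negative feedback – which is why that particular analogy carries real mathematical weight, in a way that looser metaphors do not.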

There is one metaphor that is used a lot in the report which seems to me to be potentially problematic – that’s the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems which have their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”