A little history of bionanotechnology and nanomedicine

I wrote this piece as a briefing note in connection with a study on Emerging Biotechnologies being carried out by the Nuffield Council on Bioethics. I’m not sure whether bionanotechnology or nanomedicine should be considered emerging biotechnologies, but this is an attempt to sketch out the connections.

Nanotechnology is not a single technology; instead it refers to a wide range of techniques and methods for manipulating matter on length scales from a nanometer or so – i.e. the typical size of molecules – to hundreds of nanometers, with the aim of creating new materials and functional devices. Some of these methods represent the incremental evolution of well-established techniques of applied physics, chemistry and materials science. In other cases, the techniques are at a much earlier stage, with promises about their future power resting on simple proof-of-principle demonstrations.

Although nanotechnology has its primary roots in the physical sciences, it has always had important relationships with biology, both at the rhetorical level and in practical outcomes. The rhetorical relationship derives from the observation that the fundamental operations of cell biology take place at the nanoscale, so one might expect there to be something particularly powerful about interventions in biology that take place on this scale. Thus the idea of “nanomedicine” has been prominent in the promises made on behalf of nanotechnology from its earliest origins, and as a result has entered popular culture in the form of the exasperating but ubiquitous image of the “nanobot” – a robot vessel on the nano- or micro-scale, able to navigate through a patient’s bloodstream and effect cell-by-cell repairs. This was mentioned as a possibility in Richard Feynman’s 1959 lecture, “There’s Plenty of Room at the Bottom”, which is widely (though retrospectively) credited as the founding manifesto of nanotechnology, but it was already by this time a common device in science fiction. The frequency with which conventionally credentialed nanoscientists have argued that this notion is impossible or impracticable, at least as commonly envisioned, has had little effect on the enduring hold it has on the popular imagination.

Science in hard times

How should the hard economic times we’re going through affect the amount of money governments spend on scientific and technological research? The answer depends on your starting point – if you think that science is an optional extra that we do when we’re prosperous, then decreasing prosperity must inevitably mean we can afford to do less science. But if you think that our prosperity depends on the science we do, then stalling growth is a signal telling you to devote more resources to research. This is a huge oversimplification, of course; the link between science and prosperity can never be automatic. How effective the link is will depend on the type of science and technology you support, and on the nature of the wider economic system that translates innovations into economic growth. It’s worth taking a look at recent economic history to see some of the issues at play.

Figure: UK government spending on research and development compared with the real growth in per capita GDP. R&D data (red) from the Royal Society report The Scientific Century, adjusted to constant 2005 £s; GDP per person data (blue) from Measuring Worth; dotted blue line shows projections from the November 2011 forecast of the UK Office for Budget Responsibility (uncorrected for population changes).

The graph shows real GDP per person in the UK from 1946 up to the present, together with the amount of money, again in real terms, spent by the government on research and development. The GDP curve tells an interesting story in itself, making very clear the discontinuity in economic policy that happened in 1979. In that year Margaret Thatcher’s new Conservative government overthrew a thirty-year consensus, shared by both parties, on how the economy should be managed. Before 1979, we had a mixed economy, with substantial industrial sectors under state control, highly regulated financial markets, including controls on the flow of capital in and out of the country, and a macro-economy governed by the principles of Keynesian demand management. After 1979, it was not Keynes but Hayek who supplied the intellectual underpinning, and we saw progressive privatisation of those parts of the economy under state control, the abolition of controls on capital movements and the deregulation of financial markets. In terms of economic growth, measured in real GDP per person, the period between 1946 and 1979 was remarkable, with a steady increase of 2.26% per year – this is, I think, the longest sustained period of high growth in the modern era. Since 1979, we’ve seen a succession of deep recessions, followed by periods of rapid but evidently unsustainable growth, fuelled by asset price bubbles. The peaks of these periods of growth have barely regained the pre-1979 trend line, while in our current economic travails we find ourselves about 9% below trend. Not only is there no imminent prospect of the rapid growth we’d need to return to that trend line, but there now seems to be a likelihood of another recession.
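As a footnote for the quantitatively minded, here is a minimal Python sketch of the compound-growth arithmetic behind that “9% below trend” figure. The 2.26% annual growth rate comes from the data above; the 2011 index value of 186 is a hypothetical number of mine, chosen purely to illustrate the calculation.

```python
# A minimal sketch of the trend arithmetic behind the "9% below trend" claim.
# The 2.26%/year growth rate is from the 1946-1979 data discussed above; the
# 2011 index value of 186 is a hypothetical number used only for illustration.

def trend_value(base, annual_growth, years):
    """Compound-growth extrapolation: base * (1 + g) ** years."""
    return base * (1.0 + annual_growth) ** years

def gap_below_trend(observed, base, annual_growth, years):
    """Fractional shortfall of an observed value against the trend line."""
    return 1.0 - observed / trend_value(base, annual_growth, years)

# Index real GDP per person at 100 in 1979 and extrapolate the pre-1979 trend:
trend_2011 = trend_value(100.0, 0.0226, 2011 - 1979)              # ~204.5
shortfall = gap_below_trend(186.0, 100.0, 0.0226, 2011 - 1979)
print(f"trend index {trend_2011:.1f}, shortfall {shortfall:.1%}")  # ~9.0%
```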

The plot for public R&D spending tells its own story, which also shows a turning point with the Thatcher government. From 1980 until 1998, we see a substantial long-term decline in research spending, not just as a fraction of GDP, but in absolute terms; since 1998 research spending has increased again in real terms, though not substantially faster than the rise in GDP over the same period. A number of factors underlay the decline. There was a real squeeze on spending on research in universities, well remembered by those who were working in them at the time. Meanwhile the research spending of those industries that were being privatised – such as telecommunications and energy – was removed from the government spending figures. And the activities of government research laboratories – particularly those associated with defence and the nuclear industry – were significantly wound down. Underlying this winding down of research were both a political motive and an ideological one. Big government spending on high technology was associated with the corporatist politics of the 1960s, subscribed to by both parties but particularly associated with Labour and the memorable slogan “the white heat of technology”. To its detractors this summoned up associations with projects like the supersonic passenger aircraft Concorde, a technological triumph but a commercial disaster. To the adherents of the Hayekian free-market ideology that underpinned the Thatcher government, the state had no business doing any research but the most basic and far-from-market. On this view, state-supported research was likely to be not only less efficient and less effectively directed than research in the private sector, but, by “squeezing out” such private sector research, it would actually make the economy less efficient.

The idea that state support of research “squeezes out” research spending by the private sector remains attractive to free-market ideologues, but the empirical evidence points to the opposite conclusion – state spending and private sector spending on research support each other, with increases in state R&D spending leading to increases in R&D by business (see for example Falk M (2006), What drives business research and development intensity across OECD countries? (PDF), Applied Economics 38 p533). Certainly, in the UK, the near-halving of government R&D spend between 1980 and 1999 did not lead to an increase in R&D by business; instead, this also fell, from about 1.4% of GDP to 1.2%. Not only did those companies that had been privatised substantially reduce their R&D spending, but other major players in industrial R&D – such as the chemical company ICI and the electronics company GEC – substantially cut back their activities. At the time many rationalised this as the inevitable result of the UK economy changing its mix of sectors, away from manufacturing towards service sectors such as the financial services industry.

None of this answers the questions: how much should one spend on R&D, and what difference do changes in R&D spend make to economic performance? It is certainly clear that the decline in R&D spending in the UK wasn’t correlated with any improvement in its economic performance. International comparisons show that the proportion of GDP spent on R&D in the UK is significantly lower than in most of its major competitors, and within this the proportion of R&D supported by business is itself unusually low. On the other hand, the performance of the UK science base, as measured by academic measures rather than economic ones, is strikingly good. Updating a much-quoted formula: the UK accounts for 3% of the total world R&D spend and has 4.3% of the world’s researchers, who produce 6.4% of the world’s scientific articles, which attract 10.9% of the world’s citations and include 13.8% of the world’s top 1% of highly cited papers (these figures come from the analysis in the recent report The International Comparative Performance of the UK Research Base).

This formula is usually quoted to argue for the productivity and effectiveness of the UK research base, and it clearly tells a powerful story about its strength as measured in purely academic terms. But does this mean we get the best out of our research in economic terms? The partial recovery in government R&D spending that we saw from 1998 until last year brought real-terms increases in science budgets (though without significantly increasing the fraction of GDP spent on science). These increases were focused on basic research, whose share of total government science spending doubled between 1986 and 2005. This has allowed us to preserve the strength of our academic research base, but the decline in more applied R&D in both government and industrial laboratories has weakened our capacity to convert this strength into economic growth.

Our national economic experiment in deregulated capitalism ended in failure, as the 2008 banking collapse and the subsequent economic slump have made clear. I don’t know how much the systematic running down of our national research and development capability in the 1980s and 1990s contributed to this failure, but I suspect that it’s a significant part of the bigger picture of misallocation of resources associated with the booms and the busts, and of the associated disappointingly slow growth in economic productivity.

What should we do now? Everyone talks about the need to “rebalance the economy”, and the government has just released an “Innovation and Research Strategy for Growth”, which claims that “The Government is putting innovation and research at the heart of its growth agenda”. The contents of this strategy – in truth largely a compendium of small-scale interventions that have already been announced, and which together still don’t fully reverse last year’s cuts in research capital spending – are of a scale that doesn’t begin to meet this challenge. What we should have seen is not just a commitment to maintain the strength of the fundamental science base, important though that is, but a real will to reverse the national decline in applied research.

Can plastic solar cells deliver?

The promise of polymer solar cells is that they will be cheap enough, and produced on a large enough scale, to transform our energy economy, unlocking the sun’s potential to meet all our energy needs in a sustainable way. But there’s a long way to go from a device in a laboratory, or even a company’s demonstrator product, to an economically viable product that can be made at scale. How big is that gap, are there insuperable obstacles standing in the way, and if not, how long might it take us to get there? Some answers to these questions are now beginning to emerge, and I’m cautiously optimistic. Although most attention is focused on efficiency, the biggest outstanding technical issue is prolonging the lifetime of the solar cells; and before plastic solar cells can be introduced on a mass scale, it’s going to be necessary to find a substitute for indium tin oxide as a transparent electrode. If we can do this, the way is open for a real transformation of our energy system.

The obstacles are both technical and economic – though of course it doesn’t make sense to consider these separately, since it is technical improvements that will make the economics look better. A recent study starts to break down the likely costs and identify where we need to find improvements. The paper – Economic assessment of solar electricity production from organic-based photovoltaic modules in a domestic environment, by Brian Azzopardi, from Manchester University, with coworkers from Imperial College, Cartagena, and Risø (Energy and Environmental Science 4 p3741, 2011) – breaks down an estimate of the cost of power generated by a polymer photovoltaic fabricated on a plastic substrate by a manufacturing process already at the prototype stage. This process uses the most common combination of materials – the polymer P3HT together with the fullerene derivative PCBM. The so-called “levelised power cost” – i.e. the cost per unit of electricity, including all capital costs, averaged over the lifetime of the plant – comes in between €0.19 and €0.50 per kWh for 7% efficient solar cells with a lifetime of 5 years, assuming southern European sunshine. This is, of course, too expensive, both compared to alternatives like fossil fuel or nuclear energy, and compared to conventional solar cells, though the gap with conventional solar isn’t massive. But the technology is still immature, so what improvements in performance and reductions in cost is it reasonable to expect?
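To make the “levelised power cost” concrete, here is a minimal Python sketch of the calculation: discounted lifetime costs divided by discounted lifetime energy output. All the input numbers are my own illustrative assumptions, not figures from the Azzopardi paper, though with these values the result happens to fall inside the quoted €0.19 to €0.50 range.

```python
# A minimal levelised-cost-of-electricity (LCOE) sketch: discounted lifetime
# costs divided by discounted lifetime energy output. Every input below is an
# illustrative assumption, not a figure taken from the Azzopardi paper.

def lcoe(capital_cost, annual_om_cost, annual_energy_kwh,
         lifetime_years, discount_rate):
    """Return cost per kWh: sum of discounted costs / sum of discounted energy."""
    costs = capital_cost                 # up-front capital, paid in year 0
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1.0 + discount_rate) ** year
        costs += annual_om_cost / discount       # operation and maintenance
        energy += annual_energy_kwh / discount   # energy is discounted too
    return costs / energy

# Illustrative 1 kW-peak domestic system in southern European sunshine:
cost_per_kwh = lcoe(
    capital_cost=1500.0,       # euros per kWp installed (assumed)
    annual_om_cost=20.0,       # euros per year (assumed)
    annual_energy_kwh=1500.0,  # kWh per kWp per year (assumed)
    lifetime_years=5,          # the short module lifetime discussed above
    discount_rate=0.05,        # 5% cost of capital (assumed)
)
print(f"LCOE: {cost_per_kwh:.2f} euros/kWh")   # ~0.24 with these inputs
```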

The two key technical parameters are efficiency and lifetime. Most research effort so far has concentrated on improving efficiencies – values greater than 4% are now routine for the P3HT/PCBM system; a newer system, involving a different fullerene derivative, PC70BM, blended with the polymer PCDTBT (I find even the acronym difficult to remember, but for the record the full name is poly[9’-hepta-decanyl-2,7-carbazole-alt-5,5-(4’,7’-di-2-thienyl-2’,1’,3’-benzothiadiazole)]), achieves efficiencies greater than 6%. These values will improve with further tweaking of the materials and processes. Azzopardi’s analysis suggests that efficiencies in the range 7-10% may already be looking viable… as long as the cells last long enough. This is potentially a problem – it’s been understood for a while that the lifetime of polymer solar cells may well prove to be their undoing. The active materials in polymer solar cells – conjugated polymer semiconductors – are essentially overgrown dyes, and we all know that dyes tend to bleach in the sun. Five years seems to be the minimum lifetime to make this a viable technology, but up to now many laboratory devices have struggled to last more than a few days. Another recent paper, however, gives grounds for more optimism. This paper – High Efficiency Polymer Solar Cells with Long Operating Lifetimes (Advanced Energy Materials 1 p491, 2011), from the Stanford group of Michael McGehee – demonstrates a PCDTBT/PC70BM solar cell with a lifetime of nearly seven years. This doesn’t mean all our problems are solved, though – this device was encapsulated in glass, rather than printed on a flexible plastic sheet. Glass is much better than plastics at keeping harmful oxygen away from the active materials; to reproduce this lifetime in an all-plastic device will need more work to improve the oxygen barrier properties of the module.

How does the cost of a plastic solar cell break down, and what reductions is it realistic to expect? The analysis by Azzopardi and coworkers shows that the cost of the system is dominated by the cost of the modules, and the cost of the modules is dominated by the cost of the materials. The other elements of the system cost will probably continue to decrease anyway, as much of this is shared in common with other types of solar cells. What we don’t know yet is the extent to which the special advantages of plastic solar cells over conventional ones – their lightness and flexibility – can reduce installation costs. As we’ve been expecting, the cheapness of processing plastic solar cells means that manufacturing costs – including the capital costs of the equipment to make them – are small compared to the cost of materials. These materials make up 60-80% of the cost of the modules. Part of this is simply the cost of the semiconducting polymers; this will certainly come down with time as experience grows of making them at scale. But the surprise for me is the importance of the cost of the substrate, or more accurately the cost of the thin, transparent conducting electrode that coats the substrate – this represents up to half of the total cost of materials (see the sketch below). This is going to be a real barrier to the large-scale uptake of this technology.
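Here is that back-of-envelope arithmetic as a short Python sketch, using only the cost shares quoted above; the point is simply that the ITO-coated substrate alone can plausibly account for 30-40% of the total module cost.

```python
# Back-of-envelope arithmetic for the shares quoted above: if materials are
# 60-80% of module cost, and the transparent electrode is up to half of the
# materials cost, the electrode alone can be 30-40% of the module cost.

def electrode_share_of_module(materials_share, electrode_share_of_materials):
    """Fraction of total module cost attributable to the transparent electrode."""
    return materials_share * electrode_share_of_materials

for materials_share in (0.6, 0.8):
    share = electrode_share_of_module(materials_share, 0.5)
    print(f"materials at {materials_share:.0%} of module cost "
          f"-> electrode up to {share:.0%} of module cost")
```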

The transparent electrode currently used is a thin layer of indium tin oxide – ITO. This is a very widely used material in touch screens and liquid crystal displays, and it currently represents the major use of the metal indium, which is rare and expensive. So unless a replacement for ITO can be found, it’s the cost and availability of this material that’s going to limit the use of plastic solar cells. Transparency and electrical conductivity don’t usually go together, so it’s not straightforward to find a substitute. Carbon nanotubes, and more recently graphene, have been suggested, but currently they’re neither good enough by themselves, nor is there a process to make them cheaply at scale (a good summary of the current contenders can be found in Rational Design of Hybrid Graphene Films for High-Performance Transparent Electrodes by Zhu et al, ACS Nano 5 p6472, 2011). So, to make this technology work, much more effort needs to be put into finding a substitute for ITO.

Are you a responsible nanoscientist?

This is the pre-edited version of a piece which appeared in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be viewed here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, we saw the European Commission recommend a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the UK-based Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are happy to embrace this blurring – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence over the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating the conditions in which people or ecosystems were exposed to the hazard, rather than with the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. But the uncertainty that necessarily surrounds any prediction about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another code – the UK government’s Universal Ethical Code for Scientists – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction, to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction that researchers “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals that do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Yet scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and on how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

Good capitalism, bad capitalism and turning science into economic benefit

Why isn’t the UK more successful at converting its excellent science into wealth-creating businesses? This is a perennial question – and one that’s driven all sorts of initiatives to get universities to handle their intellectual property better, to develop closer partnerships with the private sector and to create more spinout companies. Perhaps UK universities shied away from such activities thirty years ago, but that’s not the case now. In my own university, Sheffield, we have some very successful and high profile activities in partnership with companies, such as our Advanced Manufacturing Research Centre with Boeing, shortly to be expanded as part of an Advanced Manufacturing Institute with heavy involvement from Rolls-Royce and other companies. Like many universities, we have some interesting spinouts of our own. And yet, while the UK produces many small high tech companies, we just don’t seem to be able to grow those companies to a scale where they’d make a serious difference to jobs and economic growth. To take just one example, the Royal Society’s Scientific Century report highlighted Plastic Logic, a company based on great research from Richard Friend and Henning Sirringhaus at Cambridge University, making flexible displays for applications like e-book readers. It’s a great success story for Cambridge, but the picture for the UK economy is less positive. The company’s head office is in California, its first factory was in Leipzig, and its major manufacturing facility will be in Russia – the latter location not unrelated to the fact that the Russian agency Rusnano invested $150 million in the company earlier this year.

This seems to reflect a general problem – why aren’t UK-based investors more willing to put money into small technology-based companies to allow them to grow? Again, this is something people have talked about for a long time, and there’ve been a number of more or less (usually less) successful government interventions to address the issue. The latest of these was announced in the Conservative party conference speech of the Chancellor of the Exchequer, George Osborne – “credit easing”, to “help solve that age old problem in Britain: not enough long term investment in small business and enterprise.”

But it’s not as if there isn’t any money in the UK to be invested – so the question to ask isn’t why money isn’t invested in high tech businesses, but why money is invested in other places instead. The answer must be that those other opportunities offer higher returns, at lower risk, on shorter timescales. The problem is that many of these opportunities don’t support productive entrepreneurship, which brings new products and services to people who need them and generates new jobs. Instead, to use a distinction introduced by the economist William Baumol (see, for example, his article Entrepreneurship: Productive, Unproductive, and Destructive, PDF), they support unproductive entrepreneurship, which exploits suboptimal reward structures in an economy to make profits without generating real value. Examples of this kind of activity might include restructuring companies to minimise tax, speculating in financial and property markets when the downside risk is shouldered by the government, exploiting privatisations and public/private partnerships that have been structured to the disadvantage of the tax-payer, and generating capital gains which result from changes in planning and tax law.

Most criticism of this kind of bad capitalism focuses on issues of fairness and equity, and on the damage to the democratic process done by the associated lobbying and influence-peddling. But it causes deeper problems than this – money and effort used to support unproductive entrepreneurship is unavailable to support genuine innovation, to create new products and services that people and society want and need. In short, bad capitalism crowds out good capitalism, and innovation suffers.

Some questions for British research policy

This piece is based on a summing-up I did at a meeting in London this March: A New Mandate? Research Policy in the 21st Century.

There seem to be two lurking worries that concern people in science policy in the UK at the moment. The first is the worry that, having built a case for state support of science on the basis that it will lead to innovation and economic growth, we may find that the promised innovation and economic growth are not delivered. The second is that the scientific enterprise doesn’t have a sufficiently broad base of popular support. In short, are we suffering from an innovation deficit, and does our research effort have a democratic deficit?

An innovation deficit

The letter accompanying the funding settlement from BIS to the Research Councils called for “even more impact” – the impact agenda in research councils and funding agencies is being pressed with increased urgency, even though the argument behind it is by no means settled.

To many scientists the economic case for supporting science may seem self-evident, but the solid evidence in support of it is surprisingly slippery. There is certainly a feeling in some quarters – and not just from the Guardian’s Simon Jenkins – that the economic impact of science has been oversold. The Royal Society’s “The Scientific Century” document was a serious attempt to assemble the evidence. What strikes me, though, is that it doesn’t make a great deal of sense to try to answer the primary question – to what extent should the state support science? – without considering the much broader question of how our political and economic system is set up to support innovation.

And it is in relation to innovation that there are some more general worries, both at a global level and in our own national circumstances:

  • Is the rate of innovation actually slowing – leaving aside the special case of information technology, have the easiest gains from new technology already been made? I discussed this in an earlier post Accelerating Change or Innovation Stagnation?
  • Is our UK innovation system broken? In the UK postwar settlement, universities were only one of a number of kinds of places where research – especially more applied research – was carried out. Major conglomerates like ICI and GEC had large corporate laboratories, there were major government laboratories associated with organisations like the Atomic Energy Authority, and the military supported laboratories like RSRE Malvern, which combined quite basic research with more strategic research and development. In the post-Thatcher climate of privatisation, deregulation and the drive to “unlock shareholder value”, most of these alternative research organisations have disappeared.
  • In their place, we see a new emphasis on the development of protectable intellectual property in Universities with a view to creating venture-capital backed spin-out companies. This gives rise to two questions – how effective is this as a mechanism for technology transfer, and does the new emphasis on protectable IP have any deleterious effects on innovation itself? Certainly, the experience of nano- and bio- technology does point to potential problems of patent thickets and an “anti-commons” effect in academia, where pre-existing IP positions inhibit other scientists from working in particular areas. It’s these worries, among other factors, that have driven a move to a more open-source approach, now spreading from IT to new areas like synthetic biology.
  • For the UK, the pharmaceutical industry has been particularly important, as an industry of genuinely international stature which has been politically very important in making the case for state-supported science (and influencing the shape of that support). So the fact that this industry is having innovation difficulties of its own – the closure of the Pfizer R&D site at Sandwich being a very visible signal of this – is worrying.
  • We’re seeing the introduction of a new kind of institution into the innovation landscape – the Technology and Innovation Centres. There’s still uncertainty about their role, and some governance issues remain unclear, but what’s most significant is that there is a widely perceived gap that they are intended to fill.
A democratic deficit

The idea that we’re in the midst of a popular crisis of trust in science is deeply embedded. I’m not convinced that the crisis of trust is with science itself, rather than with the use of science in politics and commerce, which is something slightly different; but nonetheless this idea has been a driving force for much of the new enthusiasm for public engagement and dialogue, and for taking this public engagement upstream. While some people (including me) would want to see this move as part of a broader effort to steer technology to meet widely shared societal goals, there is still a sense that, for many, this is about gaining acceptance for new technologies.

On the face of it, these two worries – of an innovation deficit and of a democratic deficit – look to be in opposition. The idea of an innovation deficit suggests that our problem is that technology isn’t moving fast enough, and that we have to work to remove obstacles in the way of innovation, while the negative perception of public engagement holds that its job is to put those obstacles back in the way. In times like these, that perception is a real danger.

But actually they’re quite closely connected. Underneath these dilemmas are two worries – a loss of confidence in the self-organising capability of the scientific enterprise, and a sense that something’s missing in our innovation system.

Research councils – “from funder to sponsor”

It’s these worries that underlie current moves in the UK research councils, perhaps most explicitly defined by EPSRC in its aim of moving “from funder to sponsor” – i.e. moving from the position of responding to the agenda of the scientific community, towards commissioning research in support of national needs.

The issues, then, are: how is national need defined, and how is the process of defining that national need given legitimacy?

This is a big problem in our current system, where our political fashion is explicitly not to define such a need in anything other than rather general and vacuous terms (like saying we need to have a “knowledge economy”). To pose the question in its most pointed form, does it make sense to have a science policy if you don’t have an industrial policy?

This situation puts research councils in a very difficult position. If governments are not prepared to develop such an industrial policy, how can the research councils do this – how can they do it practically, and how can their decisions acquire legitimacy?

These legitimacy problems come from three directions:
1. with the scientific community
2. with the government
3. with the population at large.

The scientific community will see a potential clash with the Haldane principle (invented tradition though David Edgerton says this is), which could be interpreted as saying that the scientific community is the primary source of legitimacy, as an embodiment of the principle of the autonomy of the scientific enterprise.

With the government, a research council like EPSRC is in a very difficult position. It has to deliver the science in support of a national policy which does not, in fact, exist, but it will be judged by very instrumental measures of wealth creation.

Can “challenge-led” research help?

“Societal challenges” offer a new synthesis that can be considered a response to this. I find this attractive as a way of getting beyond a sterile dichotomy between applied and basic research, but the definitions of what might be meant by a societal challenge are contested, value-laden and full of interpretive flexibility.

Societal challenges do have an advantage: a certain security in the face of political uncertainty and lack of direction, and a certain independence from political whims. Who can really disagree with the idea that sustainable energy will be a big deal on rather long timescales, for example?

But there are problems – can governments genuinely take a long enough view? How can we avoid fads and the herd mentality? How can we be prepared for the inevitable unanticipated changes in direction in world events? How can we move from generalities to the particularities of real technologies?

What is the place of public engagement? On the one hand, what better way of getting a direct view about what national need should be than consulting the public directly? Public engagement then presents itself as a partial solution to the problem of legitimacy, but one that isn’t necessarily going to make the research councils’ relationship with government any easier.

There is one other set of institutions that, strangely, doesn’t get mentioned very often: the Universities. What’s their role? Can they be more than just a loose coalition of individual researchers responding to the incentives and demands of the research councils and other funders? Universities have their own considerable intellectual resources across the disciplines, and they have their own long history and independence, so one might hope that Universities themselves could be another focus for reasserting the public value of research. For a civic university like my own, Sheffield, surely the University should act as a focus for the aspirations of the community it serves.

Science and politics

There is another driving force for public engagement: the sense that representative government is failing to provide a space for discussing big issues about our future choices and how people want to live their lives. Science and technology have to be a part of this discussion, and this is why discussions about science and technology must have a political dimension. There are those who assert the opposite – that science doesn’t have or shouldn’t have a political dimension, and that technology is autonomous, out of control, and can’t be directed. But these assertions are themselves profoundly political statements.

Why has the UK given up on nanotechnology?

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no actual on-going nanotechnology program. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011-2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience through Engineering to Application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano-Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which was subsequently intended to rise to £200 million. But last July, the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – it seems that the MNT program has been regarded as a cautionary tale of how not to do things, rather than an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator-led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few whose primary loyalty was to the idea of nanotechnology. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped down a gap.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology isn’t an important category by which science is classified, it doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But there will be opportunities missed to promote interdisciplinary science, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But I worry that it very much does matter that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem. The UK has, for the last thirty years, not only not had an industrial policy to speak of, it has had a policy of not having an industrial policy. But the last three years have revealed the shortcomings of this position, as we realise that we aren’t any more going to be able to rely on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.

On Impact

This somewhat policy-heavy piece is an updated version of a talk I gave at a higher education policy conference last September – my apologies to blog readers not directly concerned with science and university funding in the UK, who may find it less enthralling.

What is this thing called “impact”, which has such a grip on Universities and funding agencies in the UK at the moment? Of course, it isn’t a thing at all; it’s a word that’s been adopted to stand for a number of overlapping, but still distinct, imperatives that are being felt by different public agencies concerned with different aspects of funding research in higher education in the UK, and which, in turn, different constituencies within UK higher education are attempting to steer.

The most immediate sources of talk about “impact” are the Higher Education Funding Council for England (HEFCE) and the different research councils, who operate jointly in this area under the umbrella of Research Councils UK (RCUK). These two manifestations of the impact agenda are, in fact, rather different and separate issues. HEFCE wishes to measure the impact of past research, as part of its overall program to assess the past research performance of Universities – the Research Excellence Framework – which will subsequently inform future allocations of funding to the Universities. RCUK, on the other hand, wishes to ensure that the research it funds is carried out in a way that maximises the chance that it does have impact. Both HEFCE and RCUK want the idea of impact to have a greater influence on funding decisions. But while HEFCE’s version of impact is backward-looking and concerned with measurement, RCUK’s is forward-looking and concerned with changing behaviours.

It is important to understand the wider context which has driven this concern with impact. The immediate pressure has come from the funding councils’ perception of a growing need to convince the Treasury that public spending on research brings a proportionate return to the UK as a whole. During the process of settling the science budget last autumn, in a very tight public spending round, this argument within government was dominant. And, to the extent that the budget settlement was not as bad as many had feared, perhaps this idea of impact did gain some traction. Certainly, last December’s letter (PDF here) announcing the science settlement called for “even more impact” – saying “Research Councils and Funding Councils will be able to focus their contribution on promoting impact through excellent research, supporting the growth agenda. They will provide strong incentives and rewards for universities to improve further their relationships with business and deliver even more impact in relation to the economy and society.”

But this focus on impact is only one manifestation of a much wider discussion about the value of research to society at large, and about how the values that underlie publicly funded research should be aligned with widely shared societal values. The broader question is how we organise publicly funded research to realise its public value. For leaders and managers of HE institutions engaged in publicly funded research, this leads to fundamental questions about the missions and visions of their institutions and how these are communicated to their members.

What do we actually mean by “impact”? This, of course, is a highly contested question – there is a growing perception that the degree of impact a particular discipline has on the wider world is directly connected to its value in the eyes of funding agencies, and so it’s not surprising that disciplines will wish to influence the definition of impact to maximise their contributions. Clearly science, engineering, medicine, social sciences, arts and the humanities will come at the problem with different emphases. The funding agencies will reflect a compromise position back to the academic communities they serve, while tailoring the message a different way in their interactions with their political masters.

HEFCE must, necessarily, take a broad view of impacts, as it serves the whole academic community. Engineers may emphasise the direct economic benefits that come from their research, social scientists the information that underpins good public policy, and the humanities more intangible cultural benefits. The task that HEFCE has set itself is devising a framework to measure and compare these incommensurable qualities. The methodology is starting to become clear. A pilot exercise tested a trial methodology in a number of different Universities in a handful of rather different subjects. The methodology combines the use of quantitative indicators, where appropriate, with narrative case studies, in which the external impact of research carried out by groups of researchers over some past period is described. The results of the pilot highlighted some predictable difficulties, and suggested some mitigating strategies. The timescales on which impact appears vary greatly from subject to subject, and even within subjects. For much research, impacts are captured outside higher education, whether as a result of the transfer of people from HE into industry or public service, or by the picking up of research ideas that are effectively in the public domain. As a result, the originators of research may well not be in a position to know about the impacts of their research.

The research councils have the apparent advantage that they can tailor the idea of impact more closely to their own constituencies. For the Medical Research Council (MRC), for example, it’s clear that improved health and well-being will be the primary category of impact (though even here there may be many different routes to achieving those broad goals). The Engineering and Physical Sciences Research Council (EPSRC) will tend to emphasise economic impacts through spin-outs and partnerships with existing industry. Many researchers will be concerned that the growing emphasis on impact will lead inexorably to a move from pure, curiosity-driven research towards more applied research. The counter-argument from the research councils is to emphasise that this is not what they want; instead they seek a more conscious consideration of why the impact of the research they sponsor matters. This emphasises the forward-looking nature of the impact agenda as understood by RCUK – the sections in research council grant applications about “pathways to impact” don’t ask researchers to predict the future; instead they seek to change the behaviour of researchers.

It’s clear that defining and assessing impact isn’t easy; the Science Minister, David Willetts, had earlier made his reservations about this clear. In a speech in July last year he announced a delay in the Research Excellence Framework, saying: “The surprising paths which serendipity takes us down is a major reason why we need to think harder about impact. There is no perfect way to assess impact, even looking backwards at what has happened. I appreciate why scientists are wary, which is why I’m announcing today a one-year delay to the implementation of the Research Excellence Framework, to figure out whether there is a method of assessing impact which is sound and which is acceptable to the academic community. This longer timescale will enable HEFCE, its devolved counterparts, and ministers to make full use of the pilot impact assessment exercise which concludes in the Autumn, and then to consider whether it can be refined.”

At the moment, though, the views of the Treasury are as important as the views of the Minister. It’s difficult to avoid the suspicion that, for all the subtlety with which RCUK and HEFCE have defined the many dimensions of impact, the Treasury is interested in only one type of impact – money. This sounds more straightforward, but it’s still not easy – we need a robust evidence base for the assertion that spending on research yields tangible, commensurate economic returns.

It isn’t just in the UK that these arguments are being carried on. In the USA, for example, the large injection of funding into science as part of the economic stimulus package has prompted the “Star Metrics” programme. In the UK, the Royal Society released in March last year an extensive study – “The Scientific Century” – which marshalled the evidence for the returns on investment in publicly funded R&D (concentrating on science, medicine and engineering).

Even in this restricted domain, the complications of the routes by which public investment in research produces returns become apparent. There was, for many years, a clear consensus in western countries about the way in which the value of publicly funded science emerges. This consensus originates in an enormously influential document written by the US science administrator Vannevar Bush in 1945 – “Science: the Endless Frontier”. This is the document that led to the foundation of the USA’s National Science Foundation. It encapsulated what, to many people, has become known as the “linear model of innovation” – the idea that pure science, curiosity-driven and carried out without any consideration of its end-uses, would be converted into national prosperity through a linear process of applied science and technological development. Of course, the impact agenda, as conceived by the research councils, is in direct contradiction of this world-view – and since this view is deeply ingrained in many parts of the scientific community, this accounts for the deep-seated unease that the RCUK view of impact gives rise to in those quarters. And, if it were that simple, surely the measurement of past impacts would be straightforward?

However, the linear model is now very much out of fashion – it is considered by many to be neither an accurate picture of how research has worked in the past, nor a desirable prescription for how research ought to work in the future. To return to our current Science Minister, it is clear that he doesn’t believe it at all. In his July speech, he said: “The previous government appeared to think of innovation as if it were a sausage machine. You’re supposed to put money into university-based scientific research, which leads to patents and then spinout companies that secure venture capital backing…. The world does not work like this as often as you might think…. There are many other ways of harvesting benefits from research. But the benefits are real”.

    One of the most influential critiques of the linear model came in a book by Donald Stokes called Pasteur’s Quadrant. This argued that the separation of basic research from considerations of potential applications, which is made explicit in Bush’s picture, didn’t always correspond to the reality of how research has been done. There have certainly been scientists who have carried out fundamental investigations without any thought of potential use – Niels Bohr is the example Stokes used – and, as Bush argued, sometimes very practical applications do in fact emerge from such work. There have been technologists who have focused solely on the need to get their inventions to work and to market, without a great deal of curiosity about the fundamental underpinnings of those technologies – Thomas Edison being a classic example. But a scientist like Louis Pasteur carried out fundamental research – in his case, laying many of the foundations of modern microbiology – while at the same time being motivated by the very practical considerations of how wine ferments and milk sours.

    On Stokes’s diagram, which has two axes defined by the degree to which considerations of use and of fundamental interest motivate research, we have three quadrants, typified by the approaches of Bohr, Edison and Pasteur. What occupies the fourth quadrant, where the work is characterised by being neither fundamentally interesting nor practically useful? In the past this undesirable quadrant hasn’t had a name, but I propose to call it “Cable’s quadrant”, after the UK’s Secretary of State for Business, Innovation and Skills, who said in a speech on 8 September last year that “there is no justification for taxpayers money being used to support research which is neither commercially useful nor theoretically outstanding.” Of course, no-one sets out to carry out research of this kind; the question is how to minimise the chance of research turning out this way without the risk of discouraging high-risk research that, if it did succeed, would be truly transformative.

    There remains an unanswered question in Stokes’s formulation – who decides what is practically useful? Is this simply a matter of what has commercial applications? In the context of UK publicly funded research, this must be related to the broader question of who we, in universities, work for. Universities are independent and autonomous institutions, so while they must respond to the immediate demands of their funders, they must always be mindful of their enduring sense of mission. How can we resolve this tension? One idea that might be helpful is the notion of “public value”, as applied to science policy in a pamphlet from Demos – “The public value of science”. But it should be clear that the drive for research councils, in particular, to move beyond criteria for “good science” that are entirely defined by scientists, on the basis of their own disciplinary norms, towards judging science on the basis of what are perceived as the needs of the nation, will present some severe problems of its own, which I will perhaps discuss in a later post.

    What would a truly synthetic biology look like?

    This is the pre-edited version of an article first published in Physics World in July 2010. The published version can be found here (subscription required). Some of the ideas here were developed in a little more technical detail in an article published in the journal Faraday Discussions, Challenges in Soft Nanotechnology (subscription required). This can be found in a preprint version here. See also my earlier piece Will nanotechnology lead to a truly synthetic biology?.

    On the corner of Richard Feynman’s blackboard, at his death, was the sentence “What I cannot create, I do not understand”. This slogan has been taken as the inspiration for the emerging field of synthetic biology. Biologists are now unravelling the intricate and complex mechanisms that underlie life, even in its simplest forms. But can we be said truly to understand biology until it proves possible to create a synthetic life-form?

    Craig Venter’s well-publicised programme to replace the DNA in a simple microorganism with a new, synthetic genome has been widely reported as the moment when humans created a new, synthetic living organism. This achievement was certainly a technical tour-de-force, but many would argue that replacing the genome of an existing organism isn’t the same as creating a complete organism from the bottom up. Making a truly synthetic biology, in which all the components and mechanisms are designed and made without the use of existing biological materials or parts, is a much more distant and challenging prospect. But it is this, hugely more ambitious, act of creation that would fulfil Feynman’s criterion for truly understanding even the simplest forms of life.

    What we have learnt from biology is how similar all life is – when we study biology, we are studying the many diverse branches from a single trunk: huge and baroque variety on one hand, but all variants on a single basic theme based on DNA, RNA and proteins. We’d like to find some general rules, not just about the one particular biology we know about, but about all possible biologies. It is this more general understanding that will help us with one of science’s deepest questions – was the origin of life on earth a random and improbable event, or should we expect to find life all over the universe, perhaps on many of the exo-planets we’re now discovering? Exo-biology has a practical difficulty, though – even if we can detect the signatures of alien life-forms, distance will make it difficult to study them in detail. So what better way of understanding alien life than trying to build it ourselves?

    But we can’t start building life without having an understanding of what life is. The history of attempts to provide a succinct, watertight definition of life is very long and rather inconclusive. There are some recurring themes, though. Many definitions focus on life’s ability to self-replicate and evolve, and on the ability of living organisms to maintain themselves by transforming external matter and free energy into their own components. The principle of living things as autonomous agents – able to sense their environment and choose between actions on the basis of this information – is appealing. But while people may agree on the ingredients of a definition, putting these together to make one which is neither too exclusive nor too inclusive is difficult. (I very much like the discussion of this issue in Pier Luigi Luisi’s excellent book The Emergence of Life.)

    An experimental approach to the problem might change the question – instead of asking “what life is”, we could ask “what life does”. Rather than seeking a watertight definition of life itself, we can make progress by asking what sort of things living things do, and then considering how we might execute these functions experimentally. Here we’re thinking explicitly of biology as a series of engineering problems. Given the scale of the basic unit of biology – the cell – what we’re considering is essentially a form of nanotechnology.

    But not all nanotechnologies are the same; we’re asking how to make functional machines and devices in an environment dominated by the presence of water, the effects of Brownian motion, and some subtle but important interactions between surfaces. This nanoscale physics – very different to the rules that govern macroscopic engineering – gives rise to some new design principles, much exploited in biological systems. These principles include self-assembly: molecules that put themselves together under the influence of Brownian motion and surface forces, constructing complex structures whose design is entirely encoded within the molecules themselves. This is one example of the mutability that is so characteristic of soft and biological matter – because these structures are held together by a balance of weak interactions, the organisation and shape of molecules, and of assemblies of molecules, shift in response to subtle changes in the environment.
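    To get a feel for why this mutability arises, it helps to compare the strength of the weak interactions that hold self-assembled structures together with the thermal energy kT that sets the scale of Brownian motion. The following minimal sketch is my own illustration, not anything from the original article; the interaction energies are rough, representative values only:

```python
# A minimal sketch (illustrative only) comparing the thermal energy
# scale kT with the strengths of the weak interactions that drive
# self-assembly. The interaction energies are rough, representative
# values, not measurements of any particular system.
from math import exp

kB = 1.381e-23      # Boltzmann constant, J/K
T = 298.0           # room temperature, K
kT = kB * T         # thermal energy scale, ~4.1e-21 J

# Rough energy scales, expressed in units of kT (illustrative values)
interactions = {
    "single hydrogen bond (in water)": 2.0,
    "hydrophobic contact": 1.0,
    "covalent C-C bond (for contrast)": 140.0,
}

for name, e_over_kT in interactions.items():
    # Boltzmann factor: relative likelihood that thermal agitation
    # momentarily breaks a bond of this strength
    p_break = exp(-e_over_kT)
    print(f"{name}: E ≈ {e_over_kT:.0f} kT, "
          f"exp(-E/kT) ≈ {p_break:.1e}")
```

    Bonds worth a few kT are continually made and broken by thermal agitation – which is exactly what lets self-assembling structures rearrange and anneal towards their designed state – while bonds worth hundreds of kT are, for practical purposes, permanent.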

    It’s quite difficult to imagine a living organism that doesn’t have some kind of closed compartment to separate the organism from its environment. Cells have membranes and walls of greater or lesser complexity, but at their simplest these are bags made from a double layer of phospholipid molecules, arranged so their hydrophobic tails are sandwiched between two layers of hydrophilic head groups. The synthetic analogues of these membranes are called liposomes; they are easily made and commonly used in cosmetics and drug delivery systems. Polymer chemists make analogues of phospholipids – amphiphilic block copolymers – which form bags called polymersomes; these offer, in some respects, much more flexibility of design, often being more robust and allowing precise control of wall thickness. From such artificial bags, it is a short step to encapsulating systems of chemicals and biochemicals to mimic some kind of metabolism, and in some cases even some level of self-replication. What is more difficult is to control the traffic in and out of the compartment; ideally this would require pores which only allowed certain types of molecule in and out, or which could be opened and closed by certain triggers.
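    A back-of-the-envelope model shows why that traffic matters. The sketch below is my own illustration (the permeability and vesicle size are merely plausible orders of magnitude, not measured values); it treats passive transport into a vesicle as a flux proportional to the concentration difference across the membrane, J = P·A·(c_out − c_in):

```python
# A minimal sketch (illustrative parameter values only) of passive
# solute transport into a vesicle across a permeable membrane.
import math

P = 1e-6        # membrane permeability, m/s (illustrative)
R = 100e-9      # vesicle radius, m (a 200 nm liposome)
A = 4 * math.pi * R**2          # membrane area, m^2
V = (4/3) * math.pi * R**3      # internal volume, m^3

c_out = 1.0     # external concentration (arbitrary units)
c_in = 0.0      # internal concentration
dt = 1e-5       # time step, s

# Forward-Euler integration of dc_in/dt = P*(A/V)*(c_out - c_in)
t = 0.0
while c_in < 0.95 * c_out:
    c_in += P * (A / V) * (c_out - c_in) * dt
    t += dt
print(f"~95% equilibration after {t*1e3:.1f} ms")

# A selective pore or gate amounts to making P switchable: large for
# 'allowed' molecules or the 'open' state, close to zero otherwise.
```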

    It is this sensitivity to the environment that proves more complex to mimic synthetically. It’s still not generally appreciated how much information-processing power is possessed by even the most apparently simple single-celled organisms. This is because biological computing is carried out, not by electrons within transistors, but by molecules acting on other molecules. (Dennis Bray’s book Wetware is well worth reading on this subject.) The key elements of this chemical logic are enzymes that perform logical operations, reacting to the presence or absence of input molecules by synthesising, or not synthesising, output molecules.

    Efforts to make synthetic analogues of this molecular logic are only at the earliest stages. What is needed is a molecule that changes shape in the presence of an input molecule, and for this shape change to turn some catalytic activity on or off. In biology, it is proteins that carry out this function; the only synthetic analogues made so far are built from DNA (see my earlier essay Molecular Computing for more details and references).
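    As a toy picture of what an enzymatic logic gate means in practice – my own sketch, with the Hill-function form and every parameter invented for illustration rather than taken from any real molecule – consider a catalyst whose output is synthesised only when two input molecules are both bound:

```python
# A toy chemical AND gate (invented for illustration): catalytic
# activity is high only when both input molecules are present.
def hill(c, K=1.0, n=2):
    """Fraction of binding sites occupied at input concentration c."""
    return c**n / (K**n + c**n)

def and_gate_rate(c_a, c_b, vmax=1.0):
    # Output synthesis rate: appreciable only if BOTH inputs are
    # bound - the chemical analogue of a logical AND
    return vmax * hill(c_a) * hill(c_b)

for c_a, c_b in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]:
    rate = and_gate_rate(c_a, c_b)
    print(f"[A]={c_a:3.1f}, [B]={c_b:3.1f} -> output rate {rate:.2f}")
```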

    Given molecular logic elements whose outputs are other molecules, one can start to build networks linking many logic gates. In biology these networks integrate information about the cell’s environment and make decisions about different courses of action the cell can take – to swim towards food, or away from danger, for example.
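    Because the output of one gate is itself a molecule that can serve as the input to another, such gates compose naturally into networks. Continuing the invented example above (again a caricature, not a model of any real signalling pathway), a “swim towards food unless danger is present” decision might look like this:

```python
# A caricature decision network (invented for illustration): swim
# towards food only if food is sensed AND no danger signal is present.
def hill(c, K=1.0, n=2):
    return c**n / (K**n + c**n)

def decision_network(c_food, c_danger):
    food_signal = hill(c_food)        # output of a 'food sensor' gate
    danger_signal = hill(c_danger)    # output of a 'danger sensor' gate
    # Downstream gate: food AND (NOT danger); each signal is itself
    # a molecular concentration feeding the next gate
    return food_signal * (1.0 - danger_signal)

for food, danger in [(5.0, 0.0), (5.0, 5.0), (0.0, 0.0)]:
    print(f"food={food}, danger={danger}: "
          f"swim drive = {decision_network(food, danger):.2f}")
```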

    In order for a bacterium-sized object to be able to move – to swim through a fluid or crawl along a surface – it needs to solve some very interesting physics problems. For such a small object, it’s the viscosity of the fluid that dominates resistance to motion, in contrast to the situation at human scales, where it’s the inertia of the fluid that needs to be overcome. In this regime of very low Reynolds number, new swimming strategies need to be found. Bacteria often use the beating motion of tiny threads – flagella or cilia – to push themselves forward. At Sheffield we’ve been exploring another way of making microscopic swimmers – catalysing a chemical reaction on one half of the particle, producing an asymmetric cloud of reaction products that pushes the particle forward by osmotic pressure (more details here). But even though we can make artificial swimmers, we still don’t know how to control and steer them.
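    To see just how low “low Reynolds number” is here, a worked estimate (mine, using standard textbook values): for a bacterium of size L ≈ 1 μm swimming at v ≈ 30 μm/s in water, with density ρ ≈ 10³ kg m⁻³ and viscosity η ≈ 10⁻³ Pa s,

```latex
\mathrm{Re} = \frac{\rho v L}{\eta}
  \approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}})\,(3\times 10^{-5}\,\mathrm{m\,s^{-1}})\,(10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}}
  \approx 3\times 10^{-5}.
```

    At Reynolds numbers this small, inertia is irrelevant: a swimmer that executes a time-reversible, reciprocal stroke gets nowhere, which is why bacteria resort to non-reciprocal strategies such as rotating corkscrew-shaped flagella.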

    By now it should be obvious that the task of creating a truly synthetic biology remains a very distant goal. The more that biologists discover – particularly now they can use the tools of single-molecule biophysics to unravel the mechanisms of the sophisticated molecular machines within even the simplest types of organism – the cruder our efforts to mimic some of the features of cell biology seem. We do have a reasonable understanding of some important principles of nanoscale design – how to design macromolecules to make self-assembled structures resembling cell membranes, for example. But other areas are still wide open, from the fundamental theoretical issues around how to understand small systems driven far from equilibrium, through the intricacies of mechanisms to achieve accurate self-replication, to the challenge of designing chemical computers. On a practical level, to cope with this level of complexity we’re probably going to have to do what Nature does, and use evolutionary design methods. But if the goal is distant, we’ll learn a great deal from trying. Even to speculate about what a truly synthetic life-form might look like is itself helpful in sharpening our notions of what we might consider to be alive. It is this kind of experimental approach that will help us to find out the physical principles that underlie biology – not just the biology we know about, but all possible biologies.

    Three things that Synthetic Biology should learn from Nanotechnology

    I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

    It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

    1. Mind that metaphor
    Metaphors in science are powerful and useful things, but they come with two dangers:
    a. it’s possible to forget that they are metaphors, and to think they truly reflect reality;
    b. even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

    Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules and expression operating systems. But it is only a metaphor; biology isn’t really digital, and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

    Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

    On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that most people’s experience of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

    Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

    2. Blowing bubbles in the economy of promises

    Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

    The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants; the media demand big, unqualified claims before they will give their attention. Even the process of considering the societal and ethical aspects of research, and of carrying out public engagement, can have the effect of giving credence to the most speculative possible outcomes.

    There’s a very familiar tension emerging around synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time, namely industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

    The trouble with all bubbles, of course, is that reality catches up with unfulfilled promises, and in that climate people are less forgiving of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for making a biofuel, a new method of pest control, for example.

    3. It’s not about risk, it’s about trust

    The regulation of new technologies is focused on controlling risks, and it’s important that we try to identify and control those risks as the technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, it is to risk that the conversation turns by default. But often it isn’t really risk that fundamentally worries people – it’s trust. In the face of the inevitable uncertainties surrounding new technologies, this makes complete sense: if you can’t be confident of identifying risks in advance, the natural question to ask is whether the bodies and institutions controlling these technologies can be trusted. It must be a priority, then, to think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly help, but we have to ask whether it is realistic to expect these principles alone to be maintained in an environment demanding commercial returns from large-scale industrial operations.