New Dawn Fades?

Before K. Eric Drexler devised and proselytised for his particular, visionary version of nanotechnology, he was an enthusiast for space colonisation, closely associated with another, older visionary for that hypothetical technology – the Princeton physicist Gerard O’Neill. A recent book by historian Patrick McCray – The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future – follows this story, setting its origins in the context of its times, and argues that O’Neill and Drexler are archetypes of a distinctive type of actor at the interface between science and public policy – the “Visioneers” of the title. McCray’s visioneers are scientifically credentialed and frame their arguments in technical terms, but they stand at some distance from the science and engineering mainstream, and attract widespread, enthusiastic – and sometimes adulatory – support from broader mass movements, which sometimes take their ideas in directions that the visioneers themselves may not always endorse or welcome.

It’s an attractive and sympathetic book, with many insights about the driving forces which led people to construct these optimistic visions of the future. Continue reading “New Dawn Fades?”

Why isn’t the UK the centre of the organic electronics industry?

In February 1989, Jeremy Burroughes, at that time a postdoc in the research group of Richard Friend and Donal Bradley at Cambridge, noticed that a diode structure he’d made from the semiconducting polymer PPV glowed when a current was passed through it. This wasn’t the first time that interesting optoelectronic properties had been observed in an organic semiconductor, but it’s fair to say that it was the resulting Nature paper, which has now been cited more than 8000 times, that really launched the field of organic electronics. The company that they founded to exploit this discovery, Cambridge Display Technology, was floated on the NASDAQ in 2004 at a valuation of $230 million. Now organic electronics is becoming mainstream; a popular mobile phone, the Samsung Galaxy S, has an organic light emitting diode screen, and further mass market products are expected in the next few years. But these products will be made in factories in Japan, Korea and Taiwan; Cambridge Display Technology is now a wholly owned subsidiary of the Japanese chemical company Sumitomo. How is it that, despite an apparently insurmountable academic lead in the field and a successful history of university spin-outs, the UK is likely to end up at best a peripheral player in this new industry? Continue reading “Why isn’t the UK the centre of the organic electronics industry?”

Responsible innovation – some lessons from nanotechnology

A few weeks ago I gave a lecture at the University of Nottingham to a mixed audience of nanoscientists and science and technology studies scholars with the title “Responsible innovation – some lessons from nanotechnology”. The lecture was recorded, and the audio can be downloaded, together with the slides, from the Nottingham STS website.

Some of the material I talked about is covered in my chapter in the recent book Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies. A preprint of the chapter can be downloaded here: “What has nanotechnology taught us about contemporary technoscience?”

Can plastic solar cells deliver?

The promise of polymer solar cells is that they will be cheap enough and produced on a large enough scale to transform our energy economy, unlocking the sun’s potential to meet all our energy needs in a sustainable way. But there’s a long way to go from a device in a laboratory, or even a company’s demonstrator product, to an economically viable product that can be made at scale. How big is that gap, are there insuperable obstacles standing in the way, and if not, how long might it take us to get there? Some answers to these questions are now beginning to emerge, and I’m cautiously optimistic. Although most attention is focused on efficiency, the biggest outstanding technical issue is prolonging the lifetime of the solar cells. And before plastic solar cells can be introduced on a mass scale, it’s going to be necessary to find a substitute for indium tin oxide as a transparent electrode. If we can do this, though, the way is open for a real transformation of our energy system.

The obstacles are both technical and economic – but of course it doesn’t make sense to consider these separately, since it is technical improvements that will make the economics look better. A recent study starts to break down the likely costs and identify where we need to find improvements. The paper – Economic assessment of solar electricity production from organic-based photovoltaic modules in a domestic environment, by Brian Azzopardi, from Manchester University, with coworkers from Imperial College, Cartagena, and Risø (Energy and Environmental Science 4 p3741, 2011) – breaks down an estimate of the cost of power generated by a polymer photovoltaic fabricated on a plastic substrate by a manufacturing process already at the prototype stage. This process uses the most common combination of materials – the polymer P3HT together with the fullerene derivative PCBM. The so-called “levelised power cost” – i.e. the cost per unit of electricity, including all capital costs, averaged over the lifetime of the plant – comes in at between €0.19 and €0.50 per kWh for 7% efficient solar cells with a lifetime of 5 years, assuming southern European sunshine. This is, of course, too expensive both compared to alternatives like fossil fuel or nuclear energy, and to conventional solar cells, though the gap with conventional solar isn’t massive. But the technology is still immature, so what improvements in performance and reductions in cost is it reasonable to expect?
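The logic of a levelised cost estimate is simple to sketch. Here is a minimal illustration in Python with made-up input numbers – it is not the model used by Azzopardi and coworkers, whose analysis also covers balance-of-system costs, degradation and module replacement – but it shows how a short lifetime drives up the cost per kWh.

```python
def lcoe(capital_cost, annual_o_and_m, lifetime_years,
         peak_power_kw, insolation_kwh_per_kwp_year, discount_rate=0.05):
    """Levelised cost per kWh: discounted lifetime costs / discounted lifetime energy."""
    costs = capital_cost
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        d = (1.0 + discount_rate) ** year
        costs += annual_o_and_m / d                              # running costs
        energy += peak_power_kw * insolation_kwh_per_kwp_year / d  # discounted output
    return costs / energy

# Hypothetical example: a 1 kWp organic PV array costing €1500 installed,
# €20/year maintenance, a 5-year lifetime, and southern European sunshine
# (~1500 kWh per kWp per year).
print(round(lcoe(1500.0, 20.0, 5, 1.0, 1500.0), 2))  # → 0.24 €/kWh
```

With these illustrative inputs the answer lands inside the €0.19–0.50/kWh range quoted in the paper; doubling the lifetime roughly halves the cost, which is why lifetime matters so much.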

The two key technical parameters are efficiency and lifetime. Most research effort so far has concentrated on improving efficiencies – values greater than 4% are now routine for the P3HT/PCBM system; a newer system, involving a different fullerene derivative, PC70BM, blended with the polymer PCDTBT (I find even the acronym difficult to remember, but for the record the full name is poly[9’-hepta-decanyl-2,7-carbazole-alt-5,5-(4’,7’-di-2-thienyl-2’,1’,3’-benzothiadiazole)]), achieves efficiencies greater than 6%. These values will improve, through further tweaking of the materials and processes. Azzopardi’s analysis suggests that efficiencies in the range 7-10% may already be looking viable… as long as the cells last long enough. This is potentially a problem – it’s been understood for a while that the lifetime of polymer solar cells may well prove to be their undoing. The active materials in polymer solar cells – conjugated polymer semiconductors – are essentially overgrown dyes, and we all know that dyes tend to bleach in the sun. Five years seems to be a minimum lifetime to make this a viable technology, but up to now many laboratory devices have struggled to last more than a few days. Another recent paper, however, gives grounds for more optimism. This paper – High Efficiency Polymer Solar Cells with Long Operating Lifetimes (Advanced Energy Materials 1 p491, 2011), from the Stanford group of Michael McGehee – demonstrates a PCDTBT/PC70BM solar cell with a lifetime of nearly seven years. This doesn’t mean all our problems are solved, though – this device was encapsulated in glass, rather than printed on a flexible plastic sheet. Glass is much better than plastics at keeping harmful oxygen away from the active materials; to reproduce this lifetime in an all-plastic device will need more work to improve the oxygen barrier properties of the module.

How does the cost of a plastic solar cell break down, and what reductions is it realistic to expect? The analysis by Azzopardi and coworkers shows that the cost of the system is dominated by the cost of the modules, and the cost of the modules is dominated by the cost of the materials. The other elements of the system cost will probably continue to decrease anyway, as much of this is shared in common with other types of solar cells. What we don’t know yet is the extent to which the special advantages of plastic solar cells over conventional ones – their lightness and flexibility – can reduce the installation costs. As we’ve been expecting, the cheapness of processing plastic solar cells means that manufacturing costs – including the capital costs of the equipment to make them – are small compared to the cost of materials. The cost of these materials makes up 60-80% of the cost of the modules. Part of this is simply the cost of the semiconducting polymers; these will certainly reduce with time as experience grows at making them at scale. But the surprise for me is the importance of the cost of the substrate, or more accurately the cost of the thin, transparent conducting electrode which coats the substrate – this represents up to half of the total cost of materials. This is going to be a real barrier to the large scale uptake of this technology.
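Putting those quoted fractions together makes the point starkly. A back-of-envelope sketch (all numbers here are illustrative assumptions drawn from the rough ranges above, not figures from the paper):

```python
# Back-of-envelope: combine the rough cost fractions quoted above.
# All numbers are illustrative assumptions, not data from Azzopardi et al.
module_cost = 100.0          # arbitrary units
materials_fraction = 0.7     # materials as share of module cost (mid-range of 60-80%)
electrode_fraction = 0.5     # transparent electrode as share of materials ("up to half")

electrode_cost = module_cost * materials_fraction * electrode_fraction
print(f"electrode share of module cost: {electrode_cost / module_cost:.0%}")  # → 35%
```

On these assumptions the ITO-coated substrate alone accounts for roughly a third of the module cost – which is why finding a cheaper transparent electrode matters more than shaving the polymer cost.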

The transparent electrode currently used is a thin layer of indium tin oxide – ITO. This is a very widely used material in touch screens and liquid crystal displays, and it currently represents the major use of the metal indium, which is rare and expensive. So unless a replacement for ITO can be found, it’s the cost and availability of this material that’s going to limit the use of plastic solar cells. Transparency and electrical conductivity don’t usually go together, so it’s not straightforward to find a substitute. Carbon nanotubes, and more recently graphene, have been suggested, but currently they’re neither good enough by themselves, nor is there a process to make them cheaply at scale (a good summary of the current contenders can be found in Rational Design of Hybrid Graphene Films for High-Performance Transparent Electrodes by Zhu et al, ACS Nano 5 p6472, 2011). So, to make this technology work, much more effort needs to be put into finding a substitute for ITO.

Are you a responsible nanoscientist?

This is the pre-edited version of a piece which appeared in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be viewed here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, we saw the European Commission recommend a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the UK-based Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, while the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. But the uncertainty that necessarily surrounds any prediction of the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another code – the UK government’s Universal Ethical Code for Scientists – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that would probably pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals that do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and on how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

Good capitalism, bad capitalism and turning science into economic benefit

Why isn’t the UK more successful at converting its excellent science into wealth creating businesses? This is a perennial question – and one that’s driven all sorts of initiatives to get universities to handle their intellectual property better, to develop closer partnerships with the private sector and to create more spinout companies. Perhaps UK universities shied away from such activities thirty years ago, but that’s not the case now. In my own university, Sheffield, we have some very successful and high profile activities in partnership with companies, such as our Advanced Manufacturing Research Centre with Boeing, shortly to be expanded as part of an Advanced Manufacturing Institute with heavy involvement from Rolls Royce and other companies. Like many universities, we have some interesting spinouts of our own. And yet, while the UK produces many small high tech companies, we just don’t seem to be able to grow those companies to a scale where they’d make a serious difference to jobs and economic growth. To take just one example, the Royal Society’s Scientific Century report highlighted Plastic Logic, a company making flexible displays for applications like e-book readers, based on great research by Richard Friend and Henning Sirringhaus at Cambridge University. It’s a great success story for Cambridge, but the picture for the UK economy is less positive. The company’s Head Office is in California, its first factory was in Leipzig and its major manufacturing facility will be in Russia – the latter not unrelated to the fact that the Russian agency Rusnano invested $150 million in the company earlier this year.

This seems to reflect a general problem – why aren’t UK-based investors more willing to put money into small technology-based companies to allow them to grow? Again, this is something people have talked about for a long time, and there’ve been a number of more or less (usually less) successful government interventions to address the issue. The latest of these was announced in the Conservative party conference speech by the Chancellor of the Exchequer, George Osborne – “credit easing”, to “help solve that age old problem in Britain: not enough long term investment in small business and enterprise.”

But it’s not as if there isn’t any money in the UK to be invested – so the question to ask isn’t why money isn’t invested in high tech businesses, it is why money is invested in other places instead. The answer must be simple – because those other opportunities offer higher returns, at lower risk, on shorter timescales. The problem is that many of these opportunities don’t support productive entrepreneurship, which brings new products and services to people who need them and generates new jobs. Instead, to use a distinction introduced by economist William Baumol (see, for example, his article Entrepreneurship: Productive, Unproductive, and Destructive, PDF), they support unproductive entrepreneurship, which exploits suboptimal reward structures in an economy to make profits without generating real value. Examples of this kind of activity might include restructuring companies to maximise tax evasion, speculating in financial and property markets when the downside risk is shouldered by the government, exploiting privatisations and public/private partnerships that have been structured to the disadvantage of the tax-payer, and generating capital gains which result from changes in planning and tax law.

Most criticism of this kind of bad capitalism focuses on issues of fairness and equity, and on the damage to the democratic process done by the associated lobbying and influence-peddling. But it causes deeper problems than this – money and effort used to support unproductive entrepreneurship is unavailable to support genuine innovation, to create new products and services that people and society want and need. In short, bad capitalism crowds out good capitalism, and innovation suffers.

Why has the UK given up on nanotechnology?

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no on-going nanotechnology program at all. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011–2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience through Engineering to Application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano-Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which was subsequently intended to rise to £200 million. But last July, the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – the MNT program seems to have been regarded as a cautionary tale of how not to do things, rather than as an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few for whom the idea of nanotechnology was their primary loyalty. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped through the gaps.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology isn’t an important category by which science is classified, it doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But opportunities to promote interdisciplinary science will be missed, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But it very much does matter, I think, that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem, though. For the last thirty years the UK has not only not had an industrial policy to speak of, it has had a policy of not having an industrial policy. The last three years have revealed the shortcomings of this stance, as we realise that we can no longer rely on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.

Three things that Synthetic Biology should learn from Nanotechnology

I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

1. Mind that metaphor
Metaphors in science are powerful and useful things, but they come with two dangers:
a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules, expression operating systems. But it is only a metaphor; biology isn’t really digital and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that the experience of most people of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants, the media demand big and unqualified claims to attract their attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement can have the effect of giving credence to the most speculative possible outcomes.

There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up with unfulfilled promises, and in this environment people are less forgiving of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for producing a biofuel, a new method of pest control, for example.

3. It’s not about risk, it’s about trust

The regulation of new technologies is focused on controlling risks, and it's important that we try to identify and control those risks as the technology emerges. But there's a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that the conversation turns. But often it isn't really risk that fundamentally worries people; it's trust. In the face of the inevitable uncertainties of new technologies, this makes complete sense. If you can't be confident of identifying risks in advance, the question you naturally ask is whether the bodies and institutions that control these technologies can be trusted. It must be a priority, then, that we think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly be helpful, but we have to ask whether it is realistic to expect these principles alone to be maintained in an environment demanding commercial returns from large scale industrial operations.

The next twenty-five years

The Observer ran a feature today collecting predictions for the next twenty five years from commentators about politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, to build computers so powerful that they will surpass human intelligence, and to practise a new kind of medicine at the sub-cellular level that will allow us to abolish aging and death.

I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

What does it mean to be a responsible nanoscientist?

This is the pre-edited version of an article first published in Nature Nanotechnology 4, 336 (June 2009). The published version can be found here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, the European Commission recommended a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There's an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that "researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations." Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists themselves are happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one's research is one possible approach to proceeding in an ethical way. But the uncertainty that necessarily surrounds any prediction about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another recently issued code – the UK government's Universal Ethical Code for Scientists (PDF) – takes a different starting point, with one general principle – "ensure that your work is lawful and justified" – and one injunction to "minimise and justify any adverse effect your work may have on people, animals and the natural environment".

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be "lawful and justified" is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code's injunction to researchers that they "should not harm or create a biological, physical or moral threat to people". Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won't satisfy those people who are sceptical about the ability of institutions – whether in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals who do science and the organisations, institutions and social structures in which science is done. There's a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Yet scientists shouldn't underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one's work, and on how people with different points of view might react to it; such scientists will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.