What does it mean to be a responsible nanoscientist?

This is the pre-edited version of an article first published in Nature Nanotechnology 4, 336 (June 2009). The published version can be found here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, the European Commission recommended a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for the consequences of their research, even if those consequences are ones they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research which, in the absence of direct application, remains free of moral implications, while the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are themselves happy to embrace this blurring – after all, they are glad to take credit for the positive impact of past scientific advances, and to cite the big impacts that might hypothetically flow from their own results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence on the way it is commercialised. If there are adverse environmental or health impacts from some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating the conditions in which people or ecosystems were exposed to the hazard, rather than with the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. But the uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another recently issued code – the UK government’s Universal Ethical Code for Scientists (PDF) – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection, through the traditional mechanisms of democratic accountability, with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask: who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals who do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

David Willetts on Science and Society

The UK’s Minister for Universities and Science, David Willetts, made his first official speech about science at the Royal Institution on 9 July 2010. What everyone is desperate to know is how big a cut the science budget will take. Willetts can’t answer this yet, but the background position isn’t good. We know that the budget of his department – Business, Innovation and Skills – will be cut by somewhere between 25% and 33%. Science accounts for about 15% of this budget, with universities accounting for another 29% (not counting the cost of student loans and grants, which accounts for another 27%). So, there’s not going to be a lot of room to protect spending on science and on research in universities.
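To see why, here is a rough back-of-the-envelope calculation (the scenario of fully protecting the science and university lines is my illustrative assumption, not anything Willetts said). If the 15% science line and the 29% university line were both left untouched, the remaining 56% of the BIS budget would have to absorb the whole departmental cut:

\[
\frac{25\% \text{ to } 33\%}{1 - (0.15 + 0.29)} \approx 45\% \text{ to } 59\%.
\]

Cuts of that depth to everything else in the department look implausible, which is why some of the pain seems bound to fall on science and universities.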

Having said that, this is a very interesting speech, in that Willetts takes some very clear positions on a number of issues related to science and innovation and their relationship to society, some of which are rather different from the views held in government before. I met Willetts earlier in the year, and he said a couple of things then that struck me. He said that there was nothing in science policy that couldn’t be illuminated by looking at history. He mentioned in particular “The Shock of the Old”, by David Edgerton (which I’ve previously discussed here), and I noticed that at the Royal Society meeting after the election he referred very approvingly to David Landes’s book “The Wealth and Poverty of Nations”. More personally, he referred with pride to his own family origins as Birmingham craftsmen, and he clearly knows the story of the Lunar Society well. His own academic background is as a social scientist, so it is to be expected that he’d have some well-developed views about science and society. Here’s how I gloss the relevant parts of his speech.

More broadly, as society becomes more diverse and cultural traditions increasingly fractured, I see the scientific way of thinking – empiricism – becoming more and more important for binding us together. Increasingly, we have to abide by John Rawls’s standard for public reason – justifying a particular position by arguments that people from different moral or political backgrounds can accept. And coalition, I believe, is good for government and for science, given the premium now attached to reason and evidence.

The American political philosopher John Rawls was very concerned about how, in a pluralistic society, one could agree on a common set of moral norms. He rejected the idea that you could construct morality on entirely scientific grounds, as consequentialist ethical systems like utilitarianism try to, looking instead for a principles-based morality; but he recognised that this was problematic in a society where Catholics, Methodists, Atheists and Muslims all had their different sets of principles. Hence the idea of trying to find moral principles that everyone in society can agree on, even though the grounds on which they approve of these principles may differ from group to group. In a coalition uniting parties including people as different as Evan Harris and Philippa Stroud, one can see why Willetts might want to call in Rawls for help.

The connection to science is an interesting one, which draws on a particular reading of the development of the empirical tradition. According, for example, to Shapin and Schaffer (in their book “Leviathan and the Air-Pump”), one of the main aims of the Royal Society in its early days was to develop a way of talking about philosophy – based on experiment and empiricism, rather than doctrine – that didn’t evoke the clashing religious ideologies that had been the cause of the bloody religious wars of the seventeenth century. On this view (championed by Robert Boyle), in experimental philosophy one should refrain entirely from talking about contentious issues like religion, restricting oneself to discussion of what one measures in experiments that are open to be observed and reproduced by anyone.

You might say that science is doing so well in the public sphere that the greatest risks it faces are complacency and arrogance. Crude reductionism puts people off.

I wonder if he’s thinking of the current breed of scientific atheists like Richard Dawkins?

Scientists can morph from admired public luminaries into public enemies, as debates over nuclear power and GM made clear. And yet I remain optimistic here too. The UK Research Councils had the foresight to hold a public dialogue about ramifications of synthetic biology ahead of Craig Venter developing the first cell controlled by synthetic DNA. This dialogue showed that there is conditional public support for synthetic biology. There is great enthusiasm for the possibilities associated with this field, but also fears about controlling it and the potential for misuse; there are concerns about impacts on health and the environment. We would do well to remember this comment from a participant: “Why do they want to do it? … Is it because they will be the first person to do it? Is it because they just can’t wait? What are they going to gain from it? … [T]he fact that you can take something that’s natural and produce fuel, great – but what is the bad side of it? What else is it going to do?” Synthetic biology must not go the way of GM. It must retain public trust. That means understanding that fellow citizens have their worries and concerns which cannot just be dismissed.

This is a significant passage, which seems to accept two important features of some current thinking about public engagement with science. Firstly, that it should be “upstream” – addressing areas of science, like synthetic biology, for which concrete applications have yet to emerge, and indeed in advance of significant scientific breakthroughs like Venter’s “synthetic cell”. Secondly, it accepts that the engagement should be two-way, that the concerns of the public may well be legitimate and should be taken seriously, and that these concerns go beyond simple calculations of risk.

The other significant aspect of Willetts’s speech was a wholesale rejection of the “linear model” of science and innovation, but this needs another post to discuss in detail.

Whose goals should direct goal-directed research?

I’ve taken part in panel discussions at two events with a strong Science and Technology Studies flavour in the last couple of months. “Democratising Futures” was a meeting under the auspices of the Centre for Research in the Arts, Social Sciences and Humanities, at Cambridge on 27 May 2010. The Science and Democracy Network’s meeting was held in association with the Royal Society at the Kavli Centre on 29 June 2010. What follows is a composite of the sorts of things I said at the two meetings.

“There is no alternative” is a phrase with a particular resonance in British politics, but it also expresses a way of thinking about the progress of science and technology. To many people, science and technology represent an autonomous force, driven forward by its own internal logic. In this view, the progress of science and technology cannot be effectively steered, much less restrained. I think this view is both wrong and pernicious.

The reality is that there are very many places in which decisions and choices are made about the directions of science and technology. These include the implicit decisions made by the (international) scientific community, as a result of which the fashionable and timely topics of the day acquire momentum; the much more explicit choices made by funding agencies about which areas to attach funding priority to; and the preferences expressed by a variety of actors in the private sector, whether those are the beliefs that inform investment decisions by venture capitalists or the strategic decisions made by multinational companies. It’s obvious that these decisions are not always informed by perfect information and rationality – they will blend informed but necessarily fallible judgements about how the future might unfold with sectional interests, and will be underpinned by ideology.

To take an example which I don’t think is untypical: in the funding body I know best, the UK’s Engineering and Physical Sciences Research Council (EPSRC), priorities are set by a mixture of top-down and bottom-up pressures. The bottom-up aspect comes from the proposals the council receives from individual scientists to pursue those lines of research that they think are interesting. From the top, though, comes increasing pressure from government to prioritise research in line with its broad strategies.

In setting a strategic framework, EPSRC distinguishes between the technical opportunities that the current state of science offers, and the demands of the “users” of research, in industry and beyond. Advice on the former typically comes from practising scientists, who alone have the expertise to know what is possible. This advice won’t be completely objective, of course – it will be subject to the whims of academic fashion and a certain incumbency bias in favour of established, well-developed fields. The industrial scientists who provide advice will, of course, have a direct interest in science that benefits their own industries and their own companies. Policy demands supporting science that can be translated into the marketplace, but this needs to be balanced against a reluctance to subsidise the private sector directly. And even accepting the desirability of supporting science that can be taken to market quickly, there is an incumbency bias here too. Given that this advice necessarily comes from people representing established concerns, who is going to promote the truly disruptive industries?

So, given these routes by which scientists and industry representatives have explicit mechanisms for influencing the agenda and priorities for publicly funded science, the big outstanding question is how the rest of the population can have some influence. Of course, research councils are aware of the broader societal contexts that surround the research they fund, and the scientists and industry people providing advice will be asked to incorporate these broader issues in their thinking. The danger is that these people are not well equipped to make such judgements. In Arie Rip’s phrase, it’s likely that they will be using “folk social science” – a set of preconceptions and prejudices, unsupported by evidence, about what the wider population thinks about science and technology (one very common example of this in the UK is the proposition that one can gauge probable public reactions to science by reading the Daily Mail).

It might be argued that the proper way for wider societal and ethical issues to be incorporated in scientific priority setting is through the usual apparatus of representative democracy – in the UK system, through Ministers who are responsible to Parliament. This fails in practice, for both institutional and practical reasons. There is a formal principle in the UK known as the Haldane principle (like much else in the UK, this is probably an invented tradition), which states that science should be governed at one remove from government, with decisions being left to scientists. The funding bodies – the research councils – are not direct subsidiaries of their parent government department, but free-standing agencies. This doesn’t stop them from being given a strong strategic steer, through both formal and informal routes, but they generally resist taking direct orders from the Minister. But there are more general reasons why science resists democratic oversight through traditional mechanisms – it is at once too big and too small an issue. The long timescales of science and the convoluted routes by which it impacts on everyday life; the poor understanding of science on the part of elected politicians; the lack of immediate feedback from the electorate in politicians’ postbags – all these factors contribute to science not having a high political profile, despite the deep and fundamental impacts it has on the way people live.

Here, then, is the potential role of public engagement – it should form a key input into identifying which potential goals of science and technology might have broad societal support. It was in recognition of these sorts of issues that EPSRC introduced a Societal Issues Panel into its advisory structure – this is a high-level strategic advice panel on a par with the Technical Opportunities Panel and the User Panel.

Another development in the way people are thinking about scientific priority setting makes these issues even more pointed – this is the growing popularity across the world of the idea of the “Grand Challenge” as a way of organising science. Here, we have an explicit link being made between scientific priorities and societal goals – which leads directly to the question “whose goals?”

Grand Challenges provide a way of contextualising research that goes beyond a rather sterile dichotomy between “applied” and “blue sky” research – they support work that has some goal in mind, but a goal that is more distant than the typical object of applied research, and often on a larger scale. The “challenge” or context is typically based on some larger societal goal, rather than on a question arising from a scientific discipline. This might be a global problem, such as the need to develop a low carbon energy infrastructure or to ensure food security for a growing population, or something more local to a particular country or group of countries, such as the problems of ageing populations in the UK and other developed countries. The definition in terms of a societal goal necessarily implies that the work needs to be cross-disciplinary in character, and there is growing recognition, in principle at least, of the importance of the social sciences.

An example of the way in which public engagement could help steer such a grand challenge programme was given by the EPSRC’s recent Grand Challenge in Nanotechnology for Medicine and Healthcare. Here, a public engagement exercise was designed with the explicit intention of using what emerged as an input, together with expert advice from academic scientists, clinicians and industry representatives, into a decision about how to shape the priorities of the programme.

I’ve written in more detail about this process elsewhere. Here, it’s worth stressing what made this programme particularly suitable for this approach. The proposed research was framed explicitly as a search for technological responses to societal issues, so it was easy to argue that public attitudes and priorities were an important factor to consider. The area is also strongly interdisciplinary; this makes the traditional approaches of relying solely on expert advice less effective. Very few, if any, individual scientists have expertise that crosses the range of disciplines that is necessary to operate in the field of nanomedicine, so technical advice needs to integrate the contributions of people expert in areas as different as colloid chemistry and neuroscience, for example.

The outcome of the public engagement provided rich insights, which in some cases surprised the expert advisors. These insights included both specific commentaries on the proposed areas of research that were being considered (such as the use of nanotechnology-enabled surfaces to control pathogens) and a more general filter – the idea that a key issue in deciding people’s response to a proposed technology was the degree to which it gave control and empowerment to the individual, or took it away. Of course, people were concerned about issues of risk and regulation, but the form of the engagement was such that much broader questions than the simple “is it safe?” were discussed.

I believe that this public engagement was very successful, because it concerned a rather concrete and tightly defined technology area, it was explicitly linked to a pending funding decision, and there was complete clarity about how it would contribute, together with more conventional consultations, to that decision – that is, what kind of applications of nanotechnology to medicine and healthcare a forthcoming funding call would prioritise. Of course, there are still many open questions about using public engagement more widely in this sort of priority setting.

The first issue is the question of scope – at what level does one ask the question? For example, in the area of energy research, one could ask: should we have a programme of energy research, and if so how big? Or, taking the answer to that question as given, one could ask whether research in biofuels should form a part of the energy programme. Or one could ask what kind of biofuel we should prioritise. My experience from a variety of public engagement exercises in the area of nanotechnology is that the more specific the question, the easier it is for people to engage with the process. But the criticism of narrowing public engagement down in this way is that, by concentrating on the details, one can be accused of taking the answers to the big questions as read.

But the big questions are fundamentally questions of politics in its proper sense. They are questions about what sort of world we want to live in and what kinds of lives we want to lead. The inescapable conclusion, for me, is that the explicit linkage of science and this kind of politics – the politics of big questions about society’s future – is both inevitable and desirable.

Many scientists will instinctively recoil from this enmeshing of science and politics. I think this is a mistake. It is less controversial to say we need more science in politics – since so many of the big issues we face have a scientific dimension, most people agree that decisions on these issues need to be informed by science. But we also need a more explicit recognition of the political dimensions of science – because the science we do has such potential to shape the way our society will change, we need positive visions of those changes to steer the way science develops. So, we need more science in politics, and more politics in science. And, when it comes to it, we probably need more politics in politics too.

In addition to these more fundamental questions, there are some very practical linked issues related to the scale of the engagement exercises one does, their methodological robustness, and their cost. Social scientists can contribute a great deal to understanding how to make them as reliable as possible, but I believe that a certain pragmatism is called for when one considers their inevitable methodological shortcomings – they need to be seen as one input into a decision-making process that already falls short of perfection. This is inevitable; it is expensive in money and time to do these exercises properly. The UK research councils seem to have settled down to an informal understanding that they will do one or two of these exercises a year, on the topics that seem to be the most potentially controversial. Following the nanomedicine dialogue, there have been recently completed exercises on synthetic biology and geo-engineering. But we will see how strong the will is to continue in this way in an environment with much less money around.

In addition to practical difficulties, there are people who oppose in principle any use of public engagement in setting scientific priorities. One can identify perhaps three classes of objections. The first will come from those scientists who oppose any infringement of the sovereignty of the “independent republic of science”. The second can be heard from some politicians, who regard the use of direct public engagement as an infringement of the principles of representative democracy. The third will come from free market purists, who will insist that the market provides the route by which informal, non-scientific knowledge is incorporated in decisions about how technology is developed. I don’t think any of these objections is tenable, but that’s the subject for a much longer discussion.