Archive for the ‘Social and economic aspects of nanotechnology’ Category

On Singularities, mathematical and metaphorical

Saturday, June 20th, 2015

Transhumanists look forward to a technological singularity, which we should expect to take place on or around 2045, if Ray Kurzweil is to be relied on. The technological singularity is described as something akin to an event horizon, a date at which technological growth becomes so rapid that to look beyond it becomes quite unknowable to us mere cis-humans. In some versions this is correlated with the time when, due to the inexorable advance of Moore’s Law, machine intelligence surpasses human intelligence and goes into a recursive cycle of self-improvement.

The original idea of the technological singularity is usually credited to the science fiction writer Vernor Vinge, though earlier antecedents can be found, for example in the writing of the British Marxist scientist J.D. Bernal. Even amongst transhumanists and singularitarians there are different views about what might be meant by the singularity, but I don’t want to explore those here. Instead, I note this – when we talk of the technological singularity we’re using a metaphor, a metaphor borrowed from mathematics and physics. It’s the Singularity as a metaphor that I want to probe in this post.

A real singularity happens in a mathematical function when, for some value of the argument, the result of the function is undefined. A function like 1/(t-t0) takes larger and larger values as t gets closer and closer to t0, diverging to infinity at t=t0 itself. Kurzweil’s thinking about technological advance revolves around the idea of exponential growth, as exemplified by Moore’s Law, so it’s worth making the obvious point that an exponential function doesn’t have a singularity. An exponentially growing function – exp(t/T) – certainly gets larger as t gets larger, and indeed the absolute rate of increase goes up too, but this function never becomes infinite for any finite t.
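
To put that contrast in symbols – a minimal restatement of the point above, with t0 and T as in the text:

```latex
\left|\frac{1}{t-t_0}\right| \to \infty \ \text{ as } t \to t_0,
\qquad\text{whereas}\qquad
e^{t/T} \ \text{remains finite for every finite } t .
```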

An exponential function is, of course, what you get when you have a constant fractional growth rate – if you charge your engineers to make your machine or device 20% better every year, for as long as they are successful in meeting their annual target you will get exponential growth. To get a technological singularity from a Moore’s law-like acceleration of technology, the fractional rate of technological improvement must itself be increasing in time (let me leave aside for the moment my often expressed conviction that technology isn’t a single thing, and that it makes no sense at all to imagine that there’s some simple scalar variable that can be used to describe “technological progress” in general).
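
To illustrate that last point with a worked example (my own sketch, using a deliberately crude one-variable index x of “technological progress”): if the fractional growth rate is itself proportional to x, growth is hyperbolic rather than exponential, and it reaches a genuine singularity at a finite time:

```latex
\frac{dx}{dt} = \frac{x^2}{\tau}
\;\;\Longrightarrow\;\;
x(t) = \frac{x_0}{1 - x_0 t/\tau},
\qquad\text{which diverges at the finite time } t^{*} = \tau / x_0 ,
```

whereas constant fractional growth, dx/dt = x/T, gives only x(t) = x0 exp(t/T), with no singularity at any finite time.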

It isn’t totally implausible that something like this should happen – after all, we use technology to develop more technology. Faster computers should help us design more powerful microprocessors. On the other hand, as the components of our microprocessors shrink, the technical problems we have to overcome to develop the technology themselves grow more intractable. The question is, do our more powerful tools outstrip the greater difficulty of our outstanding tasks? The past has certainly seen periods in which the rate of technological progress accelerated, due to the recursive, self-reinforcing effects of technological and social innovation. This is one way of reading the history of the first industrial revolution, of course – but the industrial revolution wasn’t a singularity, because the increase in the rate of change wasn’t sustained; it merely settled down at a higher value. What isn’t at all clear is whether what is happening now corresponds even to a one-off increase in the rate of change, let alone the sustained and limitless increase in the rate of change that is needed to produce a mathematical singularity. The hope or fear of singularitarians is that this is about to change through the development of true artificial intelligence. We shall see.

Singularities occur in physics too. Or, to be more precise, they occur in the theories that physicists use. When we ask physics to calculate the self-energy of an electron, say, or the structure of space-time at the centre of a black hole, we end up with mathematical bad behaviour, singularities in the mathematics of the theories we are using. Does this mathematical bad behaviour correspond to bad behaviour in the physical world, or is it simply alerting us to the shortcomings of our understanding of that physical world? Do we really see infinity in the singularity – or is it just a signal to say we need different physics? Some argue it’s the latter, and here’s an example from my own field to illustrate why one might think that.
The great physicist Sam Edwards (who died a month ago) made his name and founded the branch of physics I’ve worked in, by realising that you could describe the statistical mechanics of polymer molecules with a theory that had the formal structure of the quantum field theories he himself learnt as a postdoc with Julian Schwinger.

Like those quantum field theories, Edwards’s theories of macromolecules produce some inconvenient, and unphysical, infinities that one has to work around. To Edwards, this was not a worry at all – as he was quoted as saying, “I know there are atoms down there, but I don’t care”. Edwards’s theories treated polymer molecules as wiggly worms that are wiggly on all scales, no matter how small. This works brilliantly if you want to know what’s happening on scales larger than the size of individual atoms, but it’s the existence of those very atoms that means the polymer isn’t wiggly all the way down, as it were. So we don’t worry that the theory doesn’t work at scales smaller than atoms, and we know what the different physics is that we’d need to use to understand behaviour on those scales. In the quantum field theories that describe electrons and other sub-atomic particles, one might suspect that there’s some similar graininess that intervenes to save us from the bad mathematical behaviour of our theories, but we don’t yet know what new kind of theory might be needed below the Planck scale, where we think the graininess might set in.

The most notorious singularities in physics are the ones that are predicted to occur in the middle of black holes – here it is the equations of general relativity that predict divergent behaviour in the structure of space-time itself. But like other singularities in physics, what the mathematical singularity is signalling to us is that near the singularity, we have different physics, physics that we don’t yet understand. In this case the unknown is the physics of quantum gravity, where quantum mechanics meets general relativity. The singularity at the centre of a black hole is a double mystery; not only do we not understand what the new physics might be, but the phenomena of this physical singularity are literally unobservable, hidden by the event horizon which prevents us from seeing inside the black hole. The new physics beyond the Planck scale is unobservable, too, but for a different, less fundamental reason – the particle accelerators that we’d need to probe it would have to be unfeasibly huge in scale and energy, on scales that seem unattainable to humans with our current earth-bound constraints. Is it always a given that physical singularities are unobservable? Naked singularities are difficult to imagine, but don’t seem to be completely ruled out.

The biggest singularity in physics of all is the one where we think it all began – the Big Bang, a singularity in time that we cannot see through, just as the end of the universe in a big crunch would provide a singularity in time that we cannot see beyond. Now we enter the territory of thinking about the creation of the universe and the ultimate end of the world, which of course have long been rich themes for religious speculation. This connects us back to the conception of a technologically driven singularity in human history, as a discontinuity in the quality of human experience and the character of human nature. I’ve already argued at length that this conception of the technological singularity is a metaphor that owes a great deal to these religious forebears.

So here we’re back at the metaphorical singularity – and perhaps metaphors are best left to creative writers. If we want a profound treatment of the metaphors of singularity, we should look, not to futurists, but to science fiction. I know of no more thought-provoking treatment of singularities and the singularity than that of M. John Harrison in his brilliant trilogy, “Light”, “Nova Swing” and “Empty Space”.

At the astrophysical centre of the trilogy is a vast, naked singularity. Bits of this drop off onto nearby planets, leading to ragged borders beyond which things are familiar but weirdly distorted, a ragged edge across which one can, with some risk, move back and forth, and which is crossed and recrossed by herds of inscrutable cats. The human narrative moves between a near-present and a further future which feels very much post-singularity. This future is characterised by routine faster-than-light travel; “shadow operators” – disembodied pieces of code which find unexplained, nanobot-like substrates to run on; and radical, cheap genetic engineering leading to widespread, wholesale (and indeed retail) human modification. There is a fully realised nanomedicine, and widely available direct brain interfaces, one application of which turns humans into the cyborg controllers of the highest-performing faster-than-light spaceships. And yet the motivations that persuade a young girl to sign up to this irreversible transformation seem all too recognisable, and indeed the familiarity of this post-singularity world seems all too plausible.

Beyond the singularities, beyond the space opera setting and Harrison’s brilliant and stylish writing, the core of the trilogy concerns the ways people construct, and reconstruct, and sometimes fabricate, their own identities. It’s this theme that is claimed by transhumanism, but it’s one that seems to me to be very much more universal than that.

Lecture on responsible innovation and the irresponsibility of not innovating

Tuesday, November 4th, 2014

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue in science policy for the idea of Responsible Research and Innovation into a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost but in the meantime, here is a PDF of my slides.

Your mind will not be uploaded

Sunday, September 14th, 2014

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme, one that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have on the simulation of consciousness.
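
To give a feel for the scales involved, here is a rough order-of-magnitude sketch. The figures of ~10^11 neurons and ~10^4 synapses per neuron are commonly quoted estimates; the bytes-per-synapse and molecules-per-synapse numbers are purely illustrative assumptions of mine, not values taken from the argument that follows.

```python
# Back-of-envelope comparison: connectome-level vs molecular-level description.
# All numbers are rough, illustrative assumptions.
neurons = 1e11                  # ~100 billion neurons
synapses_per_neuron = 1e4       # commonly quoted order of magnitude
bytes_per_synapse = 10          # assume ~10 bytes to record one connection and its strength

connectome_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"Wiring diagram alone: ~{connectome_bytes:.0e} bytes (~{connectome_bytes/1e15:.0f} PB)")

# A molecular-level state description is many orders of magnitude bigger still.
molecules_per_synapse = 1e6     # proteins, lipids and small molecules per synapse (assumption)
bytes_per_molecule = 10         # identity, position, state (assumption)
molecular_bytes = neurons * synapses_per_neuron * molecules_per_synapse * bytes_per_molecule
print(f"Molecular-scale snapshot: ~{molecular_bytes:.0e} bytes")
```

Even this only counts a static snapshot; simulating how such a state evolves in time is a far bigger task again.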

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it. (more…)

Transhumanism has never been modern

Sunday, August 24th, 2014

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet, their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and of the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the middle ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature. (more…)

The economics of innovation stagnation

Saturday, May 3rd, 2014

What would an advanced economy look like if technological innovation began to dry up? Economic growth would begin to slow, and we’d expect the shortage of opportunities for new, lucrative investments to lead to a period of persistently lower rates of return on capital. The prices of existing income-yielding assets would rise, and as wealth-holders hunted out increasingly rare higher yielding investment opportunities we’d expect to see a series of asset price bubbles. As truly transformative technologies became rarer, when new technologies did come along we might see them being associated with hype and inflated expectations. Perhaps we’d also begin to see growing inequality, as a less dynamic economy cemented the advantages of the already wealthy and gave fewer opportunities to talented outsiders. It’s a picture, perhaps, that begins to remind us of the characteristics of the developed economies now – difficulties summed up in the phrase “secular stagnation”. Could it be that, despite the widespread belief that technology continues to accelerate, innovation stagnation, at least in part, underlies some of our current economic difficulties?

[Figure: G7 real GDP per capita plot]
Growth in real GDP per person across the G7 nations. GDP data and predictions from the IMF World Economic Outlook 2014 database; population estimates from the UN World Population Prospects 2012. The solid line is the best fit to the 1980–2008 data of a logistic function of the form A/(1+exp(-(T-T0)/B)); the dotted line represents constant annual growth of 2.6%.
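
For anyone wanting to reproduce this kind of fit, here is a minimal sketch of fitting the logistic form quoted in the caption. The data below are synthetic stand-ins, not the IMF/UN series used for the actual figure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(T, A, T0, B):
    # The functional form quoted in the caption: A/(1 + exp(-(T - T0)/B))
    return A / (1.0 + np.exp(-(T - T0) / B))

# Synthetic stand-in for an index of G7 real GDP per person (illustrative only)
rng = np.random.default_rng(0)
years = np.arange(1980, 2009)
index = logistic(years, 100.0, 1995.0, 12.0) + rng.normal(0.0, 0.5, years.size)

params, _ = curve_fit(logistic, years, index, p0=[100.0, 1995.0, 10.0])
A, T0, B = params
print(f"A = {A:.1f}, T0 = {T0:.1f}, B = {B:.1f}")
```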

The data is clear that growth in the richest economies of the world, the economies operating at the technological leading edge, was slowing down even before the recent financial crisis. (more…)

New Dawn Fades?

Wednesday, April 23rd, 2014

Before K. Eric Drexler devised and proselytised for his particular, visionary version of nanotechnology, he was an enthusiast for space colonisation, closely associated with another, older visionary of that hypothetical technology – the Princeton physicist Gerard O’Neill. A recent book by historian Patrick McCray – The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future – follows this story, setting its origins in the context of its times, and argues that O’Neill and Drexler are archetypes of a distinctive type of actor at the interface between science and public policy – the “Visioneers” of the title. McCray’s visioneers are scientifically credentialed and frame their arguments in technical terms, but they stand at some distance from the science and engineering mainstream, and attract widespread, enthusiastic – and sometimes adulatory – support from broader mass movements, which sometimes take their ideas in directions that the visioneers themselves may not always endorse or welcome.

It’s an attractive and sympathetic book, with many insights about the driving forces which led people to construct these optimistic visions of the future. (more…)

Why isn’t the UK the centre of the organic electronics industry?

Monday, November 12th, 2012

In February 1989, Jeremy Burroughes, at that time a postdoc in the research group of Richard Friend and Donal Bradley at Cambridge, noticed that a diode structure he’d made from the semiconducting polymer PPV glowed when a current was passed through it. This wasn’t the first time that interesting optoelectronic properties had been observed in an organic semiconductor, but it’s fair to say that it was the resulting Nature paper, which has now been cited more than 8000 times, that really launched the field of organic electronics. The company that they founded to exploit this discovery, Cambridge Display Technology, was floated on the NASDAQ in 2004 at a valuation of $230 million. Now organic electronics is becoming mainstream; a popular mobile phone, the Samsung Galaxy S, has an organic light-emitting diode screen, and further mass market products are expected in the next few years. But these products will be made in factories in Japan, Korea and Taiwan; Cambridge Display Technology is now a wholly owned subsidiary of the Japanese chemical company Sumitomo. How is it that, despite an apparently insurmountable academic lead in the field and a successful history of university spin-outs, the UK is likely to end up at best a peripheral player in this new industry? (more…)

Responsible innovation – some lessons from nanotechnology

Friday, October 19th, 2012

A few weeks ago I gave a lecture at the University of Nottingham to a mixed audience of nanoscientists and science and technology studies scholars with the title “Responsible innovation – some lessons from nanotechnology”. The lecture was recorded, and the audio can be downloaded, together with the slides, from the Nottingham STS website.

Some of the material I talked about is covered in my chapter in the recent book Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies. A preprint of the chapter can be downloaded here: “What has nanotechnology taught us about contemporary technoscience?”

Can plastic solar cells deliver?

Sunday, November 13th, 2011

The promise of polymer solar cells is that they will be cheap enough, and produced on a large enough scale, to transform our energy economy, unlocking the sun’s potential to meet all our energy needs in a sustainable way. But there’s a long way to go from a device in a laboratory, or even a company’s demonstrator product, to an economically viable product that can be made at scale. How big is that gap, are there insuperable obstacles standing in the way, and if not, how long might it take us to get there? Some answers to these questions are now beginning to emerge, and I’m cautiously optimistic. Although most attention is focused on efficiency, the biggest outstanding technical issue is to prolong the lifetime of the solar cells. And before plastic solar cells can be introduced on a mass scale, it’s going to be necessary to find a substitute for indium tin oxide as a transparent electrode. If we can do both of these things, the way is open for a real transformation of our energy system.

The obstacles are both technical and economic – but of course it doesn’t make sense to consider these separately, since it is technical improvements that will make the economics look better. A recent study starts to break down the likely costs and identify where we need to find improvements. The paper – Economic assessment of solar electricity production from organic-based photovoltaic modules in a domestic environment, by Brian Azzopardi, from Manchester University, with coworkers from Imperial College, Cartagena, and Riso (Energy and Environmental Science 4 p3741, 2011) – breaks down an estimate of the cost of power generated by a polymer photovoltaic fabricated on a plastic substrate by a manufacturing process already at the prototype stage. This process uses the most common combination of materials – the polymer P3HT together with the fullerene derivative PCBM. The so-called “levelised power cost” – i.e. the cost per unit of electricity, including all capital costs, averaged over the lifetime of the plant – comes in at between €0.19 and €0.50 per kWh for 7% efficient solar cells with a lifetime of 5 years, assuming southern European sunshine. This is, of course, too expensive compared both to alternatives like fossil fuel or nuclear energy and to conventional solar cells, though the gap with conventional solar isn’t massive. But the technology is still immature, so what improvements in performance and reductions in cost is it reasonable to expect?
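
For readers unfamiliar with the term, a levelised cost is simply total discounted costs divided by total discounted energy delivered. Here is a minimal sketch of the calculation; the input numbers are placeholder values of my own, chosen only to show the mechanics, not the figures used in the Azzopardi paper.

```python
def levelised_cost(capital_cost, annual_om, annual_energy_kwh, lifetime_years, discount_rate):
    """Levelised cost of electricity: discounted lifetime costs / discounted lifetime energy."""
    discounted_costs = capital_cost + sum(
        annual_om / (1 + discount_rate) ** year for year in range(1, lifetime_years + 1))
    discounted_energy = sum(
        annual_energy_kwh / (1 + discount_rate) ** year for year in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy

# Placeholder numbers for a small domestic array (illustrative only, not from the paper)
cost_per_kwh = levelised_cost(capital_cost=2000.0, annual_om=30.0,
                              annual_energy_kwh=1400.0, lifetime_years=5,
                              discount_rate=0.05)
print(f"Levelised cost: {cost_per_kwh:.2f} per kWh")
```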

The two key technical parameters are efficiency and lifetime. Most research effort so far has concentrated on improving efficiencies – values greater than 4% are now routine for the P3HT/PCBM system; a newer system, involving a different fullerene derivative, PC70BM, blended with the polymer PCDTBT (I find even the acronym difficult to remember, but for the record the full name is poly[9’-hepta-decanyl-2,7-carbazole-alt-5,5-(4’,7’-di-2-thienyl-2’,1’,3’-benzothiadiazole)]), achieves efficiencies greater than 6%. These values will improve, through further tweaking of the materials and processes. Azzopardi’s analysis suggests that efficiencies in the range 7-10% may already be looking viable… as long as the cells last long enough. This is potentially a problem – it’s been understood for a while that the lifetime of polymer solar cells may well prove to be their undoing. The active materials in polymer solar cells – conjugated polymer semiconductors – are essentially overgrown dyes, and we all know that dyes tend to bleach in the sun. Five years seems to be a minimum lifetime to make this a viable technology, but up to now many laboratory devices have struggled to last more than a few days. Another recent paper, however, gives grounds for more optimism. This paper – High Efficiency Polymer Solar Cells with Long Operating Lifetimes (Advanced Energy Materials 1 p491, 2011), from the Stanford group of Michael McGehee – demonstrates a PCDTBT/PC70BM solar cell with a lifetime of nearly seven years. This doesn’t mean all our problems are solved, though – this device was encapsulated in glass, rather than printed on a flexible plastic sheet. Glass is much better than plastics at keeping harmful oxygen away from the active materials; to reproduce this lifetime in an all-plastic device will need more work to improve the oxygen barrier properties of the module.

How does the cost of a plastic solar cell break down, and what reductions is it realistic to expect? The analysis by Azzopardi and coworkers shows that the cost of the system is dominated by the cost of the modules, and the cost of the modules is dominated by the cost of the materials. The other elements of the system cost will probably continue to decrease anyway, as much of this is shared in common with other types of solar cells. What we don’t know yet is the extent to which the special advantages of plastic solar cells over conventional ones – their lightness and flexibility – can reduce the installation costs. As we’ve been expecting, the cheapness of processing plastic solar cells means that manufacturing costs – including the capital costs of the equipment to make them – are small compared to the cost of materials. The cost of these materials makes up 60-80% of the cost of the modules. Part of this is simply the cost of the semiconducting polymers; these will certainly reduce with time as experience grows at making them at scale. But the surprise for me is the importance of the cost of the substrate, or more accurately the cost of the thin, transparent conducting electrode which coats the substrate – this represents up to half of the total cost of materials. This is going to be a real barrier to the large scale uptake of this technology.
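
Putting those figures together gives a feel for how much hangs on that one component (a back-of-envelope calculation, nothing more):

```python
# If materials are 60-80% of module cost, and the ITO-coated substrate is up to
# half of the materials cost, then the transparent electrode alone accounts for
# roughly 30-40% of the cost of a module.
for materials_share in (0.6, 0.8):
    ito_share = 0.5 * materials_share
    print(f"materials {materials_share:.0%} of module -> ITO substrate up to {ito_share:.0%}")
```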

The transparent electrode currently used is a thin layer of indium tin oxide – ITO. This is a very widely used material in touch screens and liquid crystal displays, and it currently represents the major use of the metal indium, which is rare and expensive. So unless a replacement for ITO can be found, it’s the cost and availability of this material that’s going to limit the use of plastic solar cells. Transparency and electrical conductivity don’t usually go together, so it’s not straightforward to find a substitute. Carbon nanotubes, and more recently graphene, have been suggested, but currently they’re neither good enough by themselves, nor is there a process to make them cheaply at scale (a good summary of the current contenders can be found in Rational Design of Hybrid Graphene Films for High-Performance Transparent Electrodes by Zhu et al, ACS Nano 5 p6472, 2011). So, to make this technology work, much more effort needs to be put into finding a substitute for ITO.

Are you a responsible nanoscientist?

Monday, October 17th, 2011

This is the pre-edited version of a piece which appeared in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be viewed here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, we saw the European Commission recommend a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the UK-based Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through to policy makers to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are themselves happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potentially big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. The uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another code – the UK government’s Universal Ethical Code for Scientists – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals who do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists often feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.