Archive for the ‘Social and economic aspects of nanotechnology’ Category

Why has the UK given up on nanotechnology?

Sunday, July 17th, 2011

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no actual on-going nanotechnology program. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011-2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience engineering through application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano-Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which was subsequently intended to rise to £200 million. But last July, the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – it seems that the MNT program has been regarded as a cautionary tale of how not to do things, rather than an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few for whom the idea of nanotechnology was their primary loyalty. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped through the gaps.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology is no longer an important category by which science is classified, it doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But there will be opportunities missed to promote interdisciplinary science, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But I worry that it very much does matter that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem. The UK has, for the last thirty years, not only not had an industrial policy to speak of, it has had a policy of not having an industrial policy. But the last three years have revealed the shortcomings of this, as we realise that we aren’t any more going to be able to rely on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.

Three things that Synthetic Biology should learn from Nanotechnology

Friday, April 15th, 2011

I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

1. Mind that metaphor
Metaphors in science are powerful and useful things, but they come with two dangers:
a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules, and expression operating systems. But it is only a metaphor; biology isn’t really digital, and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that the experience of most people of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants, the media demand big and unqualified claims to attract their attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement can have the effect of giving credence to the most speculative possible outcomes.

There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up on unfulfilled promises, and in this environment people are less forgiving of the reality of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for producing a biofuel, a new method of pest control, for example.

3. It’s not about risk, it’s about trust

The regulation of new technologies is focused on controlling risks, and it’s important that we try and identify and control those risks as the technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that conversation turns. But often, it isn’t really risk that is fundamentally worrying people, but trust. In the face of the inevitable uncertainties with new technologies, this makes complete sense. If you can’t be confident in identifying risks in advance, the question you naturally ask is whether the bodies and institutions that are controlling these technologies can be trusted. It must be a priority, then, that we think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly be helpful, but we have to ask whether it is realistic for these principles alone to be maintained in an environment demanding commercial returns from large scale industrial operations.

The next twenty-five years

Sunday, January 2nd, 2011

The Observer ran a feature today collecting predictions for the next twenty five years from commentators about politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, to have computers so powerful that they will surpass human intelligence, and to create a new kind of medicine, operating at the sub-cellular level, that will allow us to abolish ageing and death.

I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

What does it mean to be a responsible nanoscientist?

Saturday, July 31st, 2010

This is the pre-edited version of an article first published in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be found here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, the European Commission recommended a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists themselves are happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. The uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence on those applications that researchers often feel, can limit the usefulness of this approach. Another recently issued code – the UK government’s Universal Ethical Code for Scientists (PDF) – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms with the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals that do science, and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; such scientists will be in a good position to have a positive influence on those institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

Feynman, Drexler, and the National Nanotechnology Initiative

Tuesday, January 12th, 2010

It’s fifty years since Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom”, and this has been the signal for a number of articles reflecting on its significance. This lecture has achieved mythic importance in discussions of nanotechnology; to many, it is nothing less than the foundation of the field. This myth has been critically examined by Chris Toumey (see this earlier post), who finds that the significance of the lecture is something that’s been attached retrospectively, rather than being apparent as serious efforts in nanotechnology got underway.

There’s another narrative, though, that is popular with followers of Eric Drexler. According to this story, Feynman laid out in his lecture a coherent vision of a radical new technology; Drexler popularised this vision and gave it the name “nanotechnology”. Then, inspired by Drexler’s vision, the US government launched the National Nanotechnology Initiative. This was then hijacked by chemists and materials scientists, whose work had nothing to do with the radical vision. In this way, funding which had been obtained on the basis of the expansive promises of “molecular manufacturing”, the Feynman vision as popularized by Drexler, has been used to research useful but essentially mundane products like stain resistant trousers and germicidal washing machines. To add insult to injury, the material scientists who had so successfully hijacked the funds then went on to belittle and ridicule Drexler and his theories. A recent article in the Wall Street Journal – “Feynman and the Futurists” – by Adam Keiper, is written from this standpoint, in a piece that Drexler himself has expressed satisfaction with on his own blog. I think this account is misleading at almost every point; the reality is both more complex and more interesting.

To begin with, Feynman’s lecture didn’t present a coherent vision at all; instead it was an imaginative but disparate set of ideas linked only by the idea of control on a small scale. I discussed this in my article in the December issue of Nature Nanotechnology – Feynman’s unfinished business (subscription required), and for more details see this series of earlier posts on Soft Machines (Re-reading Feynman Part 1, Part 2, Part 3).

Of the ideas dealt with in “Plenty of Room”, some have already come to pass and have indeed proved economically and societally transformative. These include the idea of writing on very small scales, which underlies modern IT, and the idea of making layered materials with precisely controlled layer thicknesses on the atomic scale, which was realised in techniques like molecular beam epitaxy and CVD, whose results you see every time you use a white light-emitting diode or a solid state laser of the kind your DVD player contains. I think there were two ideas in the lecture that did contribute to the vision popularized by Drexler – the idea of “a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on”, and, linked to this, the idea of doing chemical synthesis by physical processes. The latter idea has been realised at proof-of-principle level by doing chemical reactions using a scanning tunnelling microscope; there’s been a lot of work in this direction since Don Eigler’s demonstration of STM control of single atoms, no doubt some of it funded by the much-maligned NNI, but I think it’s fair to say this approach has so far turned out to be more technically difficult and less useful (on foreseeable timescales) than people anticipated.

Strangely, I think the second part of the fable – the part about Drexler popularising the Feynman vision – actually underestimates the originality of Drexler’s own contribution. The arguments that Drexler made in support of his radical vision of nanotechnology drew extensively on biology, an area that Feynman had touched on only very superficially. What’s striking if one re-reads Drexler’s original PNAS article and indeed Engines of Creation is how biologically inspired the vision is – the models he looks to are the protein and nucleic acid based machines of cell biology, like the ribosome. In Drexler’s writing now (see, for example, this recent entry on his blog), this biological inspiration is very much to the fore; he’s looking to the DNA-based nanotechnology of Ned Seeman, Paul Rothemund and others as the exemplar of the way forward to fully functional, atomic scale machines and devices. This work is building on the self-assembly paradigm that has been such a big part of academic work in nanotechnology around the world.

There’s an important missing link between the biological inspiration of ribosomes and molecular motors and the vision of “tiny factories” – the scaled-down mechanical engineering familiar from the simulations of atom-based cogs and gears from Drexler and his followers. What wasn’t fully recognised until after Drexler’s original work was that the fundamental operating principles of biological machines are quite different from the rules that govern macroscopic machines, simply because the way physics works in water at the nanoscale is quite different to the way it works in our familiar macroworld. I’ve argued at length on this blog, in my book “Soft Machines”, and elsewhere (see, for example, “Right and Wrong Lessons from Biology”) that this means the lessons one should draw from biological machines should be rather different to the ones Drexler originally drew.

There is one final point that’s worth making. From the perspective of Washington-based writers like Keiper, one can understand that there is a focus on the interactions between academic scientists and business people in the USA, Drexler and his followers, and the machinations of the US Congress. But, from the point of view of the wider world, this is a rather parochial perspective. I’d estimate that somewhere between a quarter and a third of the nanotechnology in the world is being done in the USA. Perhaps for the first time in recent years a major new technology is largely being developed outside the USA, in Europe to some extent, but with an unprecedented leading role being taken in places like China, Korea and Japan. In these places the “nanotech schism” that seems so important in the USA simply isn’t relevant; people are just pressing on to where the technology leads them.

Why and how should governments fund basic research?

Wednesday, December 2nd, 2009

Yesterday I took part in a Policy Lab at the Royal Society, on the theme The public nature of science – Why and how should governments fund basic research? I responded to a presentation by Professor Helga Nowotny, the Vice-President of the European Research Council, saying something like the following:

My apologies to Helga, but my comments are going to be rather UK-centric, though I hope they illustrate some of the wider points she’s made.

This is a febrile time in British science policy.

We have an obsession amongst both the research councils and the HE funding bodies with the idea of impact – how can we define and measure the impact that research has on wider society? While these bodies are at pains to define impact widely, involving better policy outcomes, improvements in quality of life and broader culture, there is much suspicion that all that really counts is economic impact.

We have had a number of years in which the case that science produces direct and measurable effects on economic growth and jobs has been made very strongly, and has been rewarded by sustained increases in public science spending. There is a sense that these arguments are no longer as convincing as they were a few years ago, at least for the people in Treasury who are going to be making the crucial spending decisions at a time of fiscal stringency. As Helga argues, the relationship between economic growth in the short term, at a country level, and spending on scientific R&D is shaky, at best.

And in response to these developments, we have a deep unhappiness amongst the scientific community at what’s perceived as a shift from pure, curiosity driven, blue skies research into research and development.

What should our response to this be?

One response is to up the pressure on scientists to deliver economic benefits. This, to some extent, is what’s happening in the UK. One problem with this approach is that it probably overstates the importance of basic science in the innovation system. Scientists aren’t the only people who are innovators – innovation takes place in industry, in the public sector, and it can involve customers and users too. Maybe our innovation system does need fixing, but it’s not obvious that what needs most attention is what scientists do. But certainly, we should look at ways to open up the laboratory, as Helga puts it, and to look at the broader institutional and educational preconditions that allow science-based innovation to flourish.

Another response is to argue that the products of free scientific inquiry have intrinsic societal worth, and should be supported “as an ornament to civilisation”. Science is like the opera, something we support because we are civilised. One trouble with this argument is that it involves a certain degree of personal taste – I dislike opera greatly, and who’s to say that others won’t feel the same about astronomy? An even more serious problem is that we don’t actually support the arts that much, in financial terms, in comparison to the science budget. On this argument we’d be employing a lot fewer scientists than we are now (and probably paying them less).

A third response is to emphasise science's role in solving the problems of society, while recognising the long-term nature of this project. The idea is to direct science towards broad societal goals. Of course, as soon as one says this one has to ask "whose goals?" – that's why public engagement, and indeed politics in the most general sense, becomes important. In Helga's words, we need to "recontextualise" science for current times. It's important to stress that, in this kind of "Grand Challenge" driven science, one should specify a problem, not a solution. It is important, too, to think clearly about different timescales, putting in place possibilities for the long term as well as responding to short-term imperatives.

For example, the problem of moving to low-carbon energy sources is at the top of everyone's list of grand challenges. We're seeing some consensus (albeit not a very enthusiastic one) around the immediate need to build new nuclear power stations, to implement carbon capture and storage, and to expand wind power, and research is certainly needed to support this, for example to reduce the high cost and energy overheads of carbon capture and storage. But it's important to recognise that many of these will be, at best, interim solutions, and to make sure we're putting the research in place to enable solutions that will be sustainable for the long term. We don't know, at the moment, what those solutions will be. Perhaps fusion will finally deliver, maybe a new generation of cellulosic biofuels will have a role, perhaps (as my personal view favours) large-scale cheap photovoltaics will be the answer. It's important to keep the possibilities open.

So this kind of societally directed, "Grand Challenge" inspired research isn't necessarily short-term, applied research, and although the practicalities of production and scale-up need to be integrated at an early stage, it's not necessarily driven by industry. It needs to preserve a diversity of approaches, to be robust in the face of our inevitable uncertainty.

One of Helga’s contributions to the understanding of modern techno-science has been the idea of “mode II knowledge production”, which she defined in an influential book with Michael Gibbons and others. In this new kind of science, problems are defined from the outset in the context of potential application; they are solved by bringing together transient, transdisciplinary networks; and their outcomes are judged by different criteria of quality than those of pure disciplinary research, including judgements of their likely economic viability or social acceptability.

This idea has been controversial. I think many people accept that it represents the direction of travel of recent science; what’s at issue is whether it is a good thing. Helga and her colleagues have been at pains to stress that their work is purely descriptive, and implies no judgement of the desirability of these changes. But many of my colleagues in academic science think they are very definitely undesirable (see my earlier post Mode 2 and its discontents). One interesting point, though, is that in arguing against more directed ways of managing science, many people point to the very valuable discoveries that have been made serendipitously in the course of undirected, investigator-driven research. Examples are manifold, from lasers to giant magnetoresistance, to restrict the examples to physics. It’s worth noting, though, that while this is often made as an argument against so-called “instrumental” science, it actually appeals to instrumental values. If you make this argument, you are already conceding that the purpose of science is to yield progress towards economic or political goals; you are simply arguing about the best way to organise science to achieve them.

Not that we should think this new. In Francis Bacon’s manifestos for modern science, which were so important in defining the mission of the Royal Society at its foundation three hundred and fifty years ago, the goal of science is defined as “an improvement in man’s estate and an enlargement of his power over nature”. This was a very clear contextualisation of science for the seventeenth century; perhaps our recontextualisation of science for the 21st century won’t prove so very different.

Easing the transition to a new solar economy

Sunday, November 8th, 2009

In the run-up to the Copenhagen conference, a UK broadcaster has been soliciting opinions from scientists in response to the question “Which idea, policy or technology do you think holds the greatest promise or could deliver the greatest benefit for addressing climate change?”. Here’s the answer given by myself and my colleague Tony Ryan.

We think the single most important idea about climate change is the optimistic one, that, given global will and a lot of effort to develop the truly sustainable technologies we need, we could emerge from some difficult years to a much more positive future, in which a stable global population lives prosperously and sustainably, supported by the ample energy resources of the sun.

We know this is possible in principle, because the total energy arriving on the planet every day from the sun far exceeds any projection of what energy we might need, even if the earth’s whole population enjoys the standard of living that we in the developed world take for granted.

Our problem is that, since the industrial revolution, we have become dependent on energy in a highly concentrated form, from burning fossil fuels. It’s this that has led, not just to our prosperity in the developed world, but to our very ability to feed the world at its current population levels. Before the industrial revolution, the limits on population were set by the sun and by the productivity of the land; fossil fuels broke that connection (initially through mechanisation and distribution, which led to a small increase in population, but in the last century by allowing us to greatly increase agricultural yields using nitrogen fertilisers made by the highly energy-intensive Haber-Bosch process). Now we see that the last three hundred years have been a historical anomaly, powered by fossil fuels in a way that can’t continue. But we can’t go back to pre-industrial ways without mass starvation and global disaster.

So the new technologies we need are those that will allow us to collect, concentrate, store and distribute energy derived from the sun with greater efficiency, and on a much bigger scale, than we can at the moment. These will include new types of solar cells that can be made over very much bigger areas – hectares and square kilometres, rather than the square metres we have now. We’ll need improvements in crops and agricultural technologies, allowing us to grow more food and perhaps to use alternative algal crops in marginal environments for sustainable biofuels, without the need to bring a great deal of extra land into cultivation. And we’ll need new ways of moving energy around and storing it. Working technologies for renewable energy exist now; what’s important to understand is the problem of scale – they simply cannot be deployed on a big enough scale, in a short enough time, to meet our needs, and the needs of large, fast-developing countries like India and China, for plentiful energy in a concentrated form. That’s why new science and new technology are urgently needed to develop them.

This development will take time – with will and urgency, perhaps by 2030 we might see significant progress towards a world powered by renewable, sustainable energy. In the meantime, the climate crisis becomes ever more urgent. That’s why we need interim technologies, already existing in prototype, that will allow us to cross the bridge to the new sunshine-powered world. These technologies need development if they aren’t themselves to store up problems for the future – we need to make carbon capture and storage affordable, to implement a new generation of nuclear power plants that maximise reliability and minimise waste, and to learn how to use the energy we have more efficiently.

The situation we are in is urgent, but not hopeless; there is a positive goal worth striving for. But it will need more than modest lifestyle changes and policy shifts to get there; we need new science and new technology, developed not in the spirit of a naive attempt to implement a “technological fix”, but accompanied by a deep understanding of the world’s social and economic realities.

A crisis of trust?

Wednesday, September 30th, 2009

One sometimes hears it said that there’s a “crisis of trust in science” in the UK, though this seems to be based on impressions rather than evidence. So it’s interesting to see the latest in an annual series of opinion polls comparing the degree of public trust in various professional groups. The polls, carried out by Ipsos MORI, are commissioned by the Royal College of Physicians, who naturally welcome the news that, yet again, doctors are the most trusted profession, with 92% of those polled saying they would trust doctors to tell the truth. But, for all the talk of a crisis of trust in science, scientists as a profession don’t do so badly either, with 70% of respondents trusting scientists to tell the truth. To put this in context, the professions at the bottom of the table, politicians and journalists, are trusted by only 13% and 22% respectively.

The figure below puts this information in some kind of historical context. Since this type of survey began, in 1983, there’s been a remarkable consistency – doctors are at the top of the trust league, journalists and politicians vie for the bottom place, and scientists emerge in the top half. But there does seem to be a small but systematic upward trend for the proportion trusting both doctors and scientists. A headline that would be entirely sustainable on these figures would be “Trust in scientists close to all time high”.

One wrinkle that it would be interesting to see explored further is that there are some overlapping categories here. Professors score higher than scientists for trust, despite the fact that many scientists (me included) are themselves professors. Presumably people lump into the “scientists” category those who work directly for government and industry together with academic scientists; it’s a reasonable guess that the degree to which the public trusts scientists varies according to who they work for. One feature in this set of figures that does interest me is the relatively high degree of trust in civil servants, compared with the very low levels of trust in politicians. It seems slightly paradoxical that people trust those who operate the machinery of government more than those entrusted to oversee it on behalf of the people, but this does emphasise that there is by no means a generalised crisis of trust in our institutions; instead we see a rather specific failure of trust in politics and journalism, and to a slightly lesser extent business.

Trust in professions in the UK, as revealed by the annual Ipsos MORI survey carried out for the Royal College of Physicians.

Moral hazard and geo-engineering

Sunday, September 6th, 2009

Over the last year of financial instability, we’ve heard a lot about moral hazard. The term originally arose in the insurance industry, where it refers to the suggestion that if people are insured against some negative outcome, they may be more liable to behave in ways that make that outcome more likely. So, if your car is insured against all kinds of accident damage, you might be tempted to drive a little more recklessly, knowing that you won’t have to pay for all the consequences of an accident. In the last year, it’s been all too apparent that the banking system has seen more than its fair share of recklessness, and here the role of moral hazard seems pretty clear – why worry about the possibility of a lucrative bet going sour when you think the taxpayer will bail out your bank if it’s in danger of going under? The importance of the concept of moral hazard in financial matters is obvious, but it may also be useful when we’re thinking about technological choices.

This issue is raised rather clearly in a report released last week by the UK’s national science academy, the Royal Society – Geoengineering the climate: science, governance and uncertainty. It is an excellent report, but judging by the way it’s been covered in the news, it’s in danger of pleasing no one. Those environmentalists who regard any discussion of geo-engineering as anathema will be dismayed that the idea is gaining traction at all (and this point of view is by no means out of the mainstream, as this commentary from the science editor of the Financial Times shows). Techno-optimists, on the other hand, will be impatient with the serious reservations the report has about the prospect of geo-engineering. The strongest endorsement it makes is that we should think of geo-engineering as a plan B, an insurance policy in case serious reductions in CO2 emissions don’t prove possible. But if investigating geo-engineering is an insurance policy, the report asks, won’t it expose us to precisely the problem of moral hazard?

Unquestionably, people unwilling to confront the need for the world to make serious reductions to CO2 emissions will take comfort in the idea that geo-engineering might offer another way of mitigating dangerous climate change; in this sense the parallel with moral hazard in insurance and banking is exact. There are parallels in the potential catastrophic consequences of this moral hazard, as well. It’s likely that the largest costs won’t fall on the people who benefit most from the behaviour that’s encouraged by the belief that geo-engineering will be able to save them from the worst consequences of their actions. And in the event of the insurance policy being needed, it may not be able to pay out – the geo-engineering methods available may not end up being sufficient to avert disaster (and, indeed, through unanticipated consequences may make matters worse). On the other hand, the report wonders whether seeing geo-engineering being taken seriously might have the opposite effect – convincing some people that if such drastic measures are being contemplated, then urgent action to reduce emissions really is needed. I can’t say I’m hugely convinced by this last argument.

Food nanotechnology – their Lordships deliberate

Tuesday, June 30th, 2009

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the branches of Parliament, the UK legislature, with powers to revise and scrutinise legislation and, through its select committees, to hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently, as part of a slightly ramshackle program of constitutional reform, the influence of the hereditaries has been much reduced, with the majority of the chamber now made up of members appointed for life by the government, drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from the democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on its website; this includes submissions from NGOs, industry organisations, scientific organisations and individual scientists. There’s a lot of material there, but taken together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.