Natural nanomaterials in food – the strange case of foie gras

Here’s a footnote to my commentary on the House of Lords nanofood report and the government response to it. There’s a recommendation (14, para 5.32) that the definition of nanomaterials for regulatory purposes should exclude nanomaterials created from natural food substances, with which the government agrees. I accept that this distinction is a practical necessity, and I would go along with the report’s paragraph 5.31: “We acknowledge that nanomaterials created from naturally-occurring materials may pose a potential risk to human health. However, we also recognise that it is impractical to include all natural nanomaterials present in food under the Novel Foods Regulation, and that many natural nanoscale substances have been consumed for many years with no ill effects reported.”

But I do think it is important to contest the general assertion that things that are natural are by definition harmless. There’s a long tradition of using food processing techniques to render harmless naturally occurring toxins, whether that’s simply the cooking process needed to destroy toxic lectins in kidney beans, or the more elaborate procedures needed to make some tropical tubers, like cassava, safe to eat.

There’s been a recent report of a situation where a potential link between eating naturally formed nanomaterials and disease has been identified. The nanomaterials in question are amyloid fibrils – nanoscale fibrous aggregates of misfolded proteins, of a kind that have been associated with a number of diseases, notably Alzheimer’s disease and Creutzfeldt-Jakob disease (see this earlier post for an overview of the good and bad sides of these materials – Death, life, and amyloids).

In a paper published in Proceedings of the National Academy of Sciences a couple of years ago, Solomon et al (2007) showed that foie gras – the highly fatty livers of force-fed geese – contains nanofibers (amyloid fibrils) of amyloid A protein, which, when fed to mice susceptible to AA amyloidosis, led to the development of that disease.

AA amyloidosis is a chronic, incurable condition often associated with rheumatoid arthritis. The suggestion is that, if AA amyloid fibrils enter the body, they will act as “seeds” to nucleate the conversion of more AA protein into the amyloid form. A more speculative suggestion is that AA fibrils could also nucleate the formation of amyloid fibrils by other susceptible proteins, leading to other kinds of amyloid diseases. The authors of the paper draw the conclusion that “it may be hazardous for individuals who are prone to develop other types of amyloid-associated disorders, e.g., Alzheimer’s disease or type II diabetes, to consume such products” (i.e. ones contaminated with amyloid A protein fibrils). It seems to me that this conclusion stretches what the data currently show, but it’s an area that would bear closer investigation.

Nanotechnologies and food – the UK government responds to the House of Lords

Last week the UK government issued its response to a report on nanotechnologies and food from the House of Lords Select Committee on Science and Technology, which was published on 8 January this year.

The headlines arising from this report concentrated on a perceived lack of openness from the food industry (see for example the BBC’s headline Food industry ‘too secretive’ over nanotechnology), and it’s this aspect that the consumer group Which? concentrates on in its reaction to the response – Government ignores call for openness about nano food. This refers to House of Lords recommendation 10, which calls for the Food Standards Agency to establish a mandatory, but confidential, database of the nanomaterials being researched by the food industry. The government has rejected the proposal that this database should be mandatory, on the grounds that compulsion would drive research away from the UK. However, the government has accepted the recommendation (26) that the FSA maintain a publicly accessible list of food and food packaging products that contain nanomaterials. This will include, as recommended, all materials that have been approved by the European Food Safety Authority, but the FSA will explore including other materials that might be considered to have nanoscale elements, to allow for the uncertainties in the definition of nanomaterials in the food context. Where their Lordships and the government agree (and disagree with Which?) is in rejecting the idea of compulsory labelling of nanomaterials on packaging.

The House of Lords report, together with all the written and oral evidence submitted to the inquiry, can be downloaded from here. For my own written evidence, see here – I mentioned my oral evidence in this blog post from last year.

Responsible innovation still needs innovation

The UK government’s policies for nanotechnology seem to unfold in a predictable and cyclical way – some august body releases a weighty report criticising some aspect of policy, the government responds in a way that disappoints the critics, until the cycle is repeated with another critical report and a further response. The process began with the Royal Society/ Royal Academy of Engineering report in 2004, and several cycles on, last week we saw a new comprehensive government Nanotechnology Strategy launched (downloadable, if you’re lucky, from this somewhat flakey website). One might have thought that this process of dialectic might, by now, have led to a compelling and robust strategy, but that doesn’t seem to be the case here.

The immediate prompt for the strategy this time was the Royal Commission on Environmental Pollution (RCEP) report ‘Novel materials in the environment: the case of nanotechnology’, from 2008 (see Deja view all over again for my view of that report). As its title suggests, that report had much to say about the potential risks posed by nanomaterials in the environment; it also had some rather interesting general points to make about the problems of regulating new technologies in the face of inevitable uncertainty. Unfortunately, it’s the former rather than the latter that dominates the new Nanotechnology Strategy. The government has been criticised repeatedly, ever since the Royal Society/Royal Academy of Engineering report, for its lack of action on the possibility of nanoparticle toxicity, and the strategy is coloured throughout by defensiveness about this issue. Even then, the focus is narrowly on toxicology, missing yet again the important broader issues around life-cycle analysis that will determine the circumstances and extent of potential human exposure to nanomaterials.

Moving to the section on business, the stated aim is to have a transparent, integrated, responsible and skilled nanotechnologies industry. I can’t argue with transparent, responsible and skilled, but I wonder whether there’s an inherent contradiction in the idea of an integrated nanotechnologies industry. Maybe the clue as to why the industry is fragmented is in this phrase; the report talks about nanotechnologies, recognising that there are many technologies contained within this category, and it lists a dozen or more markets and sectors in which these technologies are being applied. Given that both the technologies and the markets are so diverse, why would one expect an integrated industry, or even think that is desirable?

The arm of government charged with promoting technological innovation in business and industry is the Technology Strategy Board (TSB), an agency which has an arm’s-length relationship with its sponsoring department, the Department for Business, Innovation and Skills (BIS). The TSB published its own strategy on nanotechnology last year – Nanoscale Technologies Strategy 2009-2012 (PDF here) – and the discussion in the Nanotechnology Strategy draws extensively on this. It makes clear that the TSB doesn’t really regard nanotechnology as something to be supported in itself; instead, it expects nanotechnology to contribute, where appropriate, to its challenge-led funding programmes – the Fighting Infection through Detection competition is cited as a good example. One very visible funding initiative for which the TSB is responsible, and which is focused on nano- (and micro-) technologies, is the network of MNT capital facilities (though it should be noted that the TSB only inherited this programme, which was initiated by the late Department of Trade and Industry before the TSB was formed). It now seems that these facilities will receive little or no dedicated funding in the future; instead they will have to bid for project funding in open competition. There’s a hint that there might be an exception to this. Nanomedicine is an area identified for future investment, and this comment is tantalisingly juxtaposed with a reference to a forthcoming report to BIS from the prominent venture capitalist Hermann Hauser, which is expected to recommend (in a report due out today) that the government fund a handful of centres for translational research, modelled on the German Fraunhofer Institutes. I think it is fair to say, on the basis of reading this and the TSB Nanoscale Technologies Strategy, that the TSB is at best ambivalent in its belief in a nanotechnology industry, looking instead for important applications of nanotechnology in a whole variety of different areas.

The largest chunk of government funding going to nanotechnology in the UK – probably in the region of £40-50 million a year – comes through the research councils, and here the Nanotechnology Strategy is at its weakest. The lead agency for nanotechnology is the Engineering and Physical Sciences Research Council (EPSRC), and the only initiatives that are mentioned are ones that have already been launched, as part of the minimum fulfilment of the EPSRC’s most recent nanotechnology strategy, published in 2006 (available here as a Word document). It looks like the Research Councils UK priority theme Nanoscale Science: Engineering through Application has run its course, and nanotechnology funding from the research councils in the future will have to come either from standard responsive-mode proposals or as part of the other mission programmes, such as Sustainable Energy Systems, Ageing: lifelong health and wellbeing, or the widely trailed new priority theme Resilient Economy.

Essentially, then, with the exception of a possible new TSB-led initiative in nanomedicine, it looks like there will be no further targeted interventions specifically for nanotechnology in the UK. For this reason, the section in the strategy on public engagement is particularly unsatisfying. A consensus about public engagement with science has been growing in the UK, and it is simply not reflected in this strategy: public engagement mustn’t simply be seen as a way of securing public acquiescence to new technology; instead it should be a genuine dialogue, carried out “upstream” (to use the word introduced in an influential report some years ago), which aims to ensure that innovation is directed at widely accepted societal goals. But without some upstream innovation to engage with, you can’t have upstream engagement.

Feynman, Drexler, and the National Nanotechnology Initiative

It’s fifty years since Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom”, and this anniversary has prompted a number of articles reflecting on its significance. This lecture has achieved mythic importance in discussions of nanotechnology; to many, it is nothing less than the foundation of the field. This myth has been critically examined by Chris Toumey (see this earlier post), who finds that the significance of the lecture is something that’s been attached retrospectively, rather than being apparent as serious efforts in nanotechnology got underway.

There’s another narrative, though, that is popular with followers of Eric Drexler. According to this story, Feynman laid out in his lecture a coherent vision of a radical new technology; Drexler popularised this vision and gave it the name “nanotechnology”. Then, inspired by Drexler’s vision, the US government launched the National Nanotechnology Initiative. This was then hijacked by chemists and materials scientists, whose work had nothing to do with the radical vision. In this way, funding which had been obtained on the basis of the expansive promises of “molecular manufacturing”, the Feynman vision as popularised by Drexler, has been used to research useful but essentially mundane products like stain-resistant trousers and germicidal washing machines. To add insult to injury, the materials scientists who had so successfully hijacked the funds then went on to belittle and ridicule Drexler and his theories. A recent article in the Wall Street Journal by Adam Keiper – “Feynman and the Futurists” – is written from this standpoint, and Drexler himself has expressed satisfaction with it on his own blog. I think this account is misleading at almost every point; the reality is both more complex and more interesting.

To begin with, Feynman’s lecture didn’t present a coherent vision at all; instead it was an imaginative but disparate set of ideas linked only by the idea of control on a small scale. I discussed this in my article in the December issue of Nature Nanotechnology – Feynman’s unfinished business (subscription required), and for more details see this series of earlier posts on Soft Machines (Re-reading Feynman Part 1, Part 2, Part 3).

Of the ideas dealt with in “Plenty of Room”, some have already come to pass and have indeed proved economically and societally transformative. These include the idea of writing on very small scales, which underlies modern IT, and the idea of making layered materials with precisely controlled layer thicknesses on the atomic scale, which was realised in techniques like molecular beam epitaxy and CVD, whose results you see every time you use a white light-emitting diode or a solid state laser of the kind your DVD player contains. I think there were two ideas in the lecture that did contribute to the vision popularised by Drexler – the idea of “a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on”, and, linked to this, the idea of doing chemical synthesis by physical processes. The latter idea has been realised at the proof-of-principle level by carrying out chemical reactions with a scanning tunnelling microscope; there’s been a lot of work in this direction since Don Eigler’s demonstration of STM control of single atoms, no doubt some of it funded by the much-maligned NNI, but I think it’s fair to say this approach has so far turned out to be more technically difficult and less useful (on foreseeable timescales) than people anticipated.

Strangely, I think the second part of the fable – Drexler popularising the Feynman vision – actually underestimates the originality of Drexler’s own contribution. The arguments that Drexler made in support of his radical vision of nanotechnology drew extensively on biology, an area that Feynman had touched on only very superficially. What’s striking if one re-reads Drexler’s original PNAS article and indeed Engines of Creation is how biologically inspired the vision is – the models he looks to are the protein and nucleic acid based machines of cell biology, like the ribosome. In Drexler’s writing now (see, for example, this recent entry on his blog), this biological inspiration is very much to the fore; he’s looking to the DNA-based nanotechnology of Ned Seeman, Paul Rothemund and others as the exemplar of the way forward to fully functional, atomic-scale machines and devices. This work is building on the self-assembly paradigm that has been such a big part of academic work in nanotechnology around the world.

There’s an important missing link between the biological inspiration of ribosomes and molecular motors and the vision of “tiny factories” – the scaled-down mechanical engineering familiar from the simulations of atom-based cogs and gears produced by Drexler and his followers. What wasn’t fully recognised until after Drexler’s original work was that the fundamental operating principles of biological machines are quite different from the rules that govern macroscopic machines, simply because the way physics works in water at the nanoscale is quite different to the way it works in our familiar macroworld. I’ve argued at length on this blog, in my book “Soft Machines”, and elsewhere (see, for example, “Right and Wrong Lessons from Biology”) that this means the lessons one should draw from biological machines should be rather different to the ones Drexler originally drew.

There is one final point that’s worth making. From the perspective of Washington-based writers like Keiper, one can understand that there is a focus on the interactions between academic scientists and business people in the USA, Drexler and his followers, and the machinations of the US Congress. But, from the point of view of the wider world, this is a rather parochial perspective. I’d estimate that somewhere between a quarter and a third of the nanotechnology in the world is being done in the USA. Perhaps for the first time in recent years a major new technology is largely being developed outside the USA, in Europe to some extent, but with an unprecedented leading role being taken in places like China, Korea and Japan. In these places the “nanotech schism” that seems so important in the USA simply isn’t relevant; people are just pressing on to where the technology leads them.

Happy New Year

Here are a couple of nice nano-images for the New Year. The first depicts a nanoscale metal-oxide donut, whose synthesis is reported in a paper (abstract, subscription required for full article) in this week’s Science Magazine. The paper, whose first author is Haralampos Miras, comes from the group of Lee Cronin at the University of Glasgow. The object is made by templated self-assembly of molybdenum oxide units; the interesting feature here is that the cluster which templates the ring – the “hole” around which the donut assembles – appears as a precursor during the process, and is then ejected from the ring once the ring is complete.

A molybdenum oxide nanowheel templated on a transient cluster. From Miras et al, Science 327 p 72 (2010).

The second image depicts the stages in reconstructing a high-resolution electron micrograph of a self-assembled tetrahedron made from DNA. In an earlier blog post I described how Russell Goodman, a grad student in the group of Andrew Turberfield at Oxford, was able to make rigid tetrahedra of DNA less than 10 nm in size. Now, in collaboration with Takayuki Kato and Keiichi Namba’s group at Osaka University, they have been able to obtain remarkable electron micrographs of these structures. The work was published last summer in an article in Nano Letters (subscription required). The figure shows, from left to right, the predicted structure, a raw micrograph obtained from cryo-TEM (transmission electron microscopy on frozen samples), a micrograph processed to enhance its contrast, and two three-dimensional image reconstructions obtained from a large number of such images. The sharpest image, on the right, is at 12 Å resolution; this is believed to be the smallest object, natural or artificial, that has been imaged using cryo-TEM at this resolution, which is good enough to distinguish between the major and minor grooves of the DNA helices that form the struts of the tetrahedron.

Cryo-TEM reconstruction of DNA tetrahedron, from Kato et al., Nano Letters, 9, p2747 (2009)

A happy New Year to all readers.

Why and how should governments fund basic research?

Yesterday I took part in a Policy Lab at the Royal Society, on the theme The public nature of science – Why and how should governments fund basic research? I responded to a presentation by Professor Helga Nowotny, the Vice-President of the European Research Council, saying something like the following:

My apologies to Helga, but my comments are going to be rather UK-centric, though I hope they illustrate some of the wider points she’s made.

This is a febrile time in British science policy.

We have an obsession amongst both the research councils and the HE funding bodies with the idea of impact – how can we define and measure the impact that research has on wider society? While these bodies are at pains to define impact widely, involving better policy outcomes, improvements in quality of life and broader culture, there is much suspicion that all that really counts is economic impact.

We have had a number of years in which the case that science produces direct and measurable effects on economic growth and jobs has been made very strongly, and has been rewarded by sustained increases in public science spending. There is a sense that these arguments are no longer as convincing as they were a few years ago, at least for the people in Treasury who are going to be making the crucial spending decisions at a time of fiscal stringency. As Helga argues, the relationship between economic growth in the short term, at a country level, and spending on scientific R&D is shaky, at best.

And in response to these developments, we have a deep unhappiness amongst the scientific community at what’s perceived as a shift from pure, curiosity driven, blue skies research into research and development.

What should our response to this be?

One response is to up the pressure on scientists to deliver economic benefits. This, to some extent, is what’s happening in the UK. One problem with this approach is that it probably overstates the importance of basic science in the innovation system. Scientists aren’t the only people who are innovators – innovation takes place in industry and in the public sector, and it can involve customers and users too. Maybe our innovation system does need fixing, but it’s not obvious that what needs most attention is what scientists do. But certainly, we should look at ways to open up the laboratory, as Helga puts it, and to look at the broader institutional and educational preconditions that allow science-based innovation to flourish.

Another response is to argue that the products of free scientific inquiry have intrinsic societal worth, and should be supported “as an ornament to civilisation”. Science is like the opera, something we support because we are civilised. One trouble with this argument is that it involves a certain degree of personal taste – I dislike opera greatly, and who’s to say that others won’t have the same feeling about astronomy? A more serious problem is that we don’t actually support the arts that much, in financial terms, in comparison to the science budget. On this argument we’d be employing a lot fewer scientists than we are now (and probably paying them less).

A third response is to emphasise science’s role in solving the problems of society, but emphasising the long-term nature of this project. The idea is to direct science towards broad societal goals. Of course, as soon as one has said this one has to ask “whose goals?” – that’s why public engagement, and indeed politics in the most general sense, becomes important. In Helga’s words, we need to “recontextualise” science for current times. It’s important to stress that, in this kind of “Grand Challenge” driven science, one should specify a problem – not a solution. It is important, as well, to think clearly about different time scales, to put in place possibilities for the long term as well as responding to the short term imperative.

For example, the problem of moving to low-carbon energy sources is top of everyone’s list of grand challenges. We’re seeing some consensus (albeit not a very enthusiastic one) around the immediate need to build new nuclear power stations, to implement carbon capture and storage and to expand wind power, and research is certainly needed to support this, for example to reduce the high cost and energy overheads of carbon capture and storage. But it’s important to recognise that many of these solutions will be at best stop-gap, interim solutions, and to make sure we’re putting the research in place to enable solutions that will be sustainable for the long-term. We don’t know, at the moment, what these solutions will be. Perhaps fusion will finally deliver, maybe a new generation of cellulosic biofuels will have a role, perhaps (as my personal view favours) large scale cheap photovoltaics will be the solution. It’s important to keep the possibilities open.

So, this kind of societally directed, “Grand Challenge”-inspired research isn’t necessarily short-term, applied research, and although the practicalities of production and scale-up need to be integrated at an early stage, it’s not necessarily driven by industry. It needs to preserve a diversity of approaches, to be robust in the face of our inevitable uncertainty.

One of Helga’s contributions to the understanding of modern techno-science has been the idea of “mode II knowledge production”, which she defined in an influential book with Michael Gibbons and others. In this new kind of science, problems are defined from the outset in the context of potential application, they are solved by the bringing together of transient, transdisciplinary networks, and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability.

This idea has been controversial. I think many people accept this represents the direction of travel of recent science. What’s at issue is whether it is a good thing; Helga and her colleagues have been at pains to stress that their work is purely descriptive, and implies no judgement of the desirability of these changes. But many of my colleagues in academic science think they are very definitely undesirable (see my earlier post Mode 2 and its discontents). One interesting point, though, is that in arguing against more directed ways of managing science, many people point to the many very valuable discoveries that have been made serendipitously in the course of undirected, investigator driven research. Examples are manifold, from lasers to giant magneto-resistance, to restrict the examples to physics. It’s worth noting that while this is often made as an argument against so-called “instrumental” science, it actually appeals to instrumental values. If you make this argument, you are already conceding that the purpose of science is to yield progress towards economic or political goals; you are simply arguing about the best way to organise science to achieve those goals.

Not that we should think this new. In the manifestos for modern science written by Francis Bacon, which were so important in defining the mission of this Society at its foundation three hundred and fifty years ago, the goal of science is defined as “an improvement in man’s estate and an enlargement of his power over nature”. This was a very clear contextualisation of science for the seventeenth century; perhaps our recontextualisation of science for the 21st century won’t prove so very different.

Easing the transition to a new solar economy

In the run-up to the Copenhagen conference, a UK broadcaster has been soliciting opinions from scientists in response to the question “Which idea, policy or technology do you think holds the greatest promise or could deliver the greatest benefit for addressing climate change?”. Here’s the answer that my colleague Tony Ryan and I gave.

We think the single most important idea about climate change is the optimistic one, that, given global will and a lot of effort to develop the truly sustainable technologies we need, we could emerge from some difficult years to a much more positive future, in which a stable global population lives prosperously and sustainably, supported by the ample energy resources of the sun.

We know this is possible in principle, because the total energy arriving on the planet every day from the sun far exceeds any projection of what energy we might need, even if the earth’s whole population enjoys the standard of living that we in the developed world take for granted.
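To make that claim concrete, here is a rough back-of-envelope sketch in Python. None of these numbers come from the original piece: the solar constant (about 1360 W/m²), the Earth’s radius (about 6370 km) and a world primary power demand of very roughly 16 TW are standard round figures, used purely for illustration.

```python
import math

# Back-of-envelope comparison of solar input with human energy demand.
# All values are round, illustrative approximations, not figures from the post.
SOLAR_CONSTANT = 1.36e3      # W/m^2, mean solar irradiance above the atmosphere
EARTH_RADIUS = 6.37e6        # m
WORLD_POWER_DEMAND = 1.6e13  # W, very roughly 16 TW of primary power (~2010)

# The Earth intercepts sunlight over its cross-sectional disc.
intercepted_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2  # ~1.7e17 W

ratio = intercepted_power / WORLD_POWER_DEMAND
print(f"Solar power intercepted by the Earth: {intercepted_power:.1e} W")
print(f"Ratio to human primary power demand:  {ratio:,.0f} to 1")
# Of the order of 10,000 times current demand, before any conversion losses.
```

Even allowing generously for atmospheric losses, night, weather and conversion inefficiencies, the margin is several orders of magnitude – which is the point being made above.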

Our problem is that, since the industrial revolution, we have become dependent on energy in a highly concentrated form, from burning fossil fuels. It’s this that has led, not just to our prosperity in the developed world, but to our very ability to feed the world at its current population levels. Before the industrial revolution, the limits on the population were set by the sun and by the productivity of the land; fossil fuels broke that connection (initially through mechanisation and distribution, which led to a small increase in population, but in the last century by allowing us to greatly increase agricultural yields using nitrogen fertilisers made by the highly energy-intensive Haber-Bosch process). Now we see that the last three hundred years have been a historical anomaly, powered by fossil fuels in a way that can’t continue. But we can’t go back to pre-industrial ways without mass starvation and a global disaster.

So the new technologies we need are those that will allow us to collect, concentrate, store and distribute energy derived from the sun with greater efficiency, and on a much bigger scale, than we have at the moment. These will include new types of solar cells that can be made over very much bigger areas – in hectares and square kilometres, rather than the square metres we have now. We’ll need improvements in crops and agricultural technologies allowing us to grow more food, and perhaps to use alternative algal crops in marginal environments for sustainable biofuels, without the need to bring a great deal of extra land into cultivation. And we’ll need new ways of moving energy around and storing it. Working technologies for renewable energy exist now; what’s important to understand is the problem of scale – they simply cannot be deployed on a big enough scale, in a short enough time, to fill our needs and the needs of large, fast-developing countries like India and China for plentiful energy in a concentrated form. That’s why new science and new technology are urgently needed.
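As a hedged illustration of why hectares and square kilometres are the relevant units, here is a continuation of the same kind of estimate; the assumed surface insolation (about 200 W/m², averaged over day, night and weather) and the 15% module efficiency are illustrative assumptions, not figures from the post.

```python
# Illustrative estimate of the photovoltaic area needed to supply ~16 TW.
# Assumed values (not from the post): 200 W/m^2 time-averaged insolation at
# the surface, 15% module efficiency.
WORLD_POWER_DEMAND = 1.6e13  # W
AVG_INSOLATION = 200.0       # W/m^2, averaged over day/night and weather
MODULE_EFFICIENCY = 0.15     # fraction of incident sunlight converted

area_m2 = WORLD_POWER_DEMAND / (AVG_INSOLATION * MODULE_EFFICIENCY)
area_km2 = area_m2 / 1e6
print(f"PV area needed: about {area_km2:,.0f} km^2")
# ~500,000 km^2 -- of the order of the area of France: a fraction of a per cent
# of the Earth's land surface, but vastly more than is installed today.
```

On these rough assumptions the area required is a few tenths of a per cent of the Earth’s land surface – physically plausible, but a scale-up of many orders of magnitude from today’s installed capacity, which is exactly the scale problem described above.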

This development will take time – with will and urgency, perhaps by 2030 we might see significant progress towards a world powered by renewable, sustainable energy. In the meantime, the climate crisis becomes urgent. That’s why we need interim technologies, which already exist in prototype, to allow us to cross the bridge to the new sunshine-powered world. These technologies need development if they aren’t themselves going to store up problems for the future – we need to make carbon capture and storage affordable, to implement a new generation of nuclear power plants that maximise reliability and minimise waste, and to learn how to use the energy we have more efficiently.

The situation we are in is urgent, but not hopeless; there is a positive goal worth striving for. But it will need more than modest lifestyle changes and policy shifts to get there; we need new science and new technology, developed not in the spirit of a naive attempt to implement a “technological fix”, but accompanied by a deep understanding of the world’s social and economic realities.

A crisis of trust?

One sometimes hears it said that there’s a “crisis of trust in science” in the UK, though this seems to be based on impressions rather than evidence. So it’s interesting to see the latest in an annual series of opinion polls comparing the degree of public trust in various professional groups. The polls, carried out by Ipsos MORI, are commissioned by the Royal College of Physicians, who naturally welcome the news that, yet again, doctors are the most trusted profession, with 92% of those polled saying they would trust doctors to tell the truth. But, for all the talk of a crisis of trust in science, scientists as a profession don’t do so badly either, with 70% of respondents trusting scientists to tell the truth. To put this in context, the professions at the bottom of the table – politicians and journalists – are trusted by only 13% and 22% respectively.

The figure below puts this information in some kind of historical context. Since this type of survey began, in 1983, there’s been a remarkable consistency – doctors are at the top of the trust league, journalists and politicians vie for the bottom place, and scientists emerge in the top half. But there does seem to be a small but systematic upward trend in the proportion trusting both doctors and scientists. A headline that would be entirely sustainable on these figures would be “Trust in scientists close to all-time high”.

One wrinkle that it would be interesting to see explored more is the fact that there are some overlapping categories here. Professors score higher than scientists for trust, despite the fact that many scientists are themselves professors (me included). Presumably this reflects the fact that the “scientists” category lumps together those who work directly for government or industry with academic scientists; it’s a reasonable guess that the degree to which the public trusts scientists varies according to who they work for. One feature in this set of figures that does interest me is the relatively high degree of trust attached to civil servants, in comparison to the very low levels of trust in politicians. It seems slightly paradoxical that people trust those who operate the machinery of government more than those elected to oversee it on their behalf, but it does emphasise that there is by no means a generalised crisis of trust in our institutions; instead we see a rather specific failure of trust in politics and journalism, and to a slightly lesser extent business.

Trust in professions in the UK, as revealed by the annual Ipsos MORI survey carried out for the Royal College of Physicians.

Moral hazard and geo-engineering

Over the last year of financial instability, we’ve heard a lot about moral hazard. This term originally arose in the insurance industry; there it refers to the suggestion that if people are insured against some negative outcome, they may be more liable to behave in ways that increase the risk of that outcome arising. So, if your car is insured for all kinds of accident damage, you might be tempted to drive that bit more recklessly, knowing that you won’t have to pay for all the consequences of an accident. In the last year, it’s been all too apparent that the banking system has seen more than its fair share of recklessness, and here the role of moral hazard seems pretty clear – why worry about the possibility of a lucrative bet going sour when you think the taxpayer will bail out your bank if it’s in danger of going under? The importance of the concept of moral hazard in financial matters is obvious, but it may also be useful when we’re thinking about technological choices.

This issue is raised rather clearly in a report released last week by the UK’s national science academy, the Royal Society – Geoengineering the climate: science, governance and uncertainty. This is an excellent report, but judging by the way it’s been covered in the news, it’s in danger of pleasing no-one. Those environmentalists who regard any discussion of geo-engineering as anathema will be dismayed that the idea is gaining traction at all (and this point of view is by no means out of the mainstream, as this commentary from the science editor of the Financial Times shows). Techno-optimists, on the other hand, will be impatient with the obvious serious reservations that the report has about the prospect of geo-engineering. The strongest endorsement of geo-engineering that the report makes is that we should think of it as a plan B, an insurance policy in case serious reductions in CO2 emissions don’t prove possible. But, if investigating geo-engineering is an insurance policy, the report asks, won’t it subject us to the precise problem of moral hazard?

Unquestionably, people unwilling to confront the need for the world to make serious reductions to CO2 emissions will take comfort in the idea that geo-engineering might offer another way of mitigating dangerous climate change; in this sense the parallel with moral hazard in insurance and banking is exact. There are parallels in the potentially catastrophic consequences of this moral hazard, as well. It’s likely that the largest costs won’t fall on the people who benefit most from the behaviour that’s encouraged by the belief that geo-engineering will save them from the worst consequences of their actions. And in the event of the insurance policy being needed, it may not be able to pay out – the geo-engineering methods available may not be sufficient to avert disaster (and, indeed, through unanticipated consequences may make matters worse). On the other hand, the report wonders whether seeing geo-engineering being taken seriously might have the opposite effect – convincing some people that if such drastic measures are being contemplated, then urgent action to reduce emissions really is needed. I can’t say I’m hugely convinced by this last argument.

Mode 2 and its discontents

This essay was first published in the August 2008 issue of Nature Nanotechnology – Nature Nanotechnology 3 448 (2008) (subscription required for full online text).

A water-tight definition of nanotechnology remains elusive, at least if we try to look at the problem on a scientific or technical basis. Perhaps this means we are looking in the wrong place, and we should instead seek a definition that’s essentially sociological. Here’s one candidate: “Nanotechnology is the application of mode 2 values to the physical sciences”. The jargon here is a reference to the influential 1994 book, “The New Production of Knowledge”, by Michael Gibbons and coworkers. Their argument was that the traditional way in which knowledge is generated – in a disciplinary framework, with a research agenda set from within that discipline – was being increasingly supplemented by a new type of knowledge production which they called “Mode 2”. In mode 2 knowledge production, they argue, problems are defined from the outset in the context of potential application, they are solved by the bringing together of transient, transdisciplinary networks, and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability. It’s easy to argue that the difference between nanotechnology research as it is developing in countries across the world and the traditional disciplines from which it has emerged, like solid state physics and physical chemistry, very much fits this pattern.

Gibbons and his coauthors always argued that what they were doing was simply observing, in a neutral way, how things were moving. But it’s a short journey from “is” to “ought”, and many observers have seized on these observations as a prescription for how science should change. Governments seeking to extract demonstrable and rapid economic returns from their tax-payers’ investments in publicly funded science, and companies and entrepreneurs seeking more opportunities to make money from scientific discoveries, look to this model as a blueprint to reorganise science the better to deliver what they want. On the other hand, those arguing for a different relationship between science and society, with more democratic accountability and greater social reflexivity on the part of scientists, see these trends as an opportunity to erase the distinction between “pure” and “applied” science and to press all scientists to take much more explicit consideration of the social context in which they operate.

It’s not surprising that some scientists, for whom the traditional values of science as a source of disinterested and objective knowledge are precious, regard these arguments as assaults by the barbarians at the gates of science. Philip Moriarty made the opposing case very eloquently in an earlier article in Nature Nanotechnology (Reclaiming academia from post-academia, abstract, subscription required for full article), arguing for the importance of “non-instrumental science” in the “post-academic world”. Here he follows John Ziman in contrasting instrumental science, directed to economic or political goals, with non-instrumental science, which, it is argued, has wider benefits to society in creating critical scenarios, promoting rational attitudes and developing a cadre of independent and enlightened experts.

What is striking, though, is that many of the key arguments made against instrumental science actually appeal to instrumental values. Thus, it is possible to make impressive lists of the discoveries made by undirected, investigator driven science that have led to major social and economic impacts – from the laser to the phenomenon of giant magneto-resistance that’s been so important in developing hard disk technology. This argument, then, is actually not an argument against the ends of instrumental science – it implicitly accepts that the development of economically or socially important products is an important outcome of science. Instead, it’s an argument about the best means by which instrumental science should be organised. The argument is not that science shouldn’t seek to produce direct impacts on society. It is that this goal can sometimes be more effectively, albeit unpredictably, reached by supporting gifted scientists as they follow their own instincts, rather than attempting to manage science from the top down. Likewise, arguments about the superiority of the “open-source” model of science, in which information is freely exchanged, to proprietary models of knowledge production in which intellectual property is closely guarded and its originators receive the maximum direct financial reward, don’t fundamentally question the proposition that science should lead to societal benefits; they simply question whether the proprietary model is the best way of delivering them. This argument was perhaps most pointed at the time of the sequencing of the human genome, where a public sector, open source project was in direct competition with Craig Venter’s private sector venture. Defenders of the public sector project, notably Sir John Sulston, were eloquent and idealistic in its defence, but what their argument rested on was the conviction that the societal benefits of this research would be maximised if its results remained in the public domain.

The key argument of the mode 2 proponents is that science needs to be recontextualised – placed into a closer relationship with society. But, how this is done is essentially a political question. For those who believe that market mechanisms offer the most reliable way by which societal needs are prioritised and met, the development of more effective paths from science to money-making products and services will be a necessary and sufficient condition for making science more closely aligned with society. But those who are less convinced by such market fundamentalism will seek other mechanisms, such as public engagement and direct government action, to create this alignment.

Perhaps the most telling criticism of the “Mode 2” concept is the suggestion that, rather than being a recent development, science being carried out in the context of application has actually been the rule for most of its history. Certainly, in the nineteenth century, there was a close, two-way interplay between science and application, well seen, for example, in the relationship between the developing science of thermodynamics and the introduction of new and refined heat engines. In this view, the idea of science as being autonomous and discipline-based is an anomaly of the peculiar conditions of the second half of the twentieth century, driven by the expansion of higher education, the technology race of the cold war and the reflected glory of science’s contribution to allied victory in the second world war.

At each point in history, then, the relationship between science and society ends up being renegotiated according to the perceived demands of the time. What is clear, though, is that right from the beginning of modern science there has been this tension about its purpose. After all, for one of the founders of the ideology of the modern scientific project, Francis Bacon, its aims were “an improvement in man’s estate and an enlargement of his power over nature”. These are goals that, I think, many nanotechnologists would still sign up to.