Archive for the ‘Social and economic aspects of nanotechnology’ Category

Why and how should governments fund basic research?

Wednesday, December 2nd, 2009

Yesterday I took part in a Policy Lab at the Royal Society, on the theme The public nature of science – Why and how should governments fund basic research? I responded to a presentation by Professor Helga Nowotny, the Vice-President of the European Research Council, saying something like the following:

My apologies to Helga, but my comments are going to be rather UK-centric, though I hope they illustrate some of the wider points she’s made.

This is a febrile time in British science policy.

We have an obsession amongst both the research councils and the HE funding bodies with the idea of impact – how can we define and measure the impact that research has on wider society? While these bodies are at pains to define impact widely, involving better policy outcomes, improvements in quality of life and broader culture, there is much suspicion that all that really counts is economic impact.

We have had a number of years in which the case that science produces direct and measurable effects on economic growth and jobs has been made very strongly, and has been rewarded by sustained increases in public science spending. There is a sense that these arguments are no longer as convincing as they were a few years ago, at least for the people in Treasury who are going to be making the crucial spending decisions at a time of fiscal stringency. As Helga argues, the relationship between economic growth in the short term, at a country level, and spending on scientific R&D is shaky, at best.

And in response to these developments, we have a deep unhappiness amongst the scientific community at what’s perceived as a shift from pure, curiosity driven, blue skies research into research and development.

What should our response to this be?

One response is to up the pressure on scientists to deliver economic benefits. This, to some extent, is what’s happening in the UK. One problem with this approach is that it probably overstates the importance of basic science in the innovation system. Scientists aren’t the only innovators – innovation takes place in industry and in the public sector, and it can involve customers and users too. Maybe our innovation system does need fixing, but it’s not obvious that what needs most attention is what scientists do. But certainly, we should look at ways to open up the laboratory, as Helga puts it, and at the broader institutional and educational preconditions that allow science-based innovation to flourish.

Another response is to argue that the products of free scientific inquiry have intrinsic societal worth, and should be supported “as an ornament to civilisation”. Science is like the opera, something we support because we are civilised. One trouble with this argument is that it involves a certain degree of personal taste – I dislike opera greatly, and who’s to say that others won’t feel the same about astronomy? A more serious objection is that we don’t actually support the arts that much, in financial terms, in comparison to the science budget; on this argument we’d be employing far fewer scientists than we are now (and probably paying them less).

A third response is to emphasise science’s role in solving the problems of society, but emphasising the long-term nature of this project. The idea is to direct science towards broad societal goals. Of course, as soon as one has said this one has to ask “whose goals?” – that’s why public engagement, and indeed politics in the most general sense, becomes important. In Helga’s words, we need to “recontextualise” science for current times. It’s important to stress that, in this kind of “Grand Challenge” driven science, one should specify a problem – not a solution. It is important, as well, to think clearly about different time scales, to put in place possibilities for the long term as well as responding to the short term imperative.

For example, the problem of moving to low-carbon energy sources is top of everyone’s list of grand challenges. We’re seeing some consensus (albeit not a very enthusiastic one) around the immediate need to build new nuclear power stations, to implement carbon capture and storage and to expand wind power, and research is certainly needed to support this, for example to reduce the high cost and energy overheads of carbon capture and storage. But it’s important to recognise that many of these solutions will be at best stop-gap, interim solutions, and to make sure we’re putting the research in place to enable solutions that will be sustainable for the long-term. We don’t know, at the moment, what these solutions will be. Perhaps fusion will finally deliver, maybe a new generation of cellulosic biofuels will have a role, perhaps (as my personal view favours) large scale cheap photovoltaics will be the solution. It’s important to keep the possibilities open.

So, this kind of societally directed, “Grand Challenge” inspired research isn’t necessarily short-term, applied research, and although the practicalities of production and scale-up need to be integrated at an early stage, it’s not necessarily driven by industry. It needs to preserve a diversity of approaches, to be robust in the face of our inevitable uncertainty.

One of Helga’s contributions to the understanding of modern techno-science has been the idea of “mode II knowledge production”, which she defined in an influential book with Michael Gibbons and others. In this new kind of science, problems are defined from the outset in the context of potential application; they are solved by bringing together transient, transdisciplinary networks; and their outcomes are judged by criteria of quality different from those of pure disciplinary research, including judgements of their likely economic viability or social acceptability.

This idea has been controversial. I think many people accept that this represents the direction of travel of recent science; what’s at issue is whether it is a good thing. Helga and her colleagues have been at pains to stress that their work is purely descriptive, and implies no judgement of the desirability of these changes. But many of my colleagues in academic science think they are very definitely undesirable (see my earlier post Mode 2 and its discontents). One interesting point, though, is that in arguing against more directed ways of managing science, many people point to the many very valuable discoveries that have been made serendipitously in the course of undirected, investigator-driven research. Examples are manifold, from lasers to giant magnetoresistance, to restrict the examples to physics. It’s worth noting, though, that while this is often made as an argument against so-called “instrumental” science, it actually appeals to instrumental values. If you make this argument, you are already conceding that the purpose of science is to yield progress towards economic or political goals; you are simply arguing about the best way to organise science to achieve those goals.

Not that we should think this is new. In the manifestos for modern science written by Francis Bacon, which were so important in defining the mission of this society at its foundation three hundred and fifty years ago, the goal of science is defined as “an improvement in man’s estate and an enlargement of his power over nature”. This was a very clear contextualisation of science for the seventeenth century; perhaps our recontextualisation of science for the twenty-first century won’t prove so very different.

Easing the transition to a new solar economy

Sunday, November 8th, 2009

In the run-up to the Copenhagen conference, a UK broadcaster has been soliciting opinions from scientists in response to the question “Which idea, policy or technology do you think holds the greatest promise or could deliver the greatest benefit for addressing climate change?”. Here’s the answer given by myself and my colleague Tony Ryan.

We think the single most important idea about climate change is the optimistic one, that, given global will and a lot of effort to develop the truly sustainable technologies we need, we could emerge from some difficult years to a much more positive future, in which a stable global population lives prosperously and sustainably, supported by the ample energy resources of the sun.

We know this is possible in principle, because the total energy arriving on the planet every day from the sun far exceeds any projection of what energy we might need, even if the earth’s whole population enjoys the standard of living that we in the developed world take for granted.
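The claim that incoming sunlight dwarfs any projected demand is easy to check with round numbers. A minimal back-of-envelope sketch (the solar constant and the demand figure are my own commonly cited round values, not figures from this post):

```python
import math

# Solar power intercepted by the Earth vs. projected human energy demand.
SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere (commonly cited value)
EARTH_RADIUS = 6.371e6    # m
CROSS_SECTION = math.pi * EARTH_RADIUS**2   # m^2: the disc that intercepts sunlight

intercepted_tw = SOLAR_CONSTANT * CROSS_SECTION / 1e12   # convert W to terawatts
projected_demand_tw = 30.0   # a generous round figure for mid-century global demand

print(f"Sunlight intercepted by the Earth: ~{intercepted_tw:,.0f} TW")
print(f"Ratio to projected demand: ~{intercepted_tw / projected_demand_tw:,.0f}x")
```

Even before allowing for atmospheric losses and the practicalities of collection, the incident power exceeds plausible demand by a factor of thousands, which is the sense in which the resource is “ample”.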

Our problem is that, since the industrial revolution, we have become dependent on energy in a highly concentrated form, from burning fossil fuels. It’s this that has led, not just to our prosperity in the developed world, but to our very ability to feed the world at its current population levels. Before the industrial revolution, the limits on population were set by the sun and by the productivity of the land; fossil fuels broke that connection (initially through mechanisation and distribution, which led to a small increase in population, but in the last century by allowing us to greatly increase agricultural yields using nitrogen fertilizers made by the highly energy-intensive Haber-Bosch process). Now we see that the last three hundred years have been a historical anomaly, powered by fossil fuels in a way that can’t continue. But we can’t go back to pre-industrial ways without mass starvation and a global disaster.

So the new technologies we need are those that will allow us to collect, concentrate, store and distribute energy derived from the sun with greater efficiency, and on a much bigger scale, than we have at the moment. These will include new types of solar cells that can be made in very much bigger areas – in hectares and square kilometres, rather than the square metres we have now. We’ll need improvements in crops and agricultural technologies allowing us to grow more food, and perhaps to use alternative algal crops in marginal environments for sustainable biofuels, without the need to bring a great deal of extra land into cultivation. And we’ll need new ways of moving energy around and storing it. Working technologies for renewable energy exist now; what’s important to understand is the problem of scale – they simply cannot be deployed on a big enough scale, in a short enough time, to meet our needs, and the needs of large, fast-developing countries like India and China, for plentiful energy in a concentrated form. That’s why new science and new technology are urgently needed.

This development will take time – with will and urgency, perhaps by 2030 we might see significant progress to a world powered by renewable, sustainable energy. In the meantime, the climate crisis becomes urgent. That’s why we need interim technologies, which already exist in prototype form, to allow us to cross the bridge to the new sunshine-powered world. These technologies need development if they aren’t themselves to store up problems for the future – we need to make carbon capture and storage affordable, to implement a new generation of nuclear power plants that maximise reliability and minimise waste, and to learn how to use the energy we have more efficiently.

The situation we are in is urgent, but not hopeless; there is a positive goal worth striving for. But it will need more than modest lifestyle changes and policy shifts to get there; we need new science and new technology, developed not in the spirit of a naive attempt to implement a “technological fix”, but accompanied by a deep understanding of the world’s social and economic realities.

A crisis of trust?

Wednesday, September 30th, 2009

One sometimes hears it said that there’s a “crisis of trust in science” in the UK, though this seems to be based on impressions rather than evidence. So it’s interesting to see the latest in an annual series of opinion polls comparing the degree of public trust in various professional groups. The polls, carried out by Ipsos Mori, are commissioned by the Royal College of Physicians, who naturally welcome the news that, yet again, doctors are the most trusted profession, with 92% of those polled saying they would trust doctors to tell the truth. But, for all the talk of a crisis of trust in science, scientists as a profession don’t do so badly, either, with 70% of respondents trusting scientists to tell the truth. To put this in context, the professions at the bottom of the table, politics and journalism, are trusted by only 13% and 22% respectively.

The figure below puts this information in some kind of historical context. Since this type of survey began, in 1983, there’s been a remarkable consistency – doctors are at the top of the trust league, journalists and politicians vie for the bottom place, and scientists emerge in the top half. But there does seem to be a small but systematic upward trend for the proportion trusting both doctors and scientists. A headline that would be entirely sustainable on these figures would be “Trust in scientists close to all time high”.

One wrinkle that it would be interesting to see explored more is the fact that there are some overlapping categories here. Professors score higher than scientists for trust, despite the fact that many scientists are themselves professors (me included). Presumably this reflects the fact that people lump into the “scientists” category those who work directly for government and industry together with academic scientists; it’s a reasonable guess that the degree to which the public trusts scientists varies according to who they work for. One feature in this set of figures that does interest me is the relatively high degree of trust attached to civil servants, in comparison to the very low levels of trust in politicians. It seems slightly paradoxical that people trust those who operate the machinery of government more than those entrusted to oversee it on behalf of the people, but this does emphasise that there is by no means a generalised crisis of trust in our institutions; instead we see a rather specific failure of trust in politics and journalism, and to a slightly lesser extent business.

Trust in professions in the UK, as revealed by the annual Ipsos MORI survey carried out for the Royal College of Physicians.

Moral hazard and geo-engineering

Sunday, September 6th, 2009

Over the last year of financial instability, we’ve heard a lot about moral hazard. This term originally arose in the insurance industry; there it refers to the suggestion that if people are insured against some negative outcome, they may be more liable to behave in ways that increase the risk of that negative outcome arising. So, if your car is insured for all kinds of accident damage, you might be tempted to drive that bit more recklessly, knowing that you won’t have to pay for all the consequences of an accident. In the last year, it’s been all too apparent that the banking system has seen more than its fair share of recklessness, and here the role of moral hazard seems pretty clear – why worry about the possibility of a lucrative bet going sour when you believe that the taxpayer will bail out your bank if it’s in danger of going under? The importance of the concept of moral hazard in financial matters is obvious, but it may also be useful when we’re thinking about technological choices.

This issue is raised rather clearly in a report released last week by the UK’s national science academy, the Royal Society – Geoengineering the climate: science, governance and uncertainty. This is an excellent report, but judging by the way it’s been covered in the news, it’s in danger of pleasing no-one. Those environmentalists who regard any discussion of geo-engineering as anathema will be dismayed that the idea is gaining any traction at all (and this point of view is not at all out of the mainstream, as this commentary from the science editor of the Financial Times shows). Techno-optimists, on the other hand, will be impatient with the obvious serious reservations that the report has about the prospect of geo-engineering. The strongest endorsement of geo-engineering that the report makes is that we should think of it as a plan B, an insurance policy in case serious reductions in CO2 emissions don’t prove possible. But, if investigating geo-engineering is an insurance policy, the report asks, won’t it subject us to the precise problem of moral hazard?

Unquestionably, people unwilling to confront the need for the world to make serious reductions to CO2 emissions will take comfort in the idea that geo-engineering might offer another way of mitigating dangerous climate change; in this sense the parallel with moral hazard in insurance and banking is exact. There are parallels in the potentially catastrophic consequences of this moral hazard, as well. It’s likely that the largest costs won’t fall on the people who benefit most from the behaviour that’s encouraged by the belief that geo-engineering will be able to save them from the worst consequences of their actions. And in the event of the insurance policy being needed, it may not be able to pay out – the geo-engineering methods available may not end up being sufficient to avert disaster (and, indeed, through unanticipated consequences may make matters worse). On the other hand, the report wonders whether seeing geo-engineering taken seriously might have the opposite effect – convincing some people that if such drastic measures are being contemplated, then urgent action to reduce emissions really is needed. I can’t say I’m hugely convinced by this last argument.

Food nanotechnology – their Lordships deliberate

Tuesday, June 30th, 2009

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the branches of Parliament, the UK legislature, with powers to revise and scrutinise legislation and, through its select committees, to hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently, as part of a slightly ramshackle programme of constitutional reform, the influence of the hereditaries has been much reduced, with the majority of the chamber being made up of members appointed for life by the government. These are drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from the democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on their website; this includes submissions from NGOs, Industry Organisations, scientific organisations and individual scientists. There’s a lot of material there, but together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.

Are electric cars the solution?

Tuesday, April 28th, 2009

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed both at buyers and manufacturers. The attractions seem to be obvious – clean, emission free transport, seemingly resolving effortlessly the conflict between people’s desire for personal mobility and our need to move to a lower carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike, of the UK’s Royal Society of Chemistry, poses the problem in numbers.

The first question we have to ask is how the energy efficiency of electric cars compares to that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the much more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this a transmission loss getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied from different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations the situation would be substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction in emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
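The chain of losses behind the electric-car figure multiplies out roughly as follows. The petrol, diesel, and generation efficiencies are those quoted above; the split of the remaining losses into a transmission factor and a battery round-trip factor is my own illustrative assumption, chosen to reproduce the ~31% overall figure rather than taken from Pike’s article:

```python
# Well-to-wheel efficiency comparison, as a product of stage efficiencies.
petrol_efficiency = 0.32   # quoted average for petrol engines
diesel_efficiency = 0.45   # quoted figure for diesel engines

power_station = 0.41       # "a bit more than 40%" generation efficiency
grid_transmission = 0.92   # assumed grid transmission factor (illustrative)
battery_cycle = 0.83       # assumed battery charge/discharge factor (illustrative)

# Overall efficiency is the product of the losses at each stage.
electric_overall = power_station * grid_transmission * battery_cycle
print(f"Electric well-to-wheel efficiency: {electric_overall:.0%}")
```

With these assumed factors the product lands close to the ~31% quoted overall, below both petrol and diesel; the point of the exercise is that any realistic transmission and battery losses eat up the power station’s head start over the petrol engine.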

Pike’s conclusion is that the emphasis on electric cars is misplaced, and the subsidy money would be better off spent on R&D on renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but is necessary to begin the process of changing that. In the meantime, one should be pursuing low carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

Friday, April 17th, 2009

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract, subscription required for full article) marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley, at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one by changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the pore directly, he uses an enzyme to chop a single base off the end of the DNA; as each base goes through the pore, the characteristic current change is sensitive enough to identify its chemical identity. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – this itself is an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract, subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

The Economy of Promises

Sunday, February 8th, 2009

This essay was first published in Nature Nanotechnology 3 p65 (2008), doi:10.1038/nnano.2008.14.

Can nanotechnology cure cancer by 2015? That’s the impression that many people will have taken from the USA’s National Cancer Institute’s Cancer Nanotechnology Plan [1], which begins with the ringing statement “to help meet the Challenge Goal of eliminating suffering and death from cancer by 2015, the National Cancer Institute (NCI) is engaged in a concerted effort to harness the power of nanotechnology to radically change the way we diagnose, treat, and prevent cancer.” No-one doubts that nanotechnology potentially has a great deal to contribute to the struggle against cancer; new sensors promise earlier diagnosis, and new drug delivery systems for chemotherapy offer useful increases in survival rates. But this is a long way from eliminating suffering and death within 7 years. Now, a close textual analysis of the NCI’s document shows that actually there’s no explicit claim that nanotechnology will cure cancer by 2015; the talk is of “challenge goals” and “lowering barriers”. But is it wise to make it so easy to draw this conclusion from a careless reading?

It’s hardly a new insight to observe that the development of nanotechnology has been accompanied by exaggeration and oversold promises (there is, indeed, a comprehensive book documenting this aspect of the subject’s history – Nanohype, by David Berube [2]). It’s tempting for scientists to plead their innocence and try to maintain some distance from this. After all, the origin of the science fiction visions of nanobots and universal assemblers is in fringe movements such as the transhumanists and singularitarians, rather than mainstream nanoscience. And the hucksterism that has gone with some aspects of the business of nanotechnology seems to many scientists a long way from academia. But are scientists completely blameless in the development of an “economy of promises” surrounding nanotechnology?

Of course, the way most people hear about new scientific developments is through the mass media rather than through the scientific literature. The process by which a result from an academic nano-laboratory is turned into an item in the mainstream media naturally emphasises dramatic and newsworthy potential impacts of the research; the road from an academic paper to a press release from a university press office is characterised by a systematic stripping away of cautious language, and a transformation of vague possible future impacts into near-certain outcomes. The key word here is “could” – how often do we read in the press release accompanying a solid, but not revolutionary, paper in Nature or Physical Review Letters that the research “could” lead to revolutionary and radical developments in technology or medicine?

Practical journalism can’t deal with the constant hedging that comes so naturally to scientists, we’re told, so many scientists acquiesce in this process. The chosen “expert” commentators on these stories are often not those with the deepest technical knowledge of issues, but those who combine communication skills with a willingness to press an agenda of superlative technology outcomes.

An odd and unexpected feature of the way the nanotechnology debate has unfolded is that the concern to anticipate societal impacts and consider ethical dimensions of nanotechnology has itself contributed to the climate of heightened expectations. As the philosopher Alfred Nordmann notes in his paper If and then: a critique of speculative nanoethics (PDF) [3], speculations on the ethical and societal implications of the more extreme extrapolations of nanotechnology serve implicitly to give credibility to such visions. If a particular outcome of technology is conceivable and cannot be demonstrated to be contrary to the laws of nature, then we are told it is irresponsible not to consider its possible impacts on society. In this way questions of plausibility or practicality are put aside. In the case of nanotechnology, we have organisations like the Foresight Nanotech Institute and the Centre for Responsible Nanotechnology, whose ostensible purpose is to consider the societal implications of advanced nanotechnology, but which in reality are advocacy organisations for the particular visions of radical nanotechnology originally associated with Eric Drexler. As the field of “nanoethics” grows, and brings in philosophers and social scientists, it’s inevitable that there will be a tendency to give these views more credibility than academic nanoscientists would like.

Scientists, then, can feel a certain powerlessness about the way the more radical visions of nanotechnology have taken root in the public sphere and retain their vigour. It may seem that there’s not a lot scientists can do about the way the media treats science stories; certainly no-one has made much of a media career by underplaying the potential significance of scientific developments. This isn’t to say that, within the constraints of the media, scientists shouldn’t exercise responsibility and integrity. But perhaps the “economy of promises” is embedded more deeply in the scientific enterprise than this.

One class of document that is absolutely predicated on promises is the research proposal. As we see more and more pressure from funding agencies to do research with a potential economic impact, it’s inevitable that scientists will get into the habit of asserting ever more firmly what might be quite tenuous claims that their research will lead to spectacular outcomes. It’s perhaps also understandable that the conflict between this and more traditional academic values might lead to a certain cynicism; scientists have their own ways of justifying their work to themselves, which might mitigate any guilt they might feel about making inflated or unfeasible claims about the ultimate applications of their work. One way of justifying what might seem somewhat reckless claims is the observation that science and technology have indeed produced huge impacts on society and the economy, even if these impacts were unforeseen at the time of the original research work. Thus one might argue to oneself that even though the claims made by researchers individually might be implausible, collectively one might have a great deal more confidence that the research enterprise as a whole will deliver important results.

Thus scientists may not be at all confident that their own work will have a big impact, but are confident that science in general will deliver big benefits. On the other hand, the public have long memories for promises that science and technology have made but failed to deliver (the idea that nuclear power would produce electricity “too cheap to meter” being one of the most notorious). This, if nothing else, suggests that the nanoscience community would do well to be responsible in what they promise.

1. http://nano.cancer.gov/about_alliance/cancer_nanotechnology_plan.asp
2. Berube, D. Nanohype, (Prometheus Books, Amherst NY, 2006)
3. Nordmann, A. If and then: a critique of speculative nanoethics, NanoEthics 1, 31-46 (2007).

Brownian motion and how to run a lottery (or a bank)

Monday, January 26th, 2009

This entry isn’t really about nanotechnology at all; instead it’s a ramble around some mathematics that I find interesting, that suddenly seems to have become all too relevant in the financial crisis we find ourselves in. I don’t claim great expertise in finance, so my apologies in advance for any inaccuracies.

Brownian motion – the continuous random jiggling of nanoscale objects and structures that’s a manifestation of the random nature of heat energy – is a central feature of the nanoscale world, and much of my writing about nanotechnology revolves around how we should do nanoscale engineering in a way that exploits Brownian motion, in the way biology does. In this weekend’s magazine reading, I was struck to see some of the familiar concepts from the mathematics of Brownian motion showing up, not in Nature, but in an article in The Economist’s special section on the future of finance – In Plato’s Cave, which explains how much of the financial mess we find ourselves in derives from the misapplication of these ideas. Here’s my attempt to explain, as simply as possible, the connection.

The motion of a particle undergoing Brownian motion can be described as a random walk, with a succession of steps in random directions. For every step taken in one direction, there’s an equal probability that the particle will go the same distance in the opposite direction, yet on average a particle doing a random walk does make some progress – the average distance gone grows as the square root of the number of steps. To see this for a simple situation, imagine that the particle is moving on a line, in one dimension, and either takes a step of one unit to the right (+1) or one unit to the left (-1), so we can track its progress just by writing down all the steps and adding them up, like this, for example: (+1 -1 +1 …. -1) . After N steps, on average the displacement (i.e. the distance gone, including a sign to indicate the direction) will be zero, but the average magnitude of the distance isn’t zero. To see this, we just look at the square root of the average value of the square of the displacement (since squaring the displacement takes away any negative signs). So we need to expand a product that looks something like (+1 -1 +1 …. -1) x (+1 -1 +1 …. -1). The first term of the first bracket times the first term of the second bracket is always +1 (since we either have +1 x +1 or -1 x -1), and the same is true for all the products of terms in the same position in both brackets. There are N of these, so this part of the product adds up to N. All the other terms in the expansion are one of (+1 x +1), (+1 x -1), (-1 x +1), (-1 x -1), and if the successive steps in the walk really are uncorrelated with each other these occur with equal probability so that on average adding all these up gives us zero. So we find that the mean squared distance gone in N steps is N. Taking the square root of this to get a measure of the average distance gone in N steps, we find this (root mean squared) distance is the square root of N.
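The √N scaling is easy to check numerically. Here is a minimal sketch of the argument above (the function name and sample sizes are my own, purely illustrative): many walks of ±1 steps are simulated, and the root mean squared displacement is compared with √N.

```python
import math
import random

def rms_distance(n_steps, n_walks=2000, seed=1):
    """Root mean squared displacement of a 1-d random walk of +/-1 steps,
    averaged over many independent walks."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walks):
        # Sum of n_steps random +1/-1 steps gives the final displacement.
        position = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total_sq += position ** 2
    return math.sqrt(total_sq / n_walks)

# The RMS distance should track sqrt(N): about 10 for N=100, 20 for N=400.
for n in (100, 400):
    print(n, rms_distance(n), math.sqrt(n))
```

With a few thousand walks the agreement with √N is already at the per-cent level, which is why this scaling is so robust a prediction.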

The connection of these arguments to financial markets is simple. According the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk. So, if you need to calculate what a fair value is for, say, an option to buy this share in a year’s time, you can do this equipped with statistical arguments about the likely movement of a random walk, of the kind I’ve just outlined. It is a smartened-up version of the theory of random walks that I’ve just explained that is the basis of the Black-Scholes model for pricing options, which is what made the huge expansion of trading of complex financial derivatives possible – as the Economist article puts it “The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses… The new model showed how to work out an option price from the known price-behaviour of a share and a bond. … . Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk.”
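To make the connection concrete, here is a hedged sketch of how a random-walk model of a share price lets you price an option by simulation. This is not the Black-Scholes derivation itself, just an illustration: the price is modelled as a lognormal random walk, many end-points are simulated, and the average discounted payoff is compared with the closed-form Black-Scholes result. All the parameter values are invented for illustration.

```python
import math
import random

def bs_call(s, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return s * phi(d1) - k * math.exp(-r * t) * phi(d2)

def mc_call(s, k, r, sigma, t, n=100_000, seed=1):
    """Monte Carlo price: simulate the (lognormal) random-walk end-points
    of the share price and average the discounted option payoff."""
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        s_t = s * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        payoff += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff / n

# Illustrative numbers: share at 100, strike 100, 5% rate, 20% volatility, 1 year.
print(bs_call(100, 100, 0.05, 0.2, 1.0))  # closed form, about 10.45
print(mc_call(100, 100, 0.05, 0.2, 1.0))  # simulation, close to the same value
```

The two numbers agree closely, which is the whole point: confidence that a statistical model of the walk fixes the option price is what let derivatives trading scale up.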

Surely such a simple model can’t apply to a real market? Of course, we can develop more complex models that lift many of the approximations in the simplest theory, but it turns out that some of the key results of the theory remain. The most important result is the basic √N scaling of the expected movement. For example, my simple derivation assumed all steps are the same size – we know that some days, prices rise or fall a lot, sometimes not so much. So what happens if we have a random walk with step sizes that are themselves random. It’s easy to convince oneself that the derivation stays the same, but instead of adding up N occurrences of (-1 x -1) or (+1 x +1) we have N occurrences of (a x a), where the probability that the step size has value a is given by p(a). So we end up with the simple modification that the mean squared distance gone is N times the mean of the square of the step size. So this is a fairly simple modification, which, crucially, doesn’t affect the √N scaling.

But, and this is the big but, there’s a potentially troublesome hidden assumption here, which is that the distribution of step sizes actually has a well defined, well behaved mean squared value. We’d probably guess that the distribution of step sizes looks like a bell shaped curve, centred on zero and getting smaller the further away one gets from the origin. The familiar Gaussian curve fits the bill, and indeed such a curve is characterised by a well defined mean squared value which measures the width of the curve (mathematically, a Gaussian is described by a distribution of step sizes a given by p(a) proportional to exp(-a^2/2s^2), which gives a root mean squared value of step size s). Gaussian curves are very common, for reasons described later, so this all looks very straightforward. But one should be aware that not all bell-shaped curves behave so well. Consider a distribution of step sizes a given by p(a) proportional to 1/(a^2+s^2). This curve (which is known in the trade as a Lorentzian), looks bell shaped and is characterised by a width s. But, when we try to find the average value of the square of the step size, we get an answer that diverges – it’s effectively infinite. The problem is that although the probability of getting a very large step goes to zero as the step size gets larger, it doesn’t go to zero very fast. Rather than the chance of a very large jump becoming exponentially small, as happens for a Gaussian, the chance goes to zero as the inverse square of the step size. This apparently minor difference is enough to completely change the character of the random walk. One needs entirely new mathematics to describe this sort of random walk (which is known as a Lévy flight) – and in particular one ends up with a different scaling of the distance gone with the number of steps.
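The difference is striking in simulation. A sketch: draw a large sample of Gaussian steps and a large sample of Lorentzian (Cauchy) steps of the same nominal width, and compare the sample mean squared step size. The inverse-transform recipe used here for Cauchy samples is standard; the seed and sample size are arbitrary.

```python
import math
import random

def mean_sq(samples):
    """Sample estimate of the mean squared step size."""
    return sum(x * x for x in samples) / len(samples)

rng = random.Random(3)
n = 100_000

# Gaussian steps of width s = 1: the mean squared step settles down to s^2 = 1.
gauss_steps = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Lorentzian (Cauchy) steps of width s = 1, via inverse-transform sampling:
# the sample mean squared step is dominated by rare, enormous jumps and
# never converges, however many samples we take.
cauchy_steps = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

print(mean_sq(gauss_steps))   # close to 1
print(mean_sq(cauchy_steps))  # enormous, and keeps growing with n
```

Re-running with a larger n leaves the Gaussian estimate essentially unchanged while the Cauchy estimate keeps drifting upwards, which is the divergence described above made visible.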

In the jargon, this kind of distribution is known as having a “fat tail”, and it was not factoring in the difference between a fat tailed distribution and a Gaussian or normal distribution that led the banks to so miscalculate their “value at risk”. In the words of the Economist article, the mistake the banks made “was to turn a blind eye to what is known as “tail risk”. Think of the banks’ range of possible daily losses and gains as a distribution. Most of the time you gain a little or lose a little. Occasionally you gain or lose a lot. Very rarely you win or lose a fortune. If you plot these daily movements on a graph, you get the familiar bell-shaped curve of a normal distribution (see chart 4). Typically, a VAR calculation cuts the line at, say, 98% or 99%, and takes that as its measure of extreme losses. However, although the normal distribution closely matches the real world in the middle of the curve, where most of the gains or losses lie, it does not work well at the extreme edges, or “tails”. In markets extreme events are surprisingly common—their tails are “fat”. Benoît Mandelbrot, the mathematician who invented fractal theory, calculated that if the Dow Jones Industrial Average followed a normal distribution, it should have moved by more than 3.4% on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved by more than 4.5% on six days; it did so on 366. It should have moved by more than 7% only once in every 300,000 years; in the 20th century it did so 48 times.”

But why should the experts in the banks have made what seems such an obvious mistake? One possibility goes back to the very reason why the Gaussian, or normal, distribution, is so important and seems so ubiquitous. This comes from a wonderful piece of mathematics called the central limit theorem. This says that if some random variable is made up from the combination of many independent variables, even if those variables aren’t themselves taken from a Gaussian distribution, their sum will be Gaussian in the limit of many variables. So, given that market movements are the sum of the effects of lots of different events, the central limit theorem would tell us to expect the size of the total market movement to be distributed according to a Gaussian, even if the individual events were described by a quite different distribution. The central limit theorem has a few escape clauses, though, and perhaps the most important one arises from the way one approaches the limit of large numbers. Roughly speaking, the distribution converges to a Gaussian in the middle first. So it’s very common to find empirical distributions that look Gaussian enough in the middle, but still have fat tails, and this is exactly the point Mandelbrot is quoted as making about the Dow Jones.
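One can see this “middle-first” convergence directly. A sketch, with all choices (exponential summands, ten per sum, the cut-offs) mine and purely illustrative: sums of ten exponentially distributed variables are standardised, and the fraction falling in the central region is compared with the Gaussian value, as is the fraction in the far upper tail.

```python
import math
import random

rng = random.Random(4)
n_vars, n_trials = 10, 200_000
mean, std = n_vars * 1.0, math.sqrt(n_vars)  # exponential(1): mean = variance = 1

central = tail = 0
for _ in range(n_trials):
    # Standardise the sum of n_vars skewed (exponential) variables.
    z = (sum(rng.expovariate(1.0) for _ in range(n_vars)) - mean) / std
    if abs(z) < 1.0:
        central += 1
    if z > 3.0:
        tail += 1

# Middle of the distribution: already close to the Gaussian value 0.683...
print(central / n_trials)
# ...but the upper tail is still several times fatter than the
# Gaussian value of 0.00135 for P(z > 3).
print(tail / n_trials)
```

Even with only ten summands the centre looks respectably Gaussian, while events beyond three standard deviations remain several times more common than the normal distribution predicts – a toy version of Mandelbrot’s Dow Jones observation.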

The Economist article still leaves me puzzled, though, as everything I’ve been describing has been well known for many years. But maybe well known isn’t the same as widely understood. Just like a lottery, the banks were trading the certainty of many regular small payments against a small probability of making a big payout. But, unlike the lottery, they didn’t get the price right, because they underestimated the probability of making a big loss. And now, their loss becomes the loss of the world’s taxpayers.

Public Engagement and Nanotechnology – the UK experience

Tuesday, January 13th, 2009

What do the public think about nanotechnology? This is a question that has worried scientists and policy makers ever since the subject came to prominence. In the UK, as in other countries, we’ve seen a number of attempts to engage with the public around the subject. This article, written for an edited book about public engagement with science more generally in the UK, attempts to summarise the UK’s experience in this area.

From public understanding to public engagement

Nanotechnology emerged as a focus of public interest and concern in the UK in 2003, prompted, not least, by a high profile intervention on the subject from the Prince of Wales. This was an interesting time in the development of thinking about public engagement with science. A consensus about the philosophy underlying the public understanding of science movement, dating back to the Bodmer report (PDF) in 1985, had begun to unravel. This was prompted, on the one hand, by sustained and influential critique of some of the assumptions underlying PUS from social scientists, particularly from the Lancaster school associated with Brian Wynne. On the other hand, the acrimony surrounding the public debates about agricultural biotechnology and the government’s handling of the bovine spongiform encephalopathy outbreak led many to diagnose a crisis of trust between the public and the world of science and technology.

In response to these difficulties, a rather different view of the way scientists and the public should interact gained currency. According to the critique of Wynne and colleagues, the idea of “Public Understanding of Science” was founded on a “deficit model”, which assumed that the key problem in the relationship between the public and science was an ignorance on the part of the public both of the basic scientific facts and of the fundamental process of science, and if these deficits in knowledge were corrected the deficit in trust would disappear. To Wynne, this was both patronizing, in that it disregarded the many forms of expertise possessed by non-scientists, and highly misleading, in that it neglected the possibility that public concerns about new technologies might revolve around perceptions of the weaknesses of the human institutions that proposed to implement them, and not on technical matters at all.

The proposed remedy for the failings of the deficit model was to move away from an emphasis on promoting the public understanding of science to a more reflexive approach to engaging with the public, with an effort to achieve a real dialogue between the public and the scientific community. Coupled with this was a sense that the place to begin this dialogue was upstream in the innovation process, while there was still scope to steer its direction in ways which had broad public support. These ideas were succinctly summarised in a widely-read pamphlet from the think-tank Demos, “See-through science – why public engagement needs to move upstream ” .

Enter nanotechnology

In response to the growing media profile of nanotechnology, in 2003 the government commissioned the Royal Society and the Royal Academy of Engineering to carry out a wide-ranging study on nanotechnology and the health and safety, environmental, ethical and social issues that might stem from it. The working group included, in addition to distinguished scientists, a philosopher, a social scientist and a representative of an environmental NGO. The process of producing the report itself involved public engagement, with two in-depth workshops exploring the potential hopes and concerns that members of the public might have about nanotechnology.

The report – “Nanoscience and nanotechnologies: opportunities and uncertainties” – was published in 2004, and amongst its recommendations was a whole-hearted endorsement of the upstream public engagement approach: “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.”

Following this recommendation, a number of public engagement activities around nanotechnology have taken place in the UK. Two notable examples were Nanojury UK, a citizens’ jury which took place in Halifax in the summer of 2005, and Nanodialogues, a more substantial project which linked four separate engagement exercises carried out in 2006 and 2007.

Nanojury UK was sponsored jointly by the Cambridge University Nanoscience Centre and Greenpeace UK, with the Guardian as a media partner, and Newcastle University’s Policy, Ethics and Life Sciences Research Centre running the sessions. It was carried out in Halifax over eight evening sessions, with six witnesses drawn from academic science, industry and campaigning groups, considering a wide variety of potential applications of nanotechnology. Nanodialogues took a more focused approach; each of its four exercises, which were described as “experiments”, considered a single aspect or application area of nanotechnology. These included a very concrete example of a proposed use for nanotechnology – a scheme to use nanoparticles to remediate polluted groundwater – and the application of nanoscience in the context of a large corporation.

The Nanotechnology Engagement Group provided a wider forum to consider the lessons to be learnt from these and other public engagement exercises both in the UK and abroad; this reported in the summer of 2007 (the report is available here). This revealed a rather consistent message from public engagement. Broadly speaking, there was considerable excitement from the public about possible beneficial outcomes from nanotechnology, particularly in potential applications such as renewable energy, and medical applications. The more general value of such technologies in promoting jobs and economic growth was also recognised.

There were concerns, too. The questions that have been raised about potential safety and toxicity issues associated with some nanoparticles caused disquiet, and there were more general anxieties (probably not wholly specific to nanotechnology) about who controls and regulates new technology.

Reviewing a number of public engagement activities related to nanotechnology also highlighted some practical and conceptual difficulties. There was sometimes a lack of clarity about the purpose and role of public engagement; this leaves space for the cynical view that such exercises are intended, not to have a real influence on genuinely open decisions, but simply to add a gloss of legitimacy to decisions that have already been made. Related to this is the fact that bodies that might benefit from public engagement may lack institutional capacity and structure to benefit from it.

There are some more practical problems associated with the very idea of moving engagement “upstream” – the further the science is from potential applications, the more difficult it can be to communicate what can be complex issues, whose impact and implications may be subject to considerable disagreement amongst experts.

Connecting public engagement to policy

The big question to be asked about any public engagement exercise is “what difference has it made” – has there been any impact on policy? For this to take place there needs to be careful choice of the subject for the public engagement, as well as commitment and capacity on behalf of the sponsoring body or agency to use the results in a constructive way. A recent example from the Engineering and Physical Science Research Council offers an illuminating case study. Here, a public dialogue on the potential applications of nanotechnology to medicine and healthcare was explicitly coupled to a decision about where to target a research funding initiative, providing valuable insights that had a significant impact on the decision.

The background to this is the development of a new approach to science funding at EPSRC. This is to fund “Grand Challenge” projects, which are large scale, goal-oriented interdisciplinary activities in areas of societal need. As part of the “Nanoscience – engineering through to application” cross council priority area, it was decided to launch a Grand Challenge in the area of applications of nanotechnology to healthcare and medicine. This is a potentially very wide area, so it was felt necessary to narrow the scope of the programme somewhat. The definition of the scope was carried out with the advice of a “Strategic Advisory Team” – an advisory committee with about a dozen experts on nanotechnology, drawn from academia and industry, and including international representation. Inputs to the decision were sought through a wider consultation with academics and potential research “users”, defined here as clinicians and representatives of the pharmaceutical and healthcare industries. This consultation included a “Town Meeting” open to the research and user communities.

This represents a fairly standard approach to soliciting expert opinion for a decision about science funding priorities. In the light of the experience of public engagement in the context of nanotechnology, it would be a natural question to ask whether one should seek public views as well. EPSRC’s Societal Issues Panel – a committee providing high-level advice on the societal and ethical context for the research EPSRC supports – enthusiastically endorsed the proposal that a public engagement exercise on nanotechnology for medicine and healthcare should be commissioned as an explicit part of the consultation leading up to the decision on the scope of the Grand Challenge in nanotechnology for medicine and healthcare.

A public dialogue on nanotechnology for healthcare was accordingly carried out during the Spring of 2008 by BMRB, led by Darren Bhattachary. This took the form of a pair of reconvened workshops in each of four locations – London, Sheffield, Glasgow and Swansea. Each workshop involved 22 lay participants, with care taken to ensure a demographic balance. The workshops were informed by written materials, approved by an expert Steering Committee; there was expert participation in each workshop from both scientists and social scientists. Personnel from the Research Council also attended; this was felt by many participants to be very valuable as a signal of the seriousness with which the organisation took the exercise.

The dialogues produced a number of rich insights that proved very useful in defining the scope of the final call (its report can be found here). In general, there was very strong support for medicine and healthcare as a priority area for the application of nanotechnology, and explicit rejection of an unduly precautionary approach. On the other hand, there were concerns about who benefits from the expenditure of public funds on science, and about issues of risk and the governance of technology. One overarching theme that emerged was a strong preference for new technologies that were felt to empower people to take control of their own health and lives.

One advantage of connecting a public dialogue with a concrete issue of funding priorities is that some very specific potential applications of nanotechnology could be discussed. As a result of the consultation with academics, clinicians and industry representatives, six topics had been identified for consideration. In each case, people at the workshops could identify both positive and negative aspects, but overall some clear preferences emerged. The use of nanotechnology to permit the early diagnosis of disease received strong support, as it was felt that this would provide information that would enable people to make changes to the way they live. The promise of nanotechnology to help treat serious diseases with fewer side effects by more effective targeting of drugs was also received with enthusiasm. On the other hand, the idea of devices that combine the ability to diagnose a condition with the means to treat it, via releasing therapeutic agents, caused some disquiet as being potentially disempowering. Other potential applications of nanotechnology which were less highly prioritised included its use to control pathogens, for example through nanostructured surfaces with intrinsic anti-microbial or anti-viral properties, nanostructured materials to help facilitate regenerative medicine, and the use of nanotechnology to help develop new drugs.

It was always anticipated that the results of this public dialogue would be used in two ways. Their most obvious role was as an input to the final decision on the scope of the Grand Challenge call, together with the outcomes of the consultations with the expert communities. It was the nanotechnology Strategic Advisory Team that made the final recommendation about the call’s scope, and in the event their recommendation was that the call should be in the two areas most favoured in the public dialogue – nanotechnology for early diagnosis and nanotechnology for drug delivery. In addition to this immediate impact, there is an expectation that the projects that are funded through the Grand Challenge should be carried out in a way that reflects these findings.

Public engagement in an evolving science policy landscape

The current interest in public engagement takes place at a time when the science policy landscape is undergoing larger changes, both in the UK and elsewhere in the world. We are seeing considerable pressure from governments for publicly funded science to deliver clearer economic and societal benefits. There is a growing emphasis on goal-oriented, intrinsically interdisciplinary science, with an agenda set by a societal and economic context rather than by an academic discipline – “mode II knowledge production”, in the phrase of Gibbons and his co-workers in their book The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. The “linear model” of innovation – in which pure, academic science, unconstrained by any issues of societal or economic context, is held to lead inexorably through applied science and technological development to new products and services and thus increased prosperity – is widely recognised to be simplistic at best, neglecting the many feedbacks and hybridisations at every stage of this process.

These newer conceptions of “technoscience” or “mode II science” lead to problems of their own. If the agenda of science is to be set by the demands of societal needs, it is important to ask who defines those needs. While it is easy to identify the location of expertise for narrowly constrained areas of science defined by well-established disciplinary boundaries, it is much less easy to see who has the expertise to define the technically possible in strongly multidisciplinary projects. And as the societal and economic context of research becomes more important in making decisions about science priorities, one could ask who it is who will subject the social theories of scientists to critical scrutiny. These are all issues which public engagement could be valuable in resolving.

The enthusiasm for involving the public more closely in decisions about science policy may not be universally shared, however. In some parts of the academic community, it may be perceived as an assault on academic autonomy. Indeed, in the current climate, with demands for science to have greater and more immediate economic impact, an insistence on more public involvement might be taken as part of a two-pronged assault on pure science values. There are some who consider public engagement more generally as incompatible with the principles of representative democracy – in this view the Science Minister is responsible for the science budget and he answers to Parliament, not to a small group of people in a citizens’ jury. Representatives of the traditional media might not always be sympathetic, either, as they might perceive it as their role to be the gatekeepers between the experts and the public. It is also clear that public engagement, done properly, is expensive and time-consuming.

Many of the scientists who have been involved with public engagement, however, have reported that the experience is very positive. In addition to being reminded of the generally high standing of scientists and the scientific enterprise in our society, they are prompted to re-examine unspoken assumptions and clarify their aims and objectives. There are strong arguments that public deliberation and interaction can lead to more robust science policy, particularly in areas that are intrinsically interdisciplinary and explicitly coupled to meeting societal goals. What will be interesting to consider as more experience is gained is whether embedding public engagement more closely in the scientific process actually helps to produce better science.