The think tank Demos has released another report on science and public engagement. The Public Value of Science is, in some ways, a follow-up to their earlier pamphlet See-through Science. But whereas the earlier report was rather confident in its diagnosis of the failings of previous attempts to engage the public in science, and in its prescription of a new type of “upstream engagement”, the new report seems much more uncertain in its tone.
On the face of it, this is odd, because the news seems good. There is no evidence of any growing crisis in public confidence in science; on the contrary, the report quotes a recent opinion poll from the UK which found that “86 per cent of people think science ‘makes a good contribution to society’ – up 5 per cent on two years ago”. And the idea of “upstream engagement” is riding high in fashionability, both in government and among the scientific great and good. Nonetheless, there seems to be a nagging worry, a sense that this conversion to real public engagement is only skin deep. It’s true that there’s been some open opposition (for example from Lord Taverne’s organisation, Sense about Science), but this seems to worry Demos less than the feeling that all the attention paid to public engagement still amounts to little more than lip-service, leading to “a well-meaning, professionalised and busy field, propelled along by its own conferences and reports, but never quite impinging on fundamental practices, assumptions and cultures.”
I think they are quite right. The danger they have identified is that all this activity around public engagement still isn’t pulling the levers that need to be pulled to achieve their ambition, which is to steer the direction of the research enterprise itself. The next phase is to work on what they call the “software” of scientific engagement – “the codes, values and norms that govern scientific practice, but which are far harder to access and change.” This is a much more difficult matter than simply setting up a few focus groups and citizens’ juries. In essence, their aim here is to use the input from this kind of deliberative process to redefine the way the scientific community defines “good science”.
This kind of cultural shift isn’t entirely unprecedented. In fact, I’ve argued myself that the rise of nanoscience itself constitutes just such a shift; in this case the definition of good science swung away from testing theories and characterising materials, and towards making widgets or gizmos. But the process of change is difficult, unpredictable and hard to control. It’s not about the Minister for Science issuing a rational order to his obedient research councils; the process is probably closer to the way fashions spread among sub-teenagers. The editors of Nature and Science, like the editors of Smash Hits, might think they have some influence, but they’re at the mercy of the social dynamics of the playground. One obvious difficulty is that the values of the scientific enterprise are now highly globalised. All over the world scientists aspire to publish the same kinds of paper in the same journals, and to be invited to the same conferences. Another difficulty is the sheer self-confidence of the scientific community. Lord Broers’ Reith lectures captured the spirit exactly – paraphrasing Marx, scientists may concede that philosophers and social scientists have done something to understand the world, but scientists and technologists have a deep conviction that it is they who have changed it.
Moving to some more parochial issues, the report identifies some specific barriers that UK scientific politics puts in the way of their vision. The Research Assessment Exercise, which determines the level of baseline research funding in UK universities over a five-year period, operates on a strictly disciplinary basis, using peer review of papers describing original research. There’s been some lip-service paid to the notion that there may be valid outputs that aren’t papers in Physical Review Letters, but I’m not sure many people are going to be willing to gamble on this, and I can’t disagree with Demos’s conclusion that “it reinforces the model of the highly specialised researcher, locked in a cycle of publish-or-perish”. The research councils clearly see some of the problems and are starting some useful initiatives, but they’re hampered by the difficulty the different councils have in working cooperatively. The big picture, though, is that there are precious few career incentives for scientists to divert their efforts in this way, and quite a few significant disincentives.
The big weakness in the Demos analysis, in my view, is its failure to address the power of the market. The authors are very equivocal about the growing emphasis on the commercialisation of university-generated research. Agreeing that in principle this is a good thing, they nonetheless report “growing disquiet among university scientists that the drive for ever closer ties with business is distorting research priorities”, and worry about the effects of this on the openness and integrity of the research process. All these are valid concerns, but what’s missing is a recognition that the market is now the predominant mechanism by which technology impacts on society. Demos says, “We believe everyone should be able to make personal choices in their daily lives that contribute to the common good.” The truth is that, the way society is set up now, what people buy is one of the major ways in which these choices are made. And the messages that people send through the market by these personal choices might well differ from the messages they would send if you asked them directly. If you ask a bunch of young people where they would like to see money spent to develop nanotechnology, they might well answer that they’d like to see it spent on improving the environment and on ending world poverty; but if they then go and spend their money on iPods and personal care products, their votes are effectively cast for quite different priorities.
This isn’t to say that the market is a very efficient way of setting research priorities – far from it. At the moment we have marketing and product development people making more or less informed guesses (which often turn out to be spectacularly inaccurate) about what people are going to want to buy. Meanwhile, researchers are obliged to predict some kind of application for the outcome of their research when they apply for funding, and to do this they end up trying to guess, not so much what the potential markets might be, but what they think will best match the preconceptions of referees and research councils. Somehow the idea that in ten years everyone will want flexible television sets, or personal gene-testing kits, or nutraceutical-laden yoghurts, enters and spreads through the collective mind of the research community like a Pokémon craze. This isn’t to say that these ideas are necessarily wrong; it’s just that the process by which they gain currency is not particularly well controlled or evidence-based. It’s this sort of process that sociologists of science ought to understand, but I’m not convinced they do.