This essay was first published in the August 2008 issue of Nature Nanotechnology – Nature Nanotechnology 3 448 (2008) (subscription required for full online text).
A water-tight definition of nanotechnology remains elusive, at least if we try to approach the problem on a scientific or technical basis. Perhaps this means we are looking in the wrong place, and we should instead seek a definition that’s essentially sociological. Here’s one candidate: “Nanotechnology is the application of Mode 2 values to the physical sciences”. The jargon here is a reference to the influential 1994 book, “The New Production of Knowledge”, by Michael Gibbons and coworkers. Their argument was that the traditional way in which knowledge is generated – in a disciplinary framework, with a research agenda set from within that discipline – was being increasingly supplemented by a new type of knowledge production which they called “Mode 2”. In Mode 2 knowledge production, they argue, problems are defined from the outset in the context of potential application, they are solved by bringing together transient, transdisciplinary networks, and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability. It’s easy to argue that the way nanotechnology research is developing in countries across the world, in contrast with the traditional disciplines from which it has emerged, like solid state physics and physical chemistry, very much fits this pattern.
Gibbons and his coauthors always argued that what they were doing was simply observing, in a neutral way, how things were moving. But it’s a short journey from “is” to “ought”, and many observers have seized on these observations as a prescription for how science should change. Governments seeking to extract demonstrable and rapid economic returns from their taxpayers’ investments in publicly funded science, and companies and entrepreneurs seeking more opportunities to make money from scientific discoveries, look to this model as a blueprint for reorganising science to better deliver what they want. On the other hand, those arguing for a different relationship between science and society, with more democratic accountability and greater social reflexivity on the part of scientists, see these trends as an opportunity to erase the distinction between “pure” and “applied” science and to press all scientists to give much more explicit consideration to the social context in which they operate.
It’s not surprising that some scientists, for whom the traditional values of science as a source of disinterested and objective knowledge are precious, regard these arguments as assaults by the barbarians at the gates of science. Philip Moriarty made the opposing case very eloquently in an earlier article in Nature Nanotechnology (Reclaiming academia from post-academia, abstract, subscription required for full article), arguing for the importance of “non-instrumental science” in the “post-academic world”. Here he follows John Ziman in contrasting instrumental science, directed to economic or political goals, with non-instrumental science, which, it is argued, has wider benefits to society in creating critical scenarios, promoting rational attitudes and developing a cadre of independent and enlightened experts.
What is striking, though, is that many of the key arguments made against instrumental science actually appeal to instrumental values. Thus, it is possible to make impressive lists of the discoveries made by undirected, investigator-driven science that have led to major social and economic impacts – from the laser to the phenomenon of giant magnetoresistance that has been so important in the development of hard disk technology. This argument, then, is actually not an argument against the ends of instrumental science – it implicitly accepts that the development of economically or socially important products is an important outcome of science. Instead, it’s an argument about the best means by which instrumental science should be organised. The argument is not that science shouldn’t seek to produce direct impacts on society; it is that this goal can sometimes be more effectively, albeit unpredictably, reached by supporting gifted scientists as they follow their own instincts, rather than by attempting to manage science from the top down. Likewise, arguments for the superiority of the “open-source” model of science, in which information is freely exchanged, over proprietary models of knowledge production, in which intellectual property is closely guarded and its originators receive the maximum direct financial reward, don’t fundamentally question the proposition that science should lead to societal benefits; they simply question whether the proprietary model is the best way of delivering them. This argument was perhaps most pointed at the time of the sequencing of the human genome, when a public-sector, open-source project was in direct competition with Craig Venter’s private-sector venture. Defenders of the public-sector project, notably Sir John Sulston, were eloquent and idealistic in its defence, but their argument rested on the conviction that the societal benefits of this research would be maximised if its results remained in the public domain.
The key argument of the Mode 2 proponents is that science needs to be recontextualised – placed into a closer relationship with society. But how this is done is essentially a political question. For those who believe that market mechanisms offer the most reliable way by which societal needs are prioritised and met, the development of more effective paths from science to money-making products and services will be a necessary and sufficient condition for aligning science more closely with society. But those who are less convinced by such market fundamentalism will seek other mechanisms, such as public engagement and direct government action, to create this alignment.
Perhaps the most telling criticism of the “Mode 2” concept is the suggestion that, rather than being a recent development, science carried out in the context of application has actually been the rule for most of its history. Certainly, in the nineteenth century, there was a close, two-way interplay between science and application, well illustrated, for example, by the relationship between the developing science of thermodynamics and the introduction of new and refined heat engines. In this view, the idea of science as autonomous and discipline-based is an anomaly of the peculiar conditions of the second half of the twentieth century, driven by the expansion of higher education, the technology race of the Cold War, and the reflected glory of science’s contribution to Allied victory in the Second World War.
At each point in history, then, the relationship between science and society ends up being renegotiated according to the perceived demands of the time. What is clear, though, is that right from the beginning of modern science there has been this tension about its purpose. After all, for one of the founders of the ideology of the modern scientific project, Francis Bacon, its aims were “an improvement in man’s estate and an enlargement of his power over nature”. These are goals that, I think, many nanotechnologists would still sign up to.