On pure science, applied science, and technology

It’s conventional wisdom that science is very different from technology, and that it makes sense to distinguish between pure science and applied science. Largely as a result of thinking about nanotechnology (as I discussed a few years ago here and here), I’m no longer confident that there’s such a clean break between science and technology, or, for that matter, between pure and applied science.

Historians of science tell us that the origin of the distinction goes back to the ancient Greeks, who distinguished between episteme, which is probably best translated as natural philosophy, and techne, translated as craft. Our word technology derives from techne, but careful scholars remind us that technology actually refers to writing about craft, rather than to doing the craft itself. They would prefer to call the actual business of making machines and gadgets technique (in the same way as the Germans call it Technik), rather than technology. Of course, for a long time nobody wrote about technique at all, so there was in this literal sense no technology. Craft skills were regarded as secrets, to be handed down in person from master to apprentice, both of whom came from a lower social class than the literate philosophers who considered weightier questions about the nature of reality.

The sixteenth century saw some light being thrown on the mysteries of technique, with books (often beautifully illustrated) being published about topics like machines and metal mining. But one could argue that the biggest change came with the development of what was then called experimental philosophy, which we now see as the beginnings of modern science. The experimental philosophers certainly had to engage with craftsmen and instrument makers to do their experiments, but what was perhaps more important was the need to commit the experimental details to writing, so that their counterparts and correspondents elsewhere in the country or elsewhere in Europe could reliably replicate the experiments. Complex pieces of scientific apparatus, like Robert Boyle’s air pump, were certainly among the most advanced (and expensive) pieces of technology of the day. And, conversely, it’s no accident that James Watt, who more than anyone else made the industrial revolution possible with his improved steam engine, learned his engineering as an instrument maker at the University of Glasgow.

But surely there’s a difference between making a piece of experimental apparatus to help unravel the ultimate nature of reality, and making an engine to pump out a mine? In this view, the aim of science is to understand the ultimate, fundamental nature of reality, while technology seeks merely to alter the world in some way, with its success judged simply by whether it does its intended job. In actuality, the aspect of science as natural philosophy, with its claims to deep understanding of reality, has always coexisted with a much more instrumental kind of science, whose success is judged by the power over nature it gives us (Peter Dear’s book The Intelligibility of Nature is a fascinating reflection on the history of this dual character of science). Even the keenest defenders of science’s claim to make reliable truth-claims about the ultimate nature of reality often resort to entirely instrumental arguments: “if you’re so sceptical about science”, they’ll ask a relativist or social constructionist, “why do you fly in airplanes or use antibiotics?”

It’s certainly true that different branches of science are, to different degrees, applicable to practical problems. But which science is an applied science and which a pure science depends as much on what problems society, at a particular time and in a particular place, needs solving as on the character of the science itself. In the sixteenth and seventeenth centuries astronomy was a strategic subject of huge importance to the growing naval powers of the time, and it was one of the first recipients of large-scale state funding. The late nineteenth and early twentieth centuries were the heyday of chemistry, with new discoveries in explosives, dyes and fertilizers making fortunes and transforming the world only a few years after their discovery in the laboratory. A contrarian might even be tempted to say that “a pure science is an applied science that has outlived its usefulness”.

Another way of seeing the problems with a supposed divide between pure science, applied science and technology is to ask what it is that scientists actually do in their working lives. A scientist building a detector for CERN or writing an image analysis program for some radio astronomy data may be doing the purest of pure science in terms of their goals – understanding particle physics or the distant universe – but what they’re actually doing day to day will look very similar indeed to what their applied scientist counterparts do when designing medical imaging hardware or writing software to interpret CCTV footage for the police. This, of course, is the origin of the argument that we should support pure science for the spin-offs it produces (such as the World Wide Web, as the particle physicists continually remind us). A counter-argument would ask: why not simply get these scientists to work on medical imaging (say) in the first place, rather than looking for practical applications for the technologies they develop in support of their “pure” science? Possible answers might point to the fact that the brightest people are motivated to solve deep problems in a way that might not apply to more immediately practical issues, or that our economic system doesn’t provide reliable returns for the most advanced technology developed on a speculative basis.

If it was ever possible to think that pure science could exist as a separate province from the grubby world of application, like Hesse’s “The Glass Bead Game”, that illusion was shattered in the second world war. The purest of physicists delivered radar and the fission bomb, and in the cold war we emerged into, it seemed that the final destiny of the world would be decided by the atomic physicists. In the west, the implications of this for science policy were set out by Vannevar Bush. Bush, an engineer and perhaps the pre-eminent science administrator of the war, set out the framework for government funding of science in the USA in his report “Science: the endless frontier”.

Bush’s report emphasised, not “pure” research, but “basic” research. The distinction between basic research and applied research was not to be understood in terms of whether it was useful or not, but in terms of the motivations of the people doing it. “Basic research is performed without thought of practical ends” – but those practical ends do, nonetheless, follow (albeit unpredictably), and it’s the job of applied research to fill in the gaps. It had in the past been possible for a country to make technological progress without generating its own basic science (as the USA did in the 19th century) but, Bush asserted, the modern situation was different, and “A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade”.

Bush thus left us with three ideas that form the core of the postwar consensus on science policy. The first was that basic research should be carried out in isolation from thoughts of potential use – that it should result from “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown”. The second was that, even though the scientists who produced this basic knowledge weren’t motivated by practical applications, these applications would follow, by a process in which potential applications were picked out and developed by applied scientists, and then converted into new products and processes by engineers and technologists. This one-way flow of ideas from science into application is what innovation theorists call the linear model of innovation. Bush’s third assertion was that a country that invested in basic science would recoup that investment through capturing the rewards from new technologies.

All three of these assertions have subsequently been extensively criticised, though the basic picture retains a persistent hold on our thinking about science. Perhaps the most influential critique, from the science policy point of view, came in a book by Donald Stokes called Pasteur’s Quadrant. Stokes argued from history that the separation of basic research from thoughts of potential use often didn’t happen; his key example was Louis Pasteur, who created the new field of microbiology in his quest to understand the spoilage of milk and the fermentation of wine. Rather than thinking about a linear continuum between pure and applied research, he thought in terms of two dimensions – the degree to which research was motivated by a quest for fundamental understanding, and the degree to which it was motivated by applications. Some research was driven solely by the quest for understanding, typified by Bohr, while an engineer like Edison typified the search for practical results untainted by any deeper curiosity. But the example of Pasteur showed that the two motivations could coexist. Stokes suggested that research in this “Pasteur’s quadrant” – use-inspired basic research – should be a priority for public support.

Where are we now? The idea of Pasteur’s quadrant underlies the notion of “Grand Challenges”, inspired by societal goals, as an organising principle for publicly supported science. From innovation theory and science and technology studies come new terms and concepts, like technoscience and Mode 2 knowledge production. One might imagine that nobody believes in the linear model any more; it’s widely accepted that technology drives science as often as science drives technology. As David Willetts, the UK’s Science Minister, put it in a speech in July this year, “A very important stimulus for scientific advance is, quite simply, technology. We talk of scientific discovery enabling technical advance, but the process is much more inter-dependent than that.” But the linear model is still deeply ingrained in the way policy makers talk, in phrases like “technology readiness levels” and “pull-through to application”. From a more fundamental point of view, though, there is still a real difference between finding evidence to support a hypothesis and demonstrating that a gadget works. Intervening in nature is a different goal to understanding nature, even though the processes by which we achieve these goals are very much mixed up.