Software control of matter at the atomic and molecular scale

The UK’s physical sciences research council, the EPSRC, has just issued a call for an “ideas factory” with the theme “Software control of matter at the atomic and molecular scale”, a topic proposed by Nottingham University nanophysicist Philip Moriarty. The way these programs work is that 20-30 participants, selected from many different disciplines, spend a week trying to think through new and innovative approaches to a very challenging problem. At the end of the process, it is hoped that some definite research proposals will emerge, and £1.5 million (i.e. not far short of US$ 3 million) has been set aside to fund these. The challenge, as defined by the call, is as follows:

“Can we design and construct a device or scheme that can arrange atoms or molecules according to an arbitrary, user-defined blueprint? This is at the heart of the idea of the software control of matter – the creation, perhaps, of a “matter compiler” which will interpret software instructions to output a macroscopic product in which every atom is precisely placed. Even partial progress towards this goal would significantly open up the range of available functional materials, permitting meta-materials with interesting electronic, optoelectronic, optical and magnetic properties.

One route to this goal might be to take inspiration from 3-d rapid prototyping devices, and conceive of some kind of pick-and-place mechanism operating at the atomic or molecular level, perhaps based on scanning probe techniques. On the other hand, the field of DNA nanotechnology gives us examples of complex structures built by self-assembly, in which the program to guide the construction is implicit within the structure of the building blocks themselves. This problem, then, goes beyond surface chemistry and the physics of self-assembly to some fundamental questions in computer science.

This ideas factory should attract surface physicists and chemists, including specialists in scanning probe and nanorobotic techniques, and those with an interest in self-assembling systems. Theoretical chemists, developmental biologists, and computer scientists, for example those interested in agent-based and evolutionary computing methods and emergent behaviour, will also be able to contribute.”
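
To make the “matter compiler” idea a little more concrete – this is my gloss, not the call’s – here is a minimal toy sketch in Python. Every name in it is hypothetical, and the hard part (the physics and chemistry of actually picking up and placing atoms) is waved away entirely; it shows only the trivial software half of the problem: turning a user-defined blueprint into a naive sequence of pick-and-place instructions for an imagined probe tip.

    # A toy "matter compiler" sketch: purely hypothetical, for illustration only.
    # It turns a user-defined blueprint (species plus target coordinates, in nm)
    # into a naive pick/move/place instruction stream for an imagined probe tip.

    from dataclasses import dataclass

    @dataclass
    class Placement:
        species: str   # chemical symbol of the atom or molecule to place
        x: float       # target coordinates on the work surface, in nanometres
        y: float
        z: float       # height above the surface, in nanometres

    def compile_blueprint(blueprint):
        """Order placements bottom-up, so nothing is deposited onto empty
        space, and emit one pick/move/place triple per building block."""
        instructions = []
        for p in sorted(blueprint, key=lambda p: p.z):
            instructions.append(("PICK", p.species))        # fetch feedstock
            instructions.append(("MOVE", (p.x, p.y, p.z)))  # position the tip
            instructions.append(("PLACE",))                 # deposit the block
        return instructions

    # A two-atom "product", with made-up coordinates:
    blueprint = [Placement("Si", 0.0, 0.0, 0.0), Placement("H", 0.0, 0.0, 0.15)]
    for step in compile_blueprint(blueprint):
        print(step)

The point of the call, of course, is that everything this sketch takes for granted – error correction, thermal motion, the chemistry behind each PLACE – is precisely what we don’t yet know how to do.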

I’d encourage anyone who is eligible to receive EPSRC research funding (broadly speaking, scientists working in UK universities and research institutes) and who is interested in taking part in this event to apply using the form on the EPSRC website. One person who won’t be getting any funding from this is me, because I’ve accepted the post of director of the activity.

Two forthcoming books

I’ve recently been looking over the page proofs of two interesting popular science books which are due to be published soon, both on subjects close to my heart. “The Middle World – the Restless Heart of Reality”, by Mark Haw, is a discursive, largely historical book about Brownian motion. Of all the branches of physics, statistical mechanics is the one that is least well known in the wider world, but its story has both intellectual fascination and real human interest. The phenomenon of Brownian motion is central to understanding the way biology works, and indeed, as I’ve argued at length here and in my own book, learning how to deal with it and how to exploit it is going to be a prerequisite for success in making nanoscale machines and devices. Mark’s book does a nice job of bringing together the historical story, the relevance of Brownian motion to current science in areas like biophysics and soft matter physics, and its future importance in nanotechnology.
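
To see why Brownian motion matters so much at these scales, a back-of-envelope calculation (mine, not from Mark’s book) using the standard Stokes–Einstein relation is instructive:

    # How far does a 10 nm particle diffuse in water in one second?
    # Uses the Stokes-Einstein relation D = kT / (6*pi*eta*r), a standard result.

    import math

    k_B = 1.38e-23   # Boltzmann constant, J/K
    T   = 300.0      # room temperature, K
    eta = 1.0e-3     # viscosity of water, Pa*s
    r   = 10e-9      # particle radius, m (a 10 nm nanoparticle)

    D = k_B * T / (6 * math.pi * eta * r)   # diffusion coefficient, m^2/s
    t = 1.0                                 # time interval, s
    rms = math.sqrt(6 * D * t)              # three-dimensional RMS displacement, m

    print(f"D = {D:.2e} m^2/s")
    print(f"RMS displacement in {t:.0f} s = {rms*1e6:.0f} micrometres")

The answer comes out at roughly ten micrometres – about a thousand times the particle’s own size, every second, for free. That is the environment in which any nanoscale machine has to operate.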

Martyn Amos (who blogs here) has a book called “Genesis Machines: The New Science of Biocomputing” coming out soon. Here the theme is the emerging interaction between computing and biology. This interaction takes a number of forms; the bulk of the book concerns Martyn’s own speciality, the various ways in which the biomolecule DNA can be used to do computations, but this leads on to synthetic biology and the re-engineering of the computing systems of individual cells. To me this is perhaps the most fascinating and potentially important area of science there is at the moment, and this book is an excellent introduction.
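
For readers who haven’t met the idea before, the primitive that DNA computing builds on is Watson–Crick complementarity – A pairs with T, C with G – so that strands can be designed to stick together only in the ways the programmer intends; this is how Adleman’s celebrated 1994 experiment solved a small Hamiltonian path problem. A trivial sketch of the primitive, in Python:

    # Watson-Crick complementarity, the primitive beneath DNA computing.
    # Two strands hybridise when one is the reverse complement of the other.

    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def reverse_complement(strand: str) -> str:
        """Return the strand that pairs with the given one."""
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    def hybridises(s1: str, s2: str) -> bool:
        """True if the two strands are perfect Watson-Crick partners."""
        return s2 == reverse_complement(s1)

    print(reverse_complement("ATGC"))   # GCAT
    print(hybridises("ATGC", "GCAT"))   # True

Real DNA computation, needless to say, involves far more than this – encoding problem instances into sequences, and coping with mismatches and errors – which is exactly what Martyn’s book explains.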

Neither book is out yet, but both can be preordered: The Middle World – the Restless Heart of Reality from Amazon, and Genesis Machines – the New Science of Biocomputing from Amazon UK.

ETC makes the case against nanomedicine

The most vocal and unequivocal opponent of nanotechnology – the ETC group – has turned its attention to nanomedicine, with a new report, Nanotech Rx, taking a sceptical look at the recent shift of emphasis we’ve seen towards medical applications of nanotechnology. The report, though, reads more as a critique of modern medicine in general than as a set of specific points about nanotechnology. Particularly in the context of health in the third world, the main thrust of the case is that enthusiasts of technocentric medicine have systematically underplayed the importance of non-technological factors (hygiene, better food, and so on) in improving general health. As they say, “the global health crisis doesn’t stem from a lack of science innovation or medical technologies; the root problem is poverty and inequality. New medical technologies are irrelevant for poor people if they aren’t accessible or affordable.” However, in an important advance from ETC’s previous blanket opposition to nanotechnology, they do concede that “nanotech R&D related to water is potentially significant for the developing world. Access to clean water could make a greater contribution to global health than any single medical intervention.”

The debate about human enhancement also gets substantial discussion, with a point of view strongly influenced by disability rights activist Gregor Wolbring. (Newcomers to this debate could do a lot worse than to start with the recent Demos pamphlet, Better Humans?, which collects essays by authors from a variety of points of view, including Wolbring himself.) ETC correctly identify the crypto-transhumanist position taken in some recent government publications, and get succinctly to the nub of the matter as follows: “Certain personality traits (e.g., shyness), physical traits (e.g., “average” strength or height), cognitive traits (e.g., “normal” intelligence) will be deemed undesirable and correctable (and gradually unacceptable, not to be tolerated). The line between enhancement and therapy – already blurry – will be completely obliterated.” I agree that there’s a lot to be concerned about here, but the issue as it now stands doesn’t have a lot to do with nanotechnology – current points of controversy include the use of SSRIs to “treat” shyness, and modafinil to allow soldiers to go without sleep. However, in the future nanotechnology certainly will be increasingly important in permitting human enhancement, in areas such as the development of interfaces with the brain and in regenerative medicine, and so it’s not unreasonable to flag the area as one to watch.

Naturally, the evils of big pharma get a lot of play. There are the well-publicised difficulties big pharma seems to have in maintaining its accustomed level of innovation, the large marketing budgets and the concentration on “me-too” drugs for the ailments of the rich west, and the increasing trend to outsource clinical trials to third world countries. Again, these are all very valid concerns, but they don’t seem to have a great deal of direct relevance to nanotechnology.

In the context of the third world, one of the most telling criticisms of the global pharmaceutical industry has been the lack of R&D spend on diseases that affect the poor. Things have recently changed greatly for the better, thanks to Bill and Melinda and their ilk. ETC recognise the importance of public-private partnerships (PPPs) of the kind supported by organisations like the Bill and Melinda Gates Foundation, despite some evident distaste that this money has come from the disproportionately rich. “Ten years ago, there was not a single PPP devoted to the development of “orphan drugs” – medicines to treat diseases with little or no financial profit potential – and today there are more than 63 drug development projects aimed at diseases prevalent in the global South.” As an example of a Gates-supported project, ETC quote one to develop a new synthetic route to the anti-malarial agent artemisinin. This is problematic for ETC, as the project uses synthetic biology, to which ETC is instinctively opposed; yet since artemisinin-based combination treatments seem to be the only effective way of overcoming the problem of drug-resistant malaria, it seems difficult to argue that these treatments shouldn’t be universally available.

The sections of the report that are directly concerned with those areas of nanomedicine currently receiving the most emphasis seem rather weak. The section on the use of nanotechnology for drug delivery discusses only one example, a long way from the clinic, and doesn’t really comment at all on the current big drive to develop new anti-cancer therapies based on nanotechnology. I’m also surprised that ETC don’t talk more about the current hopes for the widespread application of nanotechnology in diagnostics and sensor devices, not least because this raises some important issues about the degree to which diagnosis can simply be equated to the presence or absence of some biochemical marker.

At the end of all this, ETC are still maintaining their demand for a “moratorium on nanotechnology”, though this seems hard to square with statements like: “Nanotech R&D devoted to safe water and sustainable energy could be a more effective investment to address fundamental health issues.” I actually find more to agree with in this report than in previous ETC reports. And yet I’m left with the feeling that, even more than before, ETC have not managed to get to the essence of what makes nanotechnology special.

Is nanoscience different from nanotechnology?

It has now become conventional to distinguish between nanoscience and nanotechnology. One very widely used formulation was introduced by the 2004 Royal Society report, which defined the terms thus:

“Nanoscience is the study of phenomena and manipulation of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. Nanotechnologies are the design, characterisation, production and application of structures, devices and systems by controlling shape and size at nanometre scale.”

This echoed the definitions introduced earlier in the 2003 ESRC report, Social and Economic Challenges of Nanotechnology (PDF), which I co-authored; there we wrote:

“We should distinguish between nanoscience, which is here now and flourishing, and nanotechnology, which is still in its infancy. Nanoscience is a convergence of physics, chemistry, materials science and biology, which deals with the manipulation and characterisation of matter on length scales between the molecular and the micron-size. Nanotechnology is an emerging engineering discipline that applies methods from nanoscience to create usable, marketable, and economically viable products.”

And this formulation was itself derivative; I was certainly strongly influenced at the time by a very similar formulation from George Whitesides.

Despite having played a part in propagating this conventional wisdom, I’m now beginning to wonder how valid or helpful the distinction between nanoscience and nanotechnology actually is. Increasingly, it seems to me that the distinction tends to presuppose a linear model of technology transfer. In this picture, which was very widely held in post-war science policy discussions, we imagine a simple progression from fundamental research, predominantly curiosity-driven, through a process of applied research, by which possible applications of the knowledge derived from fundamental science are explored, to the technological development of these applications into products or industrial processes. What’s wrong with this picture is that it doesn’t really describe how innovations in the history of technology have actually occurred. In many cases, inventions have been put into use well before the science that explains how they work was developed (the steam engine being one of many examples), and in many others it is actually the technology that has facilitated the science.

Meanwhile, the way science and technology is organised has changed greatly from the situation of the 1950s, ’60s and ’70s. At that time, a central role both in the generation of pure science and in its commercialisation was played by the great corporate laboratories, like AT&T’s Bell Labs in the USA, and in the UK the central laboratories of companies like ICI and GEC. For better or worse, these corporate labs have disappeared or been reduced to shadows of their former size, as deregulation and global competition have stripped away the monopoly rents that ultimately financed them. Without the corporate laboratories to broker the process of taking innovation from the laboratory to the factory, we are left with a much more fluid and confusing situation, in which there’s much more pressure on universities to move beyond pure science, to find applications for their research, and to convert this research into intellectual property that can provide future revenue streams. Small research-based companies spring up whose main assets are their intellectual property and the knowledge of their researchers, and bigger companies talk about “open innovation”, in which invention is just another function to be outsourced.

A useful concept for understanding the limitations of the linear model in this new environment is the idea of “mode II knowledge production” (introduced, I believe, by Gibbons et al. in The New Production of Knowledge, Sage, London, 1994). Mode II science would be fundamentally interdisciplinary, and motivated explicitly by applications rather than by the traditional discipline-based criteria of academic interest. These applications don’t necessarily have to be immediately convertible into something marketable; the distinction is that in this kind of science one is motivated not by exploring or explaining some fundamental phenomenon, but by the drive to make some device or gadget that does something interesting (nano-gizmology, as I’ve called this phenomenon in the past).

So in this view, nanotechnology isn’t simply the application of nanoscience. Its definition is as much sociological as scientific. Prompted, perhaps, by observing the material success of many academic biologists who’ve founded companies in the biotech sector, and motivated by changes in academic funding climates and the wider research environment, we’ve seen physicists, chemists and materials scientists taking a much more aggressively application-driven and commercially oriented approach to their science. Or to put it another way, nanotechnology is simply the natural outcome of an outbreak of biology envy amongst physical scientists.

Nanotechnology in the UK – judging the government’s performance

The Royal Society report on nanotechnology – Nanoscience and nanotechnologies: opportunities and uncertainties – was published in 2004, and the government responded to its recommendations early in 2005. At the time, many people were disappointed by the government response (see my commentary here); now the time has come to judge whether the government is meeting its commitments. The body that will make that judgement is the Council for Science and Technology, the government’s highest-level advisory committee, reporting directly to the Prime Minister. The CST Nanotechnology Review is underway, and a public call for evidence is open. Yesterday I attended a seminar in London organised by the working party.

I’ve already written of my disappointment with the government response so far, for example here, so you might think that I’d be confident that this review would be rather critical of the government. However, a close reading of the call for evidence reveals a fine piece of “Yes Minister”-style legerdemain; the review will judge, not whether the government’s response to the Royal Society report was itself adequate, but solely whether the government has met the commitments it made in that response.

One of the main purposes of yesterday’s seminar was to see whether there had been any major new developments in nanotechnology since the publication of the Royal Society report. Some people expressed surprise at how rapid the introduction of nanotechnology into consumer products had been, though as ever it is difficult to judge how many of these applications can truly be described as nanotechnology, and equally how many other applications on the market do involve nanotechnology but don’t advertise the fact. However, one area in which there has been a demonstrable and striking proliferation is nanotechnology road-maps, of which there are now, apparently, a total of seventy-six.