How to think about science studies

I’ve been passing my driving time recently listening to the podcasts of How to Think About Science, an excellent series from the Canadian Broadcasting Corporation. It’s simply a series of long interviews with academics, generally from the field of science studies. I’ve particularly enjoyed the interviews with historian of science Simon Schaffer, sociologists Ulrich Beck and Brian Wynne, science studies guru Bruno Latour, and Evelyn Fox Keller, who has written interesting books about some of the tacit philosophies underlying modern biology. With one or two exceptions, even the interviews with people I find less convincing still provided me with a few thought-provoking insights.

That strange academic interlude, the “science wars”, gets the occasional mention – this was the time when claims from science studies about the importance of social factors in the construction of scientific knowledge provoked a fierce counter-attack from people anxious to defend science against what they saw as an attack on its claims to objective truth. My perception is that the science wars ended in an armistice, though there are undoubtedly some people still holding out in the jungle, unaware that the war is over. Although the series is clearly presented from the science studies side of the argument, most contributors reflect the terms of the peace treaty, accepting the claims of science to be a way of generating perhaps uniquely reliable knowledge, while still insisting on the importance of the social in the way that knowledge is constructed, and criticising inappropriate ways of using scientific or pseudo-scientific arguments, models and metaphors in public discourse.

USA lagging Europe in nanotechnology risk research

How much resource is being devoted to assessing the potential risks of the nanotechnologies that are currently at or close to market? Not nearly enough, say campaigning groups; governments, on the other hand, release impressive-sounding figures for their research spend. Most recently, the USA’s National Nanotechnology Initiative has estimated its 2006 spend on nano-safety research as $68 million, which sounds very impressive. However, according to Andrew Maynard, a leading nano-risk researcher based at the Woodrow Wilson Center in Washington DC, we shouldn’t take this figure at face value.

Maynard comments on the figure on the SafeNano blog, referring to an analysis he recently carried out, described in a news release from the Woodrow Wilson Center’s Project on Emerging Nanotechnologies. It seems that this figure is obtained by adding up all sorts of basic nanotechnology research, some of which might have only tangential relevance to problems of risk. If one applies a tighter definition of research that is either highly relevant to nanotechnology risk – such as a direct toxicology study – or substantially relevant – such as a study of the fate in the body of medical nanoparticles – the numbers fall sharply. Only $13 million of the $68 million was highly relevant to nanotechnology risk, rising to $29 million if the substantially relevant category is included too. This compares unfavourably with European spending, which amounts to $24 million in the highly relevant category alone.

Of course, it isn’t the headline figure that matters; what’s important is whether the research is relevant to the actual and potential risks that are out there. The Project on Emerging Nanotechnologies has done a great service by compiling an international inventory of nanotechnology risk research which allows one to see clearly just what sort of risk research is being funded across the world. It’s clear from this that suggestions that nanotechnology is being commercialised with no risk research at all being done are wide of the mark; what requires further analysis is whether all the right research is being done.

Molecular scale electronics from graphene

The remarkable electronic properties of graphene – single, one-atom-thick sheets of graphite – are highlighted in a paper in this week’s Science magazine, which demonstrates field-effect transistors exploiting quantum dots as small as 10 nm carved out of graphene. The paper is by Manchester University’s Andre Geim, the original discoverer of graphene, together with Kostya Novoselov and other coworkers (only the abstract is available without subscription from the Science website, but the full paper is available from Geim’s website (PDF)).

A quantum dot is simply a nanoscale speck of a conducting or semiconducting material, small enough that its electrons, as quantum particles, behave differently because of the way they are confined. What makes graphene different and interesting is the unusual behaviour its electrons show to start with – as explained in this earlier post, electrons in graphene behave as if they were massless, ultra-relativistic particles. For relatively large quantum dots (greater than 100 nm), the behaviour is similar to that of other quantum dot devices; the device behaves like a so-called single-electron transistor, in which the conductance shows distinct peaks with voltage, reflecting the fact that current is carried in whole numbers of electrons – a phenomenon called Coulomb blockade. It’s at sizes less than 100 nm that the behaviour becomes really interesting – on these size scales quantum confinement becomes important, but rather than an ordered series of permitted energy states, as one would expect for normal electrons, the researchers see behaviour characteristic of quantum chaos. As the size is pushed down even further, the fabrication techniques give less control over the precise shape of the quantum dots, and their behaviour becomes less predictable and less reproducible. Nonetheless, even down to sizes of a few nanometers, the devices show the clean switching behaviour that could make them useful as electronic components.
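To get a feel for why 100 nm marks the crossover, here’s a rough order-of-magnitude estimate of my own, not the paper’s analysis: for graphene’s massless carriers the confinement energy scale is about ħv_F/d, with the Fermi velocity v_F ≈ 10⁶ m/s, and confinement starts to matter once this exceeds the thermal energy kT.

```python
# Back-of-envelope estimate of the confinement energy scale in a graphene
# quantum dot of size d, using Delta_E ~ hbar * v_F / d for Dirac-like
# carriers. This is the standard order-of-magnitude argument, not the
# detailed analysis in the Science paper.

HBAR = 1.0546e-34   # Planck's constant / 2pi, J s
K_B = 1.381e-23     # Boltzmann constant, J/K
EV = 1.602e-19      # joules per electronvolt
V_F = 1.0e6         # Fermi velocity in graphene, m/s

def level_spacing_eV(d_nm):
    """Confinement energy scale hbar*v_F/d for a dot of size d, in eV."""
    return HBAR * V_F / (d_nm * 1e-9) / EV

kT_room = K_B * 300 / EV  # thermal energy at 300 K, ~0.026 eV

for d in (100, 40, 10):
    dE = level_spacing_eV(d)
    print(f"d = {d:3d} nm: Delta_E ~ {dE*1000:5.1f} meV "
          f"({dE/kT_room:.1f} x kT at 300 K)")
```

On this crude estimate the level spacing only comfortably beats room-temperature thermal energy for dots of around 10 nm and below, consistent with the sizes the paper highlights for useful devices.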

For more context, see this Commentary in Science (subscription required), and this BBC news story.

Graphene-based quantum dots. Left: scanning electron micrograph of a single-electron transistor based on a graphene quantum dot. Right: schematic of a hypothetical transistor based on a very small graphene quantum dot. A.K. Geim, University of Manchester, from Science 320 p324 (2008).

Watching an assembler at work

The only software-controlled molecular assembler we know about is the ribosome – the biological machine that reads the sequence of bases on a strand of messenger RNA and, converting this genetic code into a sequence of amino acids, synthesises the protein molecule that corresponds to the gene whose information was transferred by the RNA. An article in this week’s Nature (abstract, subscription required for full paper, see also this editor’s summary) describes a remarkable experimental study of the way the RNA molecule is pulled through the ribosome as each step of its code is read and executed. This experimental tour-de-force of single-molecule biophysics, whose first author is Jin-Der Wen, comes from the groups of Ignacio Tinoco and Carlos Bustamante at Berkeley.

The experiment starts by tethering a strand of RNA between two micron-size polystyrene beads. One bead is held firm on a micropipette, while the other bead is held in an optical trap – the point at which a highly focused laser beam has its maximum intensity. The central part of the RNA molecule is twisted into a single hairpin, and the ribosome binds to the RNA just to one side of this hairpin. As the ribosome reads the RNA molecule, it pulls the hairpin apart, and the resulting lengthening of the RNA strand is directly measured from the change in position of the anchoring bead in its optical trap. What’s seen is a series of steps – the ribosome moves about 2.7 nm in about a tenth of a second, then pauses for a couple of seconds before making another step.

This distance corresponds exactly to the size of the triplet of bases that represents a single character of the genetic code – the codon. What we are seeing, then, is the ribosome pausing on a codon to read it, before pulling the tape through to read the next character. What we don’t see in this experiment, though we know it’s happening, is the addition of a single amino acid to the growing protein chain during this read step. This takes place through the binding, within the ribosome, of a shorter strand of RNA – the transfer RNA, to which the amino acid is attached – to the RNA codon. What the experiment does make clear is that the operation of this machine is by no means mechanical and regular. The times taken for the ribosome to move from the reading position for one codon to the next – the translocation times – are fairly tightly distributed around an average of about 0.08 seconds, but the dwell times on each codon vary from a fraction of a second up to a few seconds. Occasionally the ribosome stops entirely for a few minutes.
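To make the picture concrete, here’s a minimal sketch of what such a stepping record looks like. The 2.7 nm step, the ~0.08 s translocation time and the couple-of-seconds average dwell come from the paper as described above; the exponential dwell-time distribution is my own assumption, chosen only to reproduce the broad spread of pauses the authors report.

```python
# A minimal simulation of the staircase-like trace seen in the experiment:
# the ribosome dwells on a codon for a variable time, then translocates
# ~2.7 nm (one codon) in ~0.08 s, lengthening the tethered RNA strand.

import random

STEP_NM = 2.7          # one codon's worth of RNA pulled through
TRANSLOCATION_S = 0.08 # average time to move between codons (from the paper)
MEAN_DWELL_S = 2.0     # assumed mean pause on each codon

def simulate_trace(n_codons, seed=1):
    """Return (time, extension) pairs for a simulated stepping record."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    trace = [(t, x)]
    for _ in range(n_codons):
        t += rng.expovariate(1.0 / MEAN_DWELL_S)  # dwell: reading the codon
        trace.append((t, x))
        t += TRANSLOCATION_S                      # translocation step
        x += STEP_NM
        trace.append((t, x))
    return trace

for t, x in simulate_trace(5):
    print(f"t = {t:6.2f} s, extension = {x:5.1f} nm")
```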

This experiment is far from the final word on the way ribosomes operate. I can imagine, for example, that people are going to be making strenuous efforts to attach a probe directly to the ribosome, rather than, as was done here, inferring its motion from the location of the end of the RNA strand. But it’s fascinating to have such a direct probe of one of the most central operations of biology. And for those attempting the very ambitious task of creating a synthetic analogue of a ribosome, these insights will be invaluable.

Leading nanotechnologist gets top UK defence science job

It was announced yesterday that the new Chief Scientific Advisor to the UK’s Ministry of Defence is to be Professor Mark Welland. Mark Welland is currently Professor of Nanotechnology and the head of Cambridge University’s Nanoscience Centre. He is one of the pioneers of nanotechnology in the UK; he was, I believe, the first person in the country to build a scanning probe microscope. Most recently he has been in the news for his work with the mobile phone company Nokia, which unveiled its Morph concept phone at Design and the Elastic Mind, an exhibition at New York’s Museum of Modern Art.

How can nanotechnology help solve the world’s water problems?

The lack of clean water for much of the world’s population already causes suffering and premature death for millions of people, and as population pressures increase, climate change starts to bite, and food supplies become tighter (perhaps exacerbated by an ill-considered move to biofuels), these problems will only intensify. It’s possible that nanotechnology may be able to contribute to solving them (see this earlier post, for example). A couple of weeks ago, Nature magazine ran a special issue on water, which included a very helpful review article: Science and technology for water purification in the coming decades. This article (which seems to be available without subscription) is all the more helpful for not focusing specifically on nanotechnology, instead making it clear where nanotechnology could fit into existing technologies to create affordable and workable solutions.

One sometimes hears the criticism that there’s no point worrying about the promise of new nanotechnological solutions when workable solutions are already known but aren’t being implemented, for political or economic reasons. That’s an argument that’s not without force, but the authors do begin to address it by outlining what’s wrong with existing technical solutions: “These treatment methods are often chemically, energetically and operationally intensive, focused on large systems, and thus require considerable infusion of capital, engineering expertise and infrastructure.” Thus we should be looking for decentralised solutions that can be easily, reliably and cheaply installed using local expertise, preferably without the need for large-scale industrial infrastructure.

To start with the problem of sterilising water to kill pathogens: traditional methods rely on chlorine. This isn’t ideal, as some pathogens are remarkably tolerant of it, and it can lead to toxic by-products. Ultra-violet sterilisation, on the other hand, offers a lot of promise – it’s good against bacteria, though less effective against viruses. But in combination with photocatalytic surfaces of titanium dioxide nanoparticles it could be very effective. What is required here is either much cheaper sources of ultraviolet light (which could come from new nanostructured semiconductor light-emitting diodes) or new types of nanoparticles with surfaces excited by longer wavelength light, including sunlight.
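The reason titanium dioxide needs ultraviolet light in the first place comes down to simple photon-energy arithmetic: a photon can only excite the photocatalyst if its energy exceeds the material’s bandgap, about 3.2 eV for anatase TiO2 (a standard literature figure, not one from the article). A quick calculation shows why that rules out most of sunlight:

```python
# Photon-energy arithmetic for photocatalysis: the threshold wavelength
# below which light can excite a photocatalyst is lambda = hc / E_gap.
# The 3.2 eV bandgap for anatase TiO2 is a standard literature value.

H_C_EV_NM = 1239.8  # Planck's constant x speed of light, in eV.nm

def threshold_wavelength_nm(bandgap_eV):
    """Longest wavelength able to excite a photocatalyst of given bandgap."""
    return H_C_EV_NM / bandgap_eV

print(f"TiO2 (3.2 eV): excited only below ~{threshold_wavelength_nm(3.2):.0f} nm (UV)")
# Visible sunlight spans roughly 400-700 nm, so a nanoparticle designed to
# harvest it would need a smaller effective bandgap, e.g.:
print(f"A 2.0 eV material: excited below ~{threshold_wavelength_nm(2.0):.0f} nm (visible)")
```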

Another problem is the removal of contamination by toxic chemicals, which can arise either naturally or through pollution. Problem contaminants include heavy metals, arsenic, pesticide residues and endocrine disrupters; the difficulty is that these can have dangerous effects even at rather low concentrations, which can’t be detected without expensive laboratory-based analysis equipment. Here, methods for robust, low-cost chemical sensing would be very useful – perhaps a combination of molecular recognition elements integrated in nanofluidic devices could do the job.

The reuse of waste water poses hard problems because of the high content of organic matter that needs to be removed, in addition to other contaminants. Membrane bioreactors combine the sorts of microbes exploited in the activated sludge processes of conventional sewage treatment with ultrafiltration through a membrane, to get faster throughputs of waste water. The tighter the pores in this sort of membrane, the more effective it is at removing suspended material; the problem is that such membranes quickly get blocked up. One solution is to line the micro- and nanopores of the membranes with a single layer of hairy molecules – one of the paper’s co-authors, MIT’s Anne Mayes, developed a particularly elegant scheme for doing this, exploiting the self-assembly of comb-shaped copolymers.

Of course, most of the water in the world is salty (97.5%, to be precise), so the ultimate solution to water shortages is desalination. Desalination costs energy – necessarily so, as the second law of thermodynamics puts a lower limit on the energy needed to separate pure water from the higher-entropy solution state. This theoretical limit is 0.7 kWh per cubic meter, and to date the most efficient practical process uses a not unreasonable 4 kWh per cubic meter. Achieving these figures, and pushing them down further, is a matter of membrane engineering – achieving precisely nanostructured pores that resist fouling and yet are mechanically and chemically robust.
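That 0.7 kWh figure can be roughly checked with a back-of-envelope calculation: at vanishing recovery, the minimum work per cubic meter of fresh water equals the osmotic pressure of seawater. Treating seawater as about 0.6 M NaCl and using the ideal van’t Hoff relation – both simplifications of my own, not the review’s – lands in the right ballpark:

```python
# Back-of-envelope check on the thermodynamic limit for desalination.
# At zero recovery, minimum work per m^3 of fresh water = osmotic pressure
# of seawater. We approximate seawater as 0.6 M NaCl and use the ideal
# van't Hoff relation pi = i * c * R * T; both are simplifications.

R = 8.314        # gas constant, J/(mol K)
T = 298.0        # temperature, K
C = 600.0        # NaCl concentration, mol/m^3 (~35 g/L)
I = 2            # van't Hoff factor: NaCl dissociates into two ions

pi_pascal = I * C * R * T        # osmotic pressure, Pa (1 Pa = 1 J/m^3)
kwh_per_m3 = pi_pascal / 3.6e6   # convert J/m^3 to kWh/m^3

print(f"Osmotic pressure ~ {pi_pascal/1e6:.1f} MPa")
print(f"Minimum work ~ {kwh_per_m3:.2f} kWh per cubic meter")
# The ideal-solution assumption overestimates slightly; the accepted
# figure at 25 C and zero recovery is ~0.7 kWh per cubic meter.
```

Real plants recover a substantial fraction of the feed water, which raises the true minimum somewhat, so the practical 4 kWh per cubic meter is within an order of magnitude of the limit – hence the emphasis on incremental membrane engineering rather than any dramatic new principle.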