Aliens from inner space? The strange story of the “nanobacteria” that probably weren’t.

How small are the smallest living organisms? Many types of bacteria are 300 nm and upwards in diameter, but a common rule of thumb among microbiologists is that if something can get through a 0.2 µm (200 nm) filter, it isn’t alive. Thus the discovery of so-called “nanobacteria”, with sizes between 50 nm and 200 nm, in the human bloodstream, and their putative association with a growing number of pathological conditions such as kidney stones and coronary artery disease, has been controversial. Finnish scientist Olavi Kajander, the discoverer of “nanobacteria”, presents the evidence that these objects are a hitherto undiscovered form of bacterial life in a contribution to a 1999 National Academies workshop on the size limits of very small organisms. But two recent papers give strong evidence that “nanobacteria” are simply naturally formed inorganic nanoparticles.

In the first of these papers, Nanobacteria Are Mineralo Fetuin Complexes, in the February 2008 issue of PLoS Pathogens, Didier Raoult, Patricio Renesto and their coworkers from Marseilles report a comprehensive analysis of “nanobacteria” cultured in calf serum. Their results show that “nanobacteria” are nanoparticles, predominantly of the mineral hydroxyapatite, associated with proteins, particularly a serum protein called fetuin. Crucially, though, they failed to find definitive evidence that the “nanobacteria” contained any DNA. In the absence of DNA, these objects cannot be bacteria. Instead, these authors say they are “self-propagating mineral-fetuin complexes that we propose to call ‘nanons’”.

A more recent article, in the April 8, 2008 edition of PNAS, Purported nanobacteria in human blood as calcium carbonate nanoparticles (abstract, subscription required for full article), casts further doubt on the nanobacteria hypothesis. These authors, Jan Martel and John Ding-E Young, from Chang Gung University in Taiwan and Rockefeller University, claim to be able to reproduce nanoparticles indistinguishable from “nanobacteria” simply by combining chemicals which precipitate calcium carbonate – chalk – in cell culture medium. Some added human serum is needed in the medium, suggesting that blood proteins are required to produce the characteristic “nanobacteria” morphology rather than a more conventional crystal form.

So, it seems the case is closed… “nanobacteria” are nothing more than naturally occurring, inorganic nanoparticles, in which the precipitation and growth of simple inorganic compounds such as calcium carbonate is modified by the adsorption of biomolecules at the growing surfaces to give particles with the appearance of very small single-celled organisms. These natural nanoparticles may or may not have relevance to some human diseases. This conclusion does leave a more general question in my mind, though. It’s clear that the presence of nucleic acids is a powerful way of detecting hitherto unknown microorganisms, and the absence of nucleic acids here is powerful evidence that these nanoparticles are not in fact bacteria. But it’s possible to imagine a system that is alive, at least by some definitions, yet has a system of replication that does not depend on DNA at all. Graham Cairns-Smith’s book Seven Clues to the Origin of Life offers some thought-provoking possibilities for systems of this kind as precursors to life on earth, and exobiologists have contemplated the possibility of non-DNA based life on other planets. If some kind of primitive life without DNA, perhaps based on an organic/inorganic hybrid system akin to Cairns-Smith’s proposal, did exist on earth today, we would be quite hard-pressed to detect it. I make no claim that these “nanobacteria” represent such a system, but the long controversy over their true nature does make it clear that deciding whether a system is living or abiotic, in the absence of evidence from nucleic acids, could be quite difficult.

How to think about science studies

I’ve been passing my driving time recently listening to the podcasts of an excellent series from the Canadian Broadcasting Corporation, called How to think about science. It’s simply a series of long interviews with academics, generally from the field of science studies. I’ve particularly enjoyed the interviews with historian of science Simon Schaffer, sociologists Ulrich Beck and Brian Wynne, science studies guru Bruno Latour, and Evelyn Fox Keller, who has written some interesting books about the tacit philosophies underlying modern biology. With one or two exceptions, even the interviews with people I find less convincing still provided me with a few thought-provoking insights.

That strange academic interlude, the “science wars”, gets the occasional mention – this was the time when claims from science studies about the importance of social factors in the construction of scientific knowledge provoked a fierce counter-attack from people anxious to defend science against what they saw as an attack on its claims to objective truth. My perception is that the science wars ended in an armistice, though there are undoubtedly some people still holding out in the jungle, unaware that the war is over. Although the series is clearly presented from the science studies side of the argument, most contributors reflect the terms of the peace treaty, accepting the claims of science to be a way of generating perhaps uniquely reliable knowledge, while still insisting on the importance of the social in the way that knowledge is constructed, and criticising inappropriate ways of using scientific or pseudo-scientific arguments, models and metaphors in public discourse.

USA lagging Europe in nanotechnology risk research

How much resource is being devoted to assessing the potential risks of the nanotechnologies that are currently at or close to market? Not nearly enough, say campaigning groups, while governments, on the other hand, release impressive-sounding figures for their research spend. Most recently, the USA’s National Nanotechnology Initiative has estimated its 2006 spend on nano-safety research at $68 million, which sounds substantial. However, according to Andrew Maynard, a leading nano-risk researcher based at the Woodrow Wilson Center in Washington DC, we shouldn’t take this figure at face value.

Maynard comments on the figure on the SafeNano blog, referring to an analysis he recently carried out, described in a news release from the Woodrow Wilson Center’s Project on Emerging Nanotechnologies. It seems that this figure is obtained by adding up all sorts of basic nanotechnology research, some of which might have only tangential relevance to problems of risk. If one applies a tighter definition of research that is either highly relevant to nanotechnology risk – such as a direct toxicology study – or substantially relevant – such as a study of the fate in the body of medical nanoparticles – the numbers fall considerably. Only $13 million of the $68 million was highly relevant to nanotechnology risk, with this number increasing to $29 million if the substantially relevant category is included too. This compares unfavourably with European spending, which amounts to $24 million in the highly relevant category alone.

Of course, it isn’t the headline figure that matters; what’s important is whether the research is relevant to the actual and potential risks that are out there. The Project on Emerging Nanotechnologies has done a great service by compiling an international inventory of nanotechnology risk research which allows one to see clearly just what sort of risk research is being funded across the world. It’s clear from this that suggestions that nanotechnology is being commercialised with no risk research at all being done are wide of the mark; what requires further analysis is whether all the right research is being done.

Molecular scale electronics from graphene

The remarkable electronic properties of graphene – single, one-atom thick, sheets of graphite – are highlighted in a paper in this week’s Science magazine, which demonstrates field-effect transistors exploiting quantum dots as small as 10 nm carved out of graphene. The paper is by Manchester University’s Andre Geim, the original discoverer of graphene, together with Kostya Novoselov and other coworkers (only the abstract is available without subscription from the Science website, but the full paper is available from Geim’s website (PDF)).

A quantum dot is simply a nanoscale speck of a conducting or semiconducting material, small enough that its electrons, behaving as quantum particles, are strongly affected by the way in which they are confined. What makes graphene different and interesting is the unusual behaviour its electrons show to start with – as explained in this earlier post, electrons in graphene behave as if they were mass-less, ultra-relativistic particles. For relatively large quantum dots (greater than 100 nm), the behaviour is similar to that of other quantum dot devices; the device behaves like a so-called single electron transistor, in which the conductance shows distinct peaks as the voltage is varied, reflecting the fact that current is carried in whole numbers of electrons, a phenomenon called Coulomb blockade. It’s at sizes less than 100 nm that the behaviour becomes really interesting – on these size scales quantum confinement is becoming important, but rather than the ordered series of permitted energy states one would expect for normal electrons, the authors see behaviour characteristic of quantum chaos. Pushing the size down even further, the techniques being used give less control over the precise shape of the quantum dots that are made, and their behaviour becomes less predictable and less reproducible. Nonetheless, even down to sizes of a few nanometers, the authors see the clean switching behaviour that could make these useful electronic devices.
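To get a rough feel for the energy scales involved, here is a back-of-envelope comparison of my own – it is not taken from the paper, and it assumes the commonly quoted graphene Fermi velocity of about 10^6 m/s:

```latex
\[
\Delta E_{\text{conventional}} \sim \frac{\hbar^{2}}{m\,d^{2}},
\qquad
\Delta E_{\text{graphene}} \sim \frac{\hbar v_{F}}{d},
\qquad v_{F} \approx 10^{6}\ \text{m s}^{-1}.
\]
\[
d = 10\ \text{nm}: \quad
\Delta E_{\text{graphene}} \approx
\frac{(1.05\times 10^{-34}\ \text{J s})(10^{6}\ \text{m s}^{-1})}{10^{-8}\ \text{m}}
\approx 1.1\times 10^{-20}\ \text{J} \approx 0.07\ \text{eV}.
\]
```

That is a few times the thermal energy at room temperature (about 0.025 eV), which suggests, on this crude estimate, why dots this small might show well-defined behaviour without extreme cooling.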

For more context, see this Commentary in Science (subscription required), and this BBC news story.

Graphene-based quantum dots (A. Geim, Manchester University)
Left: Scanning electron micrograph of a single-electron transistor based on a graphene quantum dot. Right: Schematic of a hypothetical transistor based on a very small graphene quantum dot. A.K. Geim, University of Manchester, from Science 320 p324 (2008)

Watching an assembler at work

The only software-controlled molecular assembler we know about is the ribosome – the biological machine that reads the sequence of bases on a strand of messenger RNA, and, converting this genetic code into a sequence of amino acids, synthesises the protein molecule that corresponds to the gene whose information was transferred by the RNA. An article in this week’s Nature (abstract, subscription required for full paper, see also this editor’s summary) describes a remarkable experimental study of the way the RNA molecule is pulled through the ribosome as each step of its code is read and executed. This experimental tour-de-force of single molecule biophysics, whose first author is Jin-Der Wen, comes from the groups of Ignacio Tinoco and Carlos Bustamante at Berkeley.

The experiment starts by tethering a strand of RNA between two micron-size polystyrene beads. One bead is held firm on a micropipette, while the other bead is held in an optical trap – the point at which a highly focused laser beam has its maximum intensity. The central part of the RNA molecule is twisted into a single hairpin, and the ribosome binds to the RNA just to one side of this hairpin. As the ribosome reads the RNA molecule, it pulls the hairpin apart, and the resulting lengthening of the RNA strand is directly measured from the change in position of the anchoring bead in its optical trap. What’s seen is a series of steps – the ribosome moves about 2.7 nm in about a tenth of a second, then pauses for a couple of seconds before making another step.

This distance corresponds exactly to the size of the triplet of bases that represents a single character of the genetic code – the codon. What we are seeing, then, is the ribosome pausing on a codon to read it, before pulling the tape through to read the next character. What we don’t see in this experiment, though we know it’s happening, is the addition of a single amino acid to the growing protein chain during this read step. This takes place by means of the binding to the RNA codon, within the ribosome, of a shorter strand of RNA – the transfer RNA – to which the amino acid is attached. What the experiment does make clear is that the operation of this machine is by no means mechanical and regular. The times taken for the ribosome to move from the reading position for one codon to the next – the translocation times – are fairly tightly distributed around an average value of about 0.08 seconds, but the dwell times on each codon vary from a fraction of a second up to a few seconds. Occasionally the ribosome stops entirely for a few minutes.
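To make this stepping picture concrete, here is a minimal sketch of my own – an illustrative toy model, not the authors’ analysis – that generates a synthetic extension-versus-time trace from the rough numbers quoted above: a 2.7 nm step, a translocation time of about 0.08 seconds, and dwell times drawn from an assumed exponential distribution with a mean of a couple of seconds.

```python
import numpy as np

# Toy model of the stepwise ribosome motion described above.
# Illustrative only: the step size and timescales are the rough figures
# quoted in the post, and the exponential dwell-time distribution is an
# assumption, not something taken from the paper.
STEP_NM = 2.7           # extension gained per codon read
TRANSLOCATION_S = 0.08  # typical time to move from one codon to the next
MEAN_DWELL_S = 2.0      # typical pause on a codon before the next step

def simulate_trace(n_codons=20, seed=0):
    """Return (time, extension) arrays for a synthetic stepping trace."""
    rng = np.random.default_rng(seed)
    times, extensions = [0.0], [0.0]
    t, x = 0.0, 0.0
    for _ in range(n_codons):
        t += rng.exponential(MEAN_DWELL_S)   # pause while the codon is read
        times.append(t); extensions.append(x)
        t += TRANSLOCATION_S                 # then translocate,
        x += STEP_NM                         # gaining one codon's worth of extension
        times.append(t); extensions.append(x)
    return np.array(times), np.array(extensions)

t, x = simulate_trace()
print(f"{round(x[-1] / STEP_NM)} codons read in {t[-1]:.1f} s")
```

A real trace also shows the occasional much longer stall, which this sketch leaves out.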

This experiment is far from the final word on the way ribosomes operate. I can imagine, for example, that people are going to be making strenuous efforts to attach a probe directly to the ribosome, rather than, as was done here, inferring its motion from the location of the end of the RNA strand. But it’s fascinating to have such a direct probe of one of the most central operations of biology. And for those attempting the very ambitious task of creating a synthetic analogue of a ribosome, these insights will be invaluable.

Leading nanotechnologist gets top UK defence science job

It was announced yesterday that the new Chief Scientific Advisor to the UK’s Ministry of Defence is to be Professor Mark Welland. Mark Welland is currently Professor of Nanotechnology and the head of Cambridge University’s Nanoscience Centre. He is one of the pioneers of nanotechnology in the UK; he was, I believe, the first person in the country to build a scanning probe microscope. Most recently he has been in the news for his work with the mobile phone company Nokia, who recently unveiled their Morph concept phone at Design and the Elastic Mind, an exhibition at New York’s Museum of Modern Art.

How can nanotechnology help solve the world’s water problems?

The lack of clean water for a large fraction of the world’s population currently leads to suffering and premature death for millions of people, and as population pressures increase, climate change starts to bite, and food supplies become tighter (perhaps exacerbated by an ill-considered move to biofuels), these problems will only intensify. It’s possible that nanotechnology may be able to contribute to solving these problems (see this earlier post, for example). A couple of weeks ago, Nature magazine ran a special issue on water, which included a very helpful review article: Science and technology for water purification in the coming decades. This article (which seems to be available without subscription) is all the more helpful for not focusing specifically on nanotechnology, instead making it clear where nanotechnology could fit into other existing technologies to create affordable and workable solutions.

One sometimes hears the criticism that there’s no point worrying about the promise of new nanotechnological solutions when workable solutions are already known but aren’t being implemented, for political or economic reasons. That’s an argument that’s not without force, but the authors do begin to address it by outlining what’s wrong with existing technical solutions: “These treatment methods are often chemically, energetically and operationally intensive, focused on large systems, and thus require considerable infusion of capital, engineering expertise and infrastructure.” Thus we should be looking for decentralised solutions that can be easily, reliably and cheaply installed using local expertise, preferably without the need for large-scale industrial infrastructure.

To start with the problem of sterilising water to kill pathogens: the traditional approach begins with chlorine. This isn’t ideal, as some pathogens are remarkably tolerant of it, and it can lead to toxic by-products. Ultra-violet sterilisation, on the other hand, offers a lot of promise – it’s effective against bacteria, though less so against viruses. But in combination with photocatalytic surfaces of titanium dioxide nanoparticles it could be very effective. What is required here is either much cheaper sources of ultraviolet light (which could come from new nanostructured semiconductor light-emitting diodes) or new types of nanoparticles whose surfaces can be excited by longer wavelength, lower energy light, including sunlight.
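A rough way to see the wavelength constraint – a back-of-envelope estimate of my own, using the commonly quoted band gap of about 3.2 eV for anatase titanium dioxide – is to work out the longest wavelength that can excite the photocatalyst:

```latex
\[
\lambda_{\max} = \frac{hc}{E_g} \approx \frac{1240\ \text{eV nm}}{3.2\ \text{eV}} \approx 390\ \text{nm}.
\]
```

Only the ultraviolet tail of sunlight lies below this wavelength, so titanium dioxide on its own makes poor use of the solar spectrum; a nanoparticle with a smaller effective band gap of, say, 2.5 eV would absorb out to roughly 500 nm and could harvest much more of it.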

Another problem is the removal of contamination by toxic chemicals, which can arise either naturally or through pollution. Problem contaminants include heavy metals, arsenic, pesticide residues, and endocrine disrupters; the difficulty is that these can have dangerous effects even at rather low concentrations, which can’t be detected without expensive laboratory-based analysis equipment. Here methods for robust, low cost chemical sensing would be very useful – perhaps a combination of molecular recognition elements integrated in nanofluidic devices could do the job.

The reuse of waste water poses hard problems because of the high content of organic matter that needs to be removed, in addition to the other contaminants. Membrane bioreactors combine the use of the sorts of microbes exploited in the activated sludge processes of conventional sewage treatment with ultrafiltration through a membrane, to get faster throughputs of waste water. The tighter the pores in this sort of membrane, the more effective it is at removing suspended material, but the problem is that such a membrane quickly gets blocked up. One solution is to line the micro- and nano-pores of the membranes with a single layer of hairy molecules – one of the paper’s co-authors, MIT’s Anne Mayes, developed a particularly elegant scheme for doing this, exploiting the self-assembly of comb-shaped copolymers.

Of course, most of the water in the world is salty (97.5%, to be precise), so the ultimate solution to water shortages is desalination. Desalination costs energy – necessarily so, as the second law of thermodynamics puts a lower limit on the cost of separating pure water from the higher entropy solution state. This theoretical limit is 0.7 kWh per cubic meter, and to date the most efficient practical process uses a not at all unreasonable 4 kWh per cubic meter. Achieving these figures, and pushing them down further, is a matter of membrane engineering, achieving precisely nanostructured pores that resist fouling and yet are mechanically and chemically robust.
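The thermodynamic limit quoted above can be recovered, roughly, from the osmotic pressure of seawater – a back-of-envelope estimate of my own, treating seawater as about 0.6 M sodium chloride and using the van ’t Hoff relation:

```latex
\[
\Pi \approx i\,c\,R\,T \approx 2 \times (600\ \text{mol m}^{-3}) \times (8.31\ \text{J mol}^{-1}\,\text{K}^{-1}) \times (298\ \text{K}) \approx 3\ \text{MPa}.
\]
```

Pushing one cubic meter of pure water out of seawater against this pressure, in the limit of vanishingly small recovery, then costs about 3 × 10^6 J, or roughly 0.8 kWh – the same order as the figure quoted above; recovering a substantial fraction of the feed water, as real plants must, pushes the minimum somewhat higher.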

A methanol economy?

Transport accounts for between a quarter and a third of primary energy use in developed economies, and currently this comes almost entirely from liquid hydrocarbon fuels. Anticipating a world with much more expensive oil and a need to dramatically reduce carbon dioxide emissions, many people have been promoting the idea of a hydrogen economy, in which hydrogen, generated in ways that minimise CO2 emissions, is used as a carrier of energy for transportation purposes. Despite its superficial attractiveness and high-profile political support, the hydrogen economy has many barriers to overcome before it becomes technically and economically feasible. Perhaps the most pressing of these difficulties is the question of how this light, low-energy-density gas can be stored and transported. An entirely new pipeline infrastructure would be needed to move the hydrogen from the factories where it is made to filling stations, and, perhaps even more pressingly, new technologies for storing hydrogen in vehicles would need to be developed. Early hopes that nanotechnology would provide new and cost-effective solutions to these problems – for example, using carbon nanotubes to store hydrogen – don’t seem to be bearing fruit so far. Since using a gas as an energy carrier causes such problems, why don’t we stick with a flammable liquid? One very attractive candidate is methanol, whose benefits have been enthusiastically promoted by George Olah, a Nobel prize-winning chemist from the University of Southern California; his book Beyond Oil and Gas: The Methanol Economy describes these ideas in some technical detail.

The advantage of methanol as a fuel is that it is entirely compatible with the existing infrastructure for distributing and using gasoline; pipes, pumps and tanks would simply need some gaskets changed to switch over to the new fuel. Methanol is an excellent fuel for internal combustion engines; even the most hardened petrol-head should be convinced by the performance figures of a recently launched methanol-powered Lotus Exige. However, in the future, greater fuel efficiency might be possible using direct methanol fuel cells, if that technology can be improved.

Currently methanol is made from natural gas, but in principle it should be possible to make it economically by reacting carbon dioxide with hydrogen. Given a clean source of energy to make hydrogen (Olah is an evangelist for nuclear power, but if the scaling problems for solar energy were solved that would work too), one could recycle the carbon dioxide from fossil fuel power stations, in effect getting one more pass of energy out of it before releasing it into the atmosphere. Ultimately, it should be possible to extract carbon dioxide directly from the atmosphere, achieving in this way an almost completely carbon-neutral energy cycle. In addition to its use as a transportation fuel, methanol can also serve as a feedstock for the petrochemical industry. In this way we could, in effect, convert atmospheric carbon dioxide into plastic.
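In its simplest form, the chemistry involved is the catalytic hydrogenation of carbon dioxide:

```latex
\[
\mathrm{CO_2} + 3\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_3OH} + \mathrm{H_2O}.
\]
```

On this stoichiometry, each tonne of methanol ties up about 1.4 tonnes of carbon dioxide (the ratio of the molar masses, 44 to 32) – at least until the fuel is burned.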

To Canada

I’m off to Canada on Sunday, for a brief canter round Ontario. On Monday I’m in the MaRS centre in Toronto, where I’m speaking about nanotechnology in the UK as part of a meeting aimed at promoting UK-Canada collaboration in nanotechnology. On Tuesday I’m going to the University of Guelph, where I’m giving the Winegard lecture in Soft Matter Physics. On Wednesday and Thursday I’ll be at the University of Waterloo, visiting Jamie Forrest, and McMaster University, to congratulate Kari Dalnoki-Veress on winning the American Physical Society’s Dillon Medal. My thanks to Guelph’s John Dutcher for inviting me.

The right size for nanomedicine

One reason nanotechnology and medicine potentially make a good marriage is that the size of nano-objects is very much on the same length scale as the basic operations of cell biology; nanomedicine, therefore, has the potential to make direct interventions on living systems at the sub-cellular level. A paper in the current issue of Nature Nanotechnology (abstract, subscription required for full article) gives a very specific example, showing that the size of a drug-nanoparticle assembly directly affects how well the drug works in controlling cell growth and death in tumour cells.

In this work, the authors bound a drug molecule to a nanoparticle, and looked at the way the size of the nanoparticle affected the interaction of the drug with receptors on the surface of target cells. The drug was Herceptin, a protein molecule which binds to a receptor molecule called ErbB2 on the surface of cells from human breast cancer. Cancerous cells have too many of these receptors, and this disrupts the signalling between cells that tells a cell whether to grow, or marks it for apoptosis – programmed cell death. What the authors found was that Herceptin attached to gold nanoparticles was more effective than free Herceptin at binding to the receptors; this then led to reduced growth rates for the treated tumour cells. But how well the effect works depends strongly on how big the nanoparticles are – the best results are found for nanoparticles 40 or 50 nm in size, with 100 nm nanoparticles being barely more effective than the free drug.

What the authors think is going on is connected to the process of endocytosis, by which nanoscale particles can be engulfed by the cell membrane. Very small nanoparticles typically have only one Herceptin molecule attached, so they behave much like the free drug – one nanoparticle binds to one receptor. 50 nm nanoparticles have a number of Herceptin molecules attached, so a single nanoparticle links together a number of receptors, and the entire complex, nanoparticle and receptors, is engulfed by the cell and taken out of the cell signalling process completely. 100 nm nanoparticles are too big to be engulfed, so only the fraction of the attached drug molecules in contact with the membrane can bind to receptors. A commentary (subscription required) by Mauro Ferrari sets this achievement in context, pointing out that a nanodrug needs to do four things: successfully navigate through the bloodstream, negotiate any biological barriers that prevent it from getting where it needs to go, locate the cell that is its target, and then modify the pathological cellular processes that underlie the disease being treated. We already know that nanoparticle size is hugely important for the first three of these requirements, but this work directly connects size to the sub-cellular processes that are the target of nanomedicine.
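To see why size matters so much for the number of drug molecules a single particle can present to the cell, here is a crude geometric estimate of my own – the antibody footprint is a guessed, order-of-magnitude figure, not a number taken from the paper:

```python
import math

# Rough estimate of how many antibody molecules decorate a gold nanoparticle,
# assuming (hypothetically) that each antibody occupies a footprint of ~100 nm^2
# on the particle surface.
ANTIBODY_FOOTPRINT_NM2 = 100.0  # assumed, order-of-magnitude only

def antibodies_per_particle(diameter_nm):
    """Crude ligand count: sphere surface area divided by one antibody footprint."""
    surface_area_nm2 = math.pi * diameter_nm ** 2
    return max(1, round(surface_area_nm2 / ANTIBODY_FOOTPRINT_NM2))

for d in (5, 50, 100):
    print(f"{d:>3} nm particle: ~{antibodies_per_particle(d)} antibodies")

# On these (assumed) numbers, a few-nm particle carries roughly one antibody,
# so it behaves much like the free drug; a 50 nm particle carries tens, enough
# to cross-link many receptors before the whole complex is engulfed; a 100 nm
# particle carries even more, but is too large to be taken up by endocytosis.
```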