A methanol economy?

Transport accounts for between a quarter and a third of primary energy use in developed economies, and currently this energy comes almost entirely from liquid hydrocarbon fuels. Anticipating a world with much more expensive oil and a need to dramatically reduce carbon dioxide emissions, many people have been promoting the idea of a hydrogen economy, in which hydrogen, generated in ways that minimise CO2 emissions, is used as a carrier of energy for transportation. Despite its superficial attractiveness and high-profile political support, the hydrogen economy has many barriers to overcome before it becomes technically and economically feasible. Perhaps the most pressing of these difficulties is the question of how this light, low energy density gas can be stored and transported. An entirely new pipeline infrastructure would be needed to move hydrogen from the factories where it is made to filling stations, and, just as urgently, new technologies for storing hydrogen in vehicles would need to be developed. Early hopes that nanotechnology would provide new and cost-effective solutions to these problems – for example, using carbon nanotubes to store hydrogen – don’t seem to be bearing fruit so far.

Since using a gas as an energy carrier causes such problems, why not stick with a flammable liquid? One very attractive candidate is methanol, whose benefits have been enthusiastically promoted by George Olah, a Nobel prize-winning chemist at the University of Southern California; his book Beyond Oil and Gas: The Methanol Economy describes these ideas in some technical detail.
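To put the storage problem in rough numbers, here is a back-of-envelope comparison of volumetric energy densities – a sketch using typical handbook figures (lower heating values), which should be treated as approximate rather than authoritative:

```python
# Back-of-envelope comparison of fuel energy densities.
# Energy contents (lower heating values) and densities are
# typical handbook figures; treat them as approximate.

fuels = {
    # name: (energy in MJ/kg, density in kg/L)
    "gasoline":           (43.0, 0.74),
    "methanol":           (19.9, 0.79),
    "hydrogen (700 bar)": (120.0, 0.040),  # only ~40 g/L even at 700 bar
}

for name, (mj_per_kg, kg_per_litre) in fuels.items():
    mj_per_litre = mj_per_kg * kg_per_litre
    print(f"{name:20s} {mj_per_kg:6.1f} MJ/kg   {mj_per_litre:5.1f} MJ/L")
```

On these figures hydrogen carries far more energy per kilogram than anything else, but even compressed to 700 bar it stores less than a third of methanol’s energy per litre – which is, in a nutshell, the storage problem. Methanol gives up some energy density relative to gasoline, but it will happily sit in an ordinary tank.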

The advantage of methanol as a fuel is that it is entirely compatible with the existing infrastructure for distributing and using gasoline; pipes, pumps and tanks would simply need some gaskets changed to switch over to the new fuel. Methanol is an excellent fuel for internal combustion engines; even the most hardened petrol-head should be convinced by the performance figures of a recently launched methanol-powered Lotus Exige. In the longer term, greater fuel efficiency might be possible using direct methanol fuel cells, if that technology can be improved.

Currently methanol is made from natural gas, but in principle it should be possible to make it economically by reacting carbon dioxide with hydrogen. Given a clean source of energy to make the hydrogen (Olah is an evangelist for nuclear power, but if the scaling problems for solar energy were solved that would work too), one could recycle the carbon dioxide from fossil fuel power stations, in effect getting one more pass of energy out of it before releasing it into the atmosphere. Ultimately, it should be possible to extract carbon dioxide directly from the atmosphere, achieving an almost completely carbon-neutral energy cycle. In addition to its use as a transportation fuel, methanol can also serve as a feedstock for the petrochemical industry. In this way we could, in effect, convert atmospheric carbon dioxide into plastic.
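For reference, the overall stoichiometry of the synthesis Olah proposes – catalytic hydrogenation of carbon dioxide – is:

\[ \mathrm{CO_2} + 3\,\mathrm{H_2} \longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \]

Three moles of hydrogen are consumed per mole of methanol produced, which is why the economics of the whole scheme hinge on having abundant, cheap, low-carbon hydrogen in the first place.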

To Canada

I’m off to Canada on Sunday, for a brief canter round Ontario. On Monday I’m at the MaRS Centre in Toronto, speaking about nanotechnology in the UK as part of a meeting aimed at promoting UK-Canada collaboration in nanotechnology. On Tuesday I’m going to the University of Guelph, where I’m giving the Winegard Lecture in Soft Matter Physics. On Wednesday and Thursday I’ll be at the University of Waterloo, visiting Jamie Forrest, and at McMaster University, to congratulate Kari Dalnoki-Veress on winning the American Physical Society’s Dillon Medal. My thanks to Guelph’s John Dutcher for inviting me.

The right size for nanomedicine

One reason nanotechnology and medicine potentially make a good marriage is that nano-objects are on the same length scale as the basic machinery of cell biology; nanomedicine therefore has the potential to make direct interventions on living systems at the sub-cellular level. A paper in the current issue of Nature Nanotechnology (abstract, subscription required for full article) gives a very specific example, showing that the size of a drug–nanoparticle assembly directly affects how effectively the drug controls cell growth and death in tumour cells.

In this work, the authors bound a drug molecule to a nanoparticle and looked at the way the size of the nanoparticle affected the interaction of the drug with receptors on the surface of target cells. The drug was herceptin, a protein molecule which binds to a receptor molecule called ErbB2 on the surface of human breast cancer cells. Cancerous cells have too many of these receptors, and this disrupts the signalling between cells that tells a cell whether to grow, or marks it for apoptosis – programmed cell death. What the authors found was that herceptin attached to gold nanoparticles was more effective than free herceptin at binding to the receptors; this in turn led to reduced growth rates in the treated tumour cells. But how well the effect works depends strongly on the size of the nanoparticles – the best results were found for nanoparticles 40 or 50 nm in diameter, with 100 nm nanoparticles being barely more effective than the free drug.

What the authors think is going on is connected to endocytosis, the process by which nanoscale particles can be engulfed by the cell membrane. Very small nanoparticles typically have only one herceptin molecule attached, so they behave much like the free drug – one nanoparticle binds to one receptor. A 50 nm nanoparticle carries a number of herceptin molecules, so a single particle links together several receptors, and the entire complex – nanoparticle and receptors – is engulfed by the cell and taken out of the cell signalling process completely. 100 nm nanoparticles are too big to be engulfed, so only the fraction of the attached drug molecules in contact with the membrane can bind to receptors (a rough version of this argument is worked through in the sketch below).

A commentary (subscription required) by Mauro Ferrari sets this achievement in context, pointing out that a nanodrug needs to do four things: navigate successfully through the bloodstream, negotiate any biological barriers that prevent it getting where it needs to go, locate the cell that is its target, and then modify the pathological cellular processes that underlie the disease being treated. We already know that nanoparticle size is hugely important for the first three of these requirements, but this work directly connects size to the sub-cellular processes that are the target of nanomedicine.
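A back-of-envelope calculation makes the multivalency part of the argument concrete. This is a minimal sketch, assuming – purely for illustration, not a measured value – that each bound herceptin molecule occupies about 100 nm² of particle surface:

```python
import math

# Crude estimate of how many herceptin molecules fit on a gold
# nanoparticle, assuming they tile the surface at a fixed footprint.
# The footprint below is an illustrative assumption, not a measured value.

FOOTPRINT_NM2 = 100.0  # assumed area per bound herceptin molecule, nm^2

def ligands_per_particle(diameter_nm: float) -> int:
    surface_area_nm2 = math.pi * diameter_nm ** 2  # sphere: A = pi * d^2
    return max(1, round(surface_area_nm2 / FOOTPRINT_NM2))

for d in (2, 10, 50, 100):
    print(f"{d:3d} nm particle: ~{ligands_per_particle(d)} herceptin molecules")
```

On these made-up numbers a 2 nm particle carries a single drug molecule, and so behaves like free drug; a 50 nm particle carries several dozen, enough to cross-link many receptors at once; and a 100 nm particle carries even more, but if it is too large to be endocytosed, most of them never meet a receptor.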

A culture of improvement

If one wants to comment on the future of technology, it’s a good idea to have some understanding of its history. A new book by Robert Friedel, A Culture of Improvement: Technology and the Western Millennium, takes on the ambitious task of telling the story of the development of technology in Europe and North America over the last thousand years.

The book is largely a very readable narrative history of technology, with some rather understated broader arguments. One theme is suggested by the title; in Friedel’s view the advance of technology has been driven, not so much by the spectacular advances of the great inventors, but by a mindset that continually seeks to make incremental improvements in existing technologies. The famous inventors, the James Watts and Alexander Graham Bells of history, certainly get due space, but there’s also an emphasis on placing the best-known inventions in the context of the less well known precursor technologies from which they sprang, and on the way engineers and workers continuously improved the technologies once they were introduced. Another theme is the way in which the culture of improvement was locked into place, as it were, by the institutions that promoted technical and scientific education, and the media that brought new scientific and technical ideas to a wide audience.

This provokes some revision of commonly held ideas about the relationship between science and engineering. In Friedel’s picture, the role of science has been less to provide fundamental discoveries that engineers can convert into practical devices, and more to provide the mental framework that permits the process of incremental improvement. Those who wish to de-emphasise the importance of science for innovation often point to the example of the development of the steam engine – “thermodynamics owes much more to the steam engine than the steam engine owes to thermodynamics”, as the saying goes. This is true as far as it goes – the academic subject of thermodynamics was founded on Sadi Carnot’s analysis of the steam engines that were already in widespread use, and which had been extensively developed without the benefit of much theoretical knowledge. But it neglects the degree to which an understanding of formal thermodynamics underlay the development of the more sophisticated types of engine that are still in use today. Rudolf Diesel’s efforts to develop the engine that bears his name, now so important, were based on an explicit project to use the thermodynamics he had learned from his professor, Carl von Linde (who also made huge contributions to the technology of refrigeration), to design the most efficient possible internal combustion engine.
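For reference, the bound Diesel was designing towards is Carnot’s: no heat engine operating between a hot source at temperature $T_h$ and a cold sink at $T_c$ can exceed the efficiency

\[ \eta_{\mathrm{max}} = 1 - \frac{T_c}{T_h} \]

Roughly speaking, efficiency is bought with high combustion temperatures, which is the rationale for the high compression that defines Diesel’s engine.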

Some aspects of the book are open to question. The focus on Europe, and the European offshoots in North America, is justified by the premise that there was something special in this culture that led to the “culture of improvement”; one could argue, though, that the period of unquestioned European technological advantage was a relatively short fraction of the millennium under study (it’s arguable, for example, that China’s medieval technological lead over Europe persisted well into the 18th century). And many will wonder whether technological advances always lead to “improvement”. A chapter on “the corruption of improvement” discusses the application of technology to weapons of mass destruction, but one feels that Friedel’s greatest revulsion is prompted by the outcome of the project to apply the culture of improvement to the human race itself. It’s useful to be reminded that the outcome of this earlier project for “human enhancement” was, particularly in the USA and Scandinavia, a programme of forced sterilisation of those deemed unfit to reproduce that persisted well into living memory. In Germany, of course, this “human enhancement” project moved beyond sterilisation to industrial-scale systematic murder of the disabled and those who were believed to be threats to “racial purity”.