Can carbon capture and storage work?

Across the world, governments are placing high hopes on carbon capture and storage as the technology that will allow us to go on meeting a large proportion of the world’s growing energy needs from high-carbon fossil fuels like coal. The basic technology is straightforward enough. In the simplest variant, one burns the coal as normal, takes the flue gases through a process to separate out the carbon dioxide, and then pipes the carbon dioxide off and shuts it away in a geological reservoir, for example an exhausted natural gas field. There are two alternatives to this simplest scheme. One can separate the oxygen from the nitrogen in the air and burn the fuel in pure oxygen, producing nearly pure carbon dioxide for immediate disposal. Or, in a process reminiscent of that used a century ago to make town gas, one can gasify coal to produce a mixture of carbon dioxide and hydrogen, remove the carbon dioxide from the mixture and burn the hydrogen.

Although all of this sounds simple in outline, a rather sceptical article in last week’s Economist, Trouble in Store, points out some difficulties. The embarrassing fact is that, for all the enthusiasm from politicians, no energy utility in the world has yet built a large power plant using carbon capture and storage. The problem is purely one of cost: the extra capital cost of the plant is high, and significant amounts of energy need to be diverted to run the necessary separation processes. This puts a high (and uncertain) price on each tonne of carbon not emitted.
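To see what that price means in practice, it helps to look at how it is usually expressed. The standard metric, used in the IPCC special report mentioned below, is the cost per tonne of CO2 avoided, which compares a capture plant with an equivalent plant without capture. Here is a minimal sketch of that calculation; the plant numbers are hypothetical placeholders of my own choosing, not figures from the Economist article:

```python
# Standard "cost of CO2 avoided" calculation; all plant numbers below
# are hypothetical placeholders, not data from any real plant.

def cost_of_co2_avoided(coe_ref, coe_capture, em_ref, em_capture):
    """Cost per tonne of CO2 avoided, in $/tCO2.

    coe_ref, coe_capture: cost of electricity without/with capture ($/MWh)
    em_ref, em_capture:   emissions without/with capture (tCO2/MWh)
    """
    return (coe_capture - coe_ref) / (em_ref - em_capture)

# Hypothetical coal plant: capture adds ~40% to the cost of electricity;
# emissions per MWh delivered fall by ~80%, somewhat less than the
# capture rate because the energy penalty means more coal is burned.
cost = cost_of_co2_avoided(coe_ref=50.0, coe_capture=70.0,   # $/MWh
                           em_ref=0.80, em_capture=0.15)     # tCO2/MWh
print(f"Cost of CO2 avoided: ${cost:.0f} per tonne")         # -> $31
```

Note that the denominator uses emissions per unit of electricity delivered, so the energy penalty of the separation process automatically inflates the final price per tonne.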

Can technology bring this cost down? This question was considered in a talk last week by Professor Mercedes Maroto-Valer from the University of Nottingham’s Centre for Innovation in Carbon Capture and Storage. The occasion for the talk was a meeting held last Friday to discuss environmentally beneficial applications of nanotechnology; this formed part of the consultation process for the third Grand Challenge in nanotechnology to be funded by the UK’s research council. A good primer on the basics can be found in the IPCC Special Report on Carbon Dioxide Capture and Storage. At the heart of any carbon capture method is always a gas separation process. This might be helped by better nanotechnology-enabled membranes, or by nanoporous materials (like molecular sieves) that can selectively adsorb and release carbon dioxide. Such materials would need to be cheap and capable of surviving many regeneration cycles.
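To get a feel for how much headroom better separation materials might offer, here is a back-of-envelope calculation of my own (not from the talk): the ideal thermodynamic minimum work needed to pull CO2 out of flue gas, treating the flue gas as an ideal binary mixture at a typical stack temperature.

```python
import math

R = 8.314        # gas constant, J/(mol K)
M_CO2 = 0.04401  # molar mass of CO2, kg/mol

def min_work_per_mol_co2(x, T=313.0):
    """Ideal minimum work to fully separate an ideal binary gas mixture
    with CO2 mole fraction x, per mole of CO2 recovered (J/mol)."""
    w_per_mol_mixture = -R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return w_per_mol_mixture / x

w = min_work_per_mol_co2(0.12)          # ~12% CO2, typical for coal flue gas
gj_per_tonne = w / M_CO2 / 1e6          # J/mol -> GJ per tonne of CO2

print(f"Thermodynamic minimum: {w / 1000:.1f} kJ/mol CO2, "
      f"or {gj_per_tonne:.2f} GJ per tonne")
# -> about 8 kJ/mol, roughly 0.18 GJ per tonne of CO2
```

Regenerating the amine solvents in today’s scrubbing processes is commonly quoted at 3–4 GJ per tonne of CO2, more than an order of magnitude above this thermodynamic floor, and it is exactly that gap which better membranes and sorbents aim to close.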

This kind of technology might help bring the cost of carbon capture and storage down from its current rather frightening levels. I can’t help feeling, though, that carbon capture and storage will remain a rather unsatisfactory technology for as long as its costs are a pure overhead; finding something useful to do with the carbon dioxide would therefore be a hugely important step. This is another reason why I think the “methanol economy” deserves serious attention. The idea here is to use methanol as an energy carrier, for example as a transport fuel compatible with existing fuel distribution infrastructures and with the huge installed base of internal combustion engines. A long-term goal would be to remove carbon dioxide from the atmosphere and use solar energy to convert it into methanol, for use as a completely carbon-neutral transport fuel and as a feedstock for the petrochemical industry. The major research challenge here is to develop scalable systems for the photocatalytic reduction of carbon dioxide, or alternatively to do this in a biologically based system. Intermediate steps towards a methanol economy might use renewably generated electricity to drive the synthesis of methanol from water and from carbon dioxide captured at coal-fired power stations, extracting “one more pass” of energy from the carbon before it is released into the atmosphere. Alternatively, process heat from a new-generation nuclear power station could be used to generate hydrogen for the synthesis of methanol from carbon dioxide captured at a neighbouring fossil fuel plant.
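To put rough numbers on that “one more pass”, here is a back-of-envelope mass and energy balance for the hydrogenation route, CO2 + 3H2 → CH3OH + H2O. The electrolysis and heating-value figures are typical round numbers of my own choosing, not a process design:

```python
# Mass and energy balance for CO2-to-methanol via CO2 + 3 H2 -> CH3OH + H2O.
# The electrolysis and heating-value figures are typical round numbers,
# not measurements from any real plant.

M_CO2, M_H2, M_MEOH = 44.01, 2.016, 32.04    # molar masses, g/mol

co2_per_t_meoh = M_CO2 / M_MEOH              # tonnes CO2 per tonne methanol
h2_per_t_meoh = 3 * M_H2 / M_MEOH            # tonnes H2 per tonne methanol

ELECTROLYSIS_KWH_PER_KG_H2 = 50.0            # typical practical electrolyser
MEOH_LHV_MWH_PER_TONNE = 5.5                 # lower heating value of methanol

electricity_mwh = h2_per_t_meoh * 1000 * ELECTROLYSIS_KWH_PER_KG_H2 / 1000

print(f"CO2 consumed:  {co2_per_t_meoh:.2f} t per t methanol")
print(f"H2 required:   {h2_per_t_meoh * 1000:.0f} kg per t methanol")
print(f"Electricity:   {electricity_mwh:.1f} MWh in, "
      f"{MEOH_LHV_MWH_PER_TONNE:.1f} MWh out (as fuel)")
# -> 1.37 t CO2, ~189 kg H2, ~9.4 MWh in for ~5.5 MWh of fuel energy
```

On these illustrative figures, each tonne of methanol locks up about 1.4 tonnes of captured CO2 and returns a little over half of the input electricity as fuel energy, which is the sense in which the scheme extracts one more pass of useful energy from the carbon.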

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics: a modern microprocessor contains several hundred million transistors, every one of which needs to be manufactured to very tight tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that make up most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases fold into a unique three-dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in a 2003 article in Science magazine called Molecular Prodigality (PDF version available from Bray’s own website). Protein sequences can be chopped and changed after the DNA code has been read, by RNA editing and splicing and by various post-translational modifications, and these changes can produce distinct alterations in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing in up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities gives 4.5 × 10^15 subtly different possible types of potassium channel. This isn’t an isolated example; Bray estimates that up to half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
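Bray’s figure is easy to reproduce. If each of the 13 editing sites can independently be edited or not, a single subunit comes in 2^13 = 8192 variants; and since these channels are tetramers of four subunits, the number of distinct assemblies is (2^13)^4 = 2^52 ≈ 4.5 × 10^15. A short sketch, on the assumption (mine, chosen to recover his figure) of independent editing and ordered tetramer assembly:

```python
# Reproducing Bray's estimate of squid potassium-channel diversity.
# Assumptions (mine, chosen to recover his figure): each of the 13
# RNA-editing sites is independently edited or not, and a channel is
# an ordered tetramer of four independently chosen subunit variants.

EDIT_SITES = 13
SUBUNITS_PER_CHANNEL = 4

subunit_variants = 2 ** EDIT_SITES                           # 8192
channel_variants = subunit_variants ** SUBUNITS_PER_CHANNEL  # 2**52

print(f"Subunit variants: {subunit_variants}")
print(f"Channel variants: {channel_variants:.2e}")           # ~4.5e15
```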

It isn’t at all clear what all this variation is for, if indeed it is for anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true in the case of the adaptive immune system. A human has the ability to make around 10^12 different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens that we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.
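For a feel of where a number like 10^12 comes from, here is a rough combinatorial sketch using approximate textbook counts of human antibody gene segments; the exact counts vary between sources, and the junctional-diversity factor is a crude placeholder of mine rather than a measured quantity:

```python
# Rough combinatorial accounting for antibody diversity, using
# approximate textbook counts of human gene segments; exact numbers
# vary between sources, and the junctional factor is a placeholder.

V_H, D_H, J_H = 40, 23, 6      # heavy-chain V, D, J segments
V_K, J_K = 40, 5               # kappa light-chain V, J segments
V_L, J_L = 30, 4               # lambda light-chain V, J segments

heavy_chains = V_H * D_H * J_H                 # ~5,500 heavy chains
light_chains = V_K * J_K + V_L * J_L           # ~320 light chains
combinatorial = heavy_chains * light_chains    # ~1.8e6 pairings

JUNCTIONAL_FACTOR = 1e6   # crude allowance for junctional diversity
total = combinatorial * JUNCTIONAL_FACTOR

print(f"Gene-segment combinations: {combinatorial:.1e}")
print(f"With junctional diversity: {total:.1e}")   # ~10^12, as in the text
```

Gene-segment shuffling alone gives only a couple of million distinct antibodies; it is the imprecise joining of those segments that multiplies the repertoire up to the enormous library the text describes.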