Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed at both buyers and manufacturers. The attractions seem obvious – clean, emission-free transport, apparently resolving without effort the conflict between people’s desire for personal mobility and our need to move to a lower-carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike of the UK’s Royal Society of Chemistry, poses the problem in numbers.

The first question we have to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the much more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this a transmission loss in getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied from different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations this would make the situation substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction in emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
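
To see how that 31% figure arises, here is a back-of-the-envelope sketch of the chain of losses. The power station, petrol and diesel efficiencies are the ones quoted above; the transmission and battery round-trip figures are illustrative assumptions of my own, chosen simply so that the chain lands near Pike’s overall number.

```python
# Rough "well-to-wheels" efficiency comparison for the electric pathway.
# Generation, petrol and diesel figures are from the article; the transmission
# and battery round-trip efficiencies are assumed, illustrative values.

generation_efficiency = 0.42    # power station conversion ("a bit more than 40%")
transmission_efficiency = 0.93  # power station to plug (assumed)
battery_round_trip = 0.80       # charging/discharging losses (assumed)

electric_overall = generation_efficiency * transmission_efficiency * battery_round_trip

petrol_overall = 0.32  # quoted average for petrol engines
diesel_overall = 0.45  # quoted figure for diesel engines

print(f"Electric (power station to wheels): {electric_overall:.0%}")  # about 31%
print(f"Petrol engine:                      {petrol_overall:.0%}")
print(f"Diesel engine:                      {diesel_overall:.0%}")
```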

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D into renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but that it is necessary to begin the process of changing that. In the meantime, one should be pursuing low carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract, subscription required for full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley, at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one by changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the pore, he uses an enzyme to chop a single base off the end of the DNA; as each base passes through the pore, the characteristic current change is sensitive enough to identify its chemical identity. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
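
At its heart the read-out is a classification problem: each base blocks the ionic current through the pore by a characteristic amount, and the base is identified by which reference level a measurement falls closest to. Here is a minimal sketch of that idea – the current levels are invented placeholders, and the real discrimination reported in the paper is of course far more sophisticated.

```python
# Toy base-calling by nearest reference current level. The blockade currents
# below are invented placeholder values, not data from the paper.

REFERENCE_LEVELS_PA = {"A": 50.0, "C": 44.0, "G": 38.0, "T": 32.0}  # hypothetical

def call_base(measured_current_pa: float) -> str:
    """Return the base whose reference blockade level is nearest to the measurement."""
    return min(REFERENCE_LEVELS_PA,
               key=lambda base: abs(REFERENCE_LEVELS_PA[base] - measured_current_pa))

# A stream of blockade currents, one per cleaved base, becomes a read:
measurements = [49.2, 37.5, 37.9, 31.6, 44.8]
print("".join(call_base(m) for m in measurements))  # AGGTC
```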

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – itself an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract, subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to have taken a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realization that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as about processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery – by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule – mean that proteins can form the components of logic gates, which themselves can be linked together to form biochemical circuits. These information processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely how significant the information processing capacity of a single cell can be.
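
As a cartoon of Bray’s point, an allosteric enzyme can be thought of as computing a Boolean function of its regulators. The sketch below is just that – a cartoon with hypothetical “enzymes”, not a model of any real pathway.

```python
# An allosteric "AND-NOT" gate: the enzyme is active only when its activator
# is bound and its inhibitor is not. Wiring such gates together - the output
# of one enzyme acting as an input to the next - gives a biochemical circuit.

def enzyme_active(activator_bound: bool, inhibitor_bound: bool) -> bool:
    return activator_bound and not inhibitor_bound

# Hypothetical two-step cascade: an upstream enzyme switches on a downstream one.
upstream_on = enzyme_active(activator_bound=True, inhibitor_bound=False)
downstream_on = enzyme_active(activator_bound=upstream_on, inhibitor_bound=False)
print(upstream_on, downstream_on)  # True True
```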

The paper – Emergent decision-making in biological signal transduction networks (abstract, subscription required for full article in PNAS), comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is construct a large scale, realistic model of a cell signalling network in a generic eukaryotic cell. To do this, they’ve mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification they reduce the complexities of the biochemistry to simple Boolean logic – the node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing the interactions of that node with other proteins. For some more complicated cases, a single protein may be represented by more than one node, expressing the fact that there may be a number of different modified states.
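
To give a flavour of what such a model looks like, here is a toy three-node version in the same spirit – the nodes, their inputs and their truth tables are invented for illustration, not taken from the paper, which works with around 130 nodes mined from the signalling literature.

```python
# Tiny Boolean signalling network: each node is on/off and has a truth table
# mapping the states of its inputs to its next state. All nodes update
# synchronously. The network itself is invented for illustration.

NETWORK = {
    # node: (input nodes, truth table keyed by the tuple of input states)
    "Receptor": (["GrowthFactor"], {(False,): False, (True,): True}),
    "Kinase": (["Receptor", "Phosphatase"], {
        (False, False): False, (False, True): False,
        (True, False): True, (True, True): False,  # on only if receptor on AND phosphatase off
    }),
    "Phosphatase": (["Kinase"], {(False,): False, (True,): True}),  # negative feedback
}

def step(state, external_inputs):
    """Synchronously update every node from its truth table."""
    signals = {**state, **external_inputs}
    return {node: table[tuple(signals[i] for i in inputs)]
            for node, (inputs, table) in NETWORK.items()}

state = {"Receptor": False, "Kinase": False, "Phosphatase": False}
for t in range(6):
    state = step(state, {"GrowthFactor": True})
    print(t, state)
```

Even this cartoon shows why the truth-table representation is useful: the negative feedback loop gives the toy circuit dynamics (here a simple oscillation in the “Kinase” node) that belong to the network as a whole rather than to any single protein; the point of the full model is what such behaviour looks like at realistic scale.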

This model of the cell takes in information from the outside world; sensors at the cell membrane measure the external concentration of growth factors, extracellular matrix proteins, and calcium levels. This is the input to the cell’s information processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”