Judgement day for UK nanotechnology policy

There’s a certain amount of anxious anticipation in UK nanotechnology policy circles, as tomorrow sees the publication of the results of a high-level, independent review of the government’s response to the 2004 Royal Society report on nanotechnology – Nanoscience and nanotechnologies: opportunities and uncertainties.

The report was prepared by the Council for Science and Technology, the government’s highest level science advisory committee, which reports directly to the Prime Minister. I wrote earlier about the CST seminar held last autumn to gather evidence, and about the Royal Society’s surprisingly forthright submission to the inquiry. We shall see tomorrow how much of that criticism was taken on board by the CST, and how the Science Minister, Malcolm Wicks, responds to it.

Science Horizons

One of the problems with events that aim to gauge the views of the public on emerging issues like nanotechnology is that it isn’t always easy to provide information in the right format, or to account for the fact that much of the publicly available information may be contested and controversial in ways that are difficult to appreciate unless one is deeply immersed in the subject. It’s also very difficult for anybody – lay person or expert – to judge what impact any particular development in science or technology might actually have on everyday life. Science Horizons is a public engagement project that’s trying to deal with this problem. The project is funded by the UK government; its aim is to start a public discussion about the possible impacts of future technological changes by providing a series of stories about possible futures, focused on the everyday dilemmas that people may face.

The stories, which are available in interactive form on the Science Horizons website, focus on issues like human enhancement, privacy in a world of universal surveillance, and problems of energy supply. These, of course, will be very familiar to most readers of this blog. The scenarios are very simple, but they draw on the large amount of work that’s been done for the UK government recently by its new Horizon Scanning Centre, which reports to the Government’s Chief Scientist, Sir David King. This centre published its first outputs earlier this year: the Sigma Scan, concentrating on broader social, economic, environmental and political trends, and the Delta Scan, concentrating on likely developments in science and technology.

The idea is that the results of the public engagement work based on the Science Horizons material will inform the work of the Horizon Scanning Centre as it advises government about the policy implications of these developments.

Brain chips

There can be few more potent ideas in futurology and science fiction than that of the brain chip – a direct interface between the biological information processing systems of the brain and nervous system and the artificial information processing systems of microprocessors and silicon electronics. It’s an idea that underlies science fiction notions of “jacking in” to cyberspace, or uploading one’s brain, but it also provides hope to the severely disabled that lost functions and senses might be restored. It’s one of the central notions in the idea of human enhancement: perhaps through a brain chip one might increase one’s cognitive power in some way, or have direct access to massive banks of data. Because of the potency of the idea, even the crudest scientific developments tend to be reported in the most breathless terms. Stripping away some of the wishful thinking, what are the real prospects for this kind of technology?

The basic operations of the nervous system are pretty well understood, even if the complexities of higher-level information processing remain obscure, and the problem of consciousness is a truly deep mystery. The basic units of the nervous system are the highly specialised, excitable cells called neurons. Information is carried long distances by the propagation of pulses of voltage along long extensions of the cell called axons, and transferred between different neurons at junctions called synapses. Although the pulses carrying information are electrical in character, they are very different from the electrical signals carried in wires or through semiconductor devices. They arise from the fact that the contents of the cell are kept out of equilibrium with their surroundings by pumps which selectively transport charged ions across the cell membrane, resulting in a voltage across the membrane. This voltage can relax when channels in the membrane, which are triggered by changes in voltage, open up. The information-carrying impulse is actually a shock wave of reduced membrane potential, enabled by transport of ions through the membrane.
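To make the numbers behind the resting potential a little more concrete, here is a minimal, purely illustrative sketch in Python. It computes the Nernst equilibrium potentials for potassium and sodium – the two ions whose pumped concentration gradients set up the membrane voltage described above – using typical textbook ion concentrations that are assumptions of mine rather than figures from this post.

```python
# Illustrative only: equilibrium (Nernst) potentials for K+ and Na+,
# using typical textbook ion concentrations (assumed, not from this post).
import math

R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol
T = 310.0      # body temperature, K

def nernst_mV(z, conc_out_mM, conc_in_mM):
    """Nernst potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# valence, concentration outside the cell, concentration inside (mM)
ions = {"K+": (1, 5.0, 140.0), "Na+": (1, 145.0, 12.0)}

for name, (z, outside, inside) in ions.items():
    print(f"{name}: equilibrium potential ≈ {nernst_mV(z, outside, inside):.0f} mV")

# K+ comes out near -90 mV and Na+ near +65 mV; the resting membrane sits
# close to the K+ value because at rest it is mostly permeable to K+, and the
# opening of voltage-gated channels lets the potential swing towards the Na+
# value during an impulse.
```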

To find out what is going on inside a neuron, one needs to be able to measure the electrochemical potential across the membrane. Classically, this is done by inserting an electrochemical electrode into the interior of the nerve cell. The original work, carried out by Hodgkin, Huxley and others in the 1950s, used squid neurons, because they are particularly large and easy to handle. So, in principle one could get a readout of the state of a human brain by measuring the potential at a representative series of points in each of its neurons. The problem, of course, is that there are a phenomenal number of neurons to be studied – around 20 billion in a human brain. Current technology has managed to miniaturise electrodes and pack them in quite dense arrays, allowing the simultaneous study of many neurons. A recent paper (Custom-designed high-density conformal planar multielectrode arrays for brain slice electrophysiology, PDF) from Ted Berger’s group at the University of Southern California shows a good example of the state of the art – this has electrodes of 28 µm diameter, separated by 50 µm, in an array of 64 electrodes. These electrodes can both read the state of the neuron and stimulate it. This kind of electrode array forms the basis of brain interfaces that are close to clinical trials – for example the BrainGate product.
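To get a feel for the scale gap between such arrays and a whole brain, here is a rough back-of-envelope sketch; the 8×8 layout and the one-electrode-per-neuron assumption are mine, purely for illustration, and the latter is wildly optimistic.

```python
# Back-of-envelope comparison: a 64-electrode array at 50 micron pitch
# versus the ~20 billion neurons quoted above (illustrative assumptions only).
import math

n_electrodes = 64
pitch_um = 50.0
side_um = math.sqrt(n_electrodes) * pitch_um      # assuming an 8 x 8 grid
print(f"array footprint ≈ {side_um:.0f} µm × {side_um:.0f} µm")

neurons = 20e9
arrays_needed = neurons / n_electrodes            # one electrode per neuron
print(f"arrays needed to touch every neuron ≈ {arrays_needed:.1e}")
```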

In a rather different class from these direct, but invasive, probes of nervous system activity at the single-neuron level, there are some powerful but indirect measures of brain activity, such as functional magnetic resonance imaging and positron emission tomography. These don’t directly measure the electrical activity of neurons, either individually or in groups; instead they rely on the fact that thinking is hard work (literally) and locally raises the rate of metabolism. Functional MRI and PET allow one to localise nervous activity to within a few cubic millimeters, which is hugely revealing in terms of identifying which parts of the brain are involved in which kinds of mental activity, but this remains a long way from the goal of unpicking the brain’s activity at the level of individual neurons.

There is another approach that does probe activity at the single-neuron level, but doesn’t involve the invasive procedure of inserting an electrode into the nerve itself. This relies on the neuron-silicon transistors developed in particular by Peter Fromherz at the Max Planck Institute for Biochemistry. These really are nerve chips, in that there is a direct interface between neurons and silicon microelectronics of the sort that can be highly miniaturised and integrated. On the other hand, these methods are currently restricted to operating in two dimensions, and require careful control of the growing medium that seems to rule out, or at least present big problems for, in-vivo use.

The central ingredient of this approach is a field effect transistor which is gated by the excitation of a nerve cell in contact with it (i.e., the current passed between the source and drain contacts of the transistor depends strongly on the voltage state of the membrane in proximity to the insulating gate dielectric layer). This provides a read-out of the state of a neuron; input to the neurons can also be made by capacitors, which can be made on the same chip. The basic idea was established 10 years ago – see, for example, Two-Way Silicon-Neuron Interface by Electrical Induction. The strength of this approach is that it is entirely compatible with the powerful methods of miniaturisation and integration of CMOS planar electronics. In more recent work, an individual mammalian cell has been probed (“Signal Transmission from Individual Mammalian Nerve Cell to Field-Effect Transistor”, Small, 1 p 206 (2004), subscription required), and an integrated circuit with 16384 probes, capable of probing a neural network with a resolution of 7.8 µm, has been built (“Electrical imaging of neuronal activity by multi-transistor-array (MTA) recording at 7.8 µm resolution”, abstract, subscription required for full article).
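As a caricature of the read-out principle – and emphatically not a model of Fromherz’s actual devices – one can think of the neuron’s membrane potential as a small perturbation to the transistor’s gate voltage, which shows up as a change in drain current through the transconductance. The transconductance and coupling values below are invented for illustration.

```python
# Toy sketch of a neuron-gated FET read-out (invented parameter values).
import numpy as np

g_m = 1e-4        # assumed transconductance of the FET, A/V
coupling = 0.1    # assumed fraction of the membrane voltage reaching the gate

t = np.linspace(0.0, 5e-3, 500)                                   # 5 ms window
v_membrane = -0.07 + 0.1 * np.exp(-((t - 2e-3) / 3e-4) ** 2)      # crude 100 mV spike, in volts

# Linearised read-out: the drain-current change tracks the membrane-voltage change.
delta_i_drain = g_m * coupling * (v_membrane - v_membrane[0])
print(f"peak drain-current change ≈ {delta_i_drain.max() * 1e6:.1f} µA")
```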

Fromherz’s group have demonstrated two types of hybrid silicon/neuron circuits (see, for example, this review “Electrical Interfacing of Nerve Cells and Semiconductor Chips”, abstract, subscription required for full article). One circuit is a prototype for a neural prosthesis – an input from a neuron is read by the silicon electronics, which does some information processing and then outputs a signal to another neuron. Another, inverse, circuit is a prototype of a neural memory on a chip. Here there’s an input from silicon to a neuron, which is connected to another neuron by a synapse. This second neuron makes its output to silicon. This allows one to use the basic mechanism of neural memory – the fact that the strength of the connection at the synapse can be modified by the type of signals it has transmitted in the past – in conjunction with silicon electronics.

This is all very exciting, but Fromherz cautiously writes: “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.” Among the practical problems are the fact that it seems difficult to extend the method into in-vivo applications, it is restricted to two dimensions, and the spatial resolution is still quite large.

Pushing down to smaller sizes is, of course, the province of nanotechnology, and a couple of interesting recent papers suggest directions this might take in the future.

Charles Lieber at Harvard has taken the basic idea of the neuron-gated field effect transistor and executed it using FETs made from silicon nanowires. A paper published last year in Science – Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays (abstract, subscription needed for full article) – demonstrated that this method permits the excitation and detection of signals from a single neuron with a resolution of 20 nm. This is enough to follow the progress of a nerve impulse along an axon, giving a picture of what’s going on inside a living neuron with unprecedented resolution. But it’s still restricted to systems in two dimensions, and it only works when one has cultured the neurons one is studying.

Is there any prospect, then, of mapping out in a non-invasive way the activity of a living brain at the level of single neurons? This still looks a long way off. A paper from the group of Rodolfo Llinas at the NYU School of Medicine makes an ambitious proposal. The paper – Neuro-vascular central nervous recording/stimulating system: Using nanotechnology probes (Journal of Nanoparticle Research (2005) 7: 111–127, subscription only) – points out that if one could detect neural activity using probes within the capillaries that supply oxygen and nutrients to the brain’s neurons, one would be able to reach right into the brain with minimal disturbance. They have demonstrated the principle in-vitro using a 0.6 µm platinum electrode inserted into one of the capillaries supplying the neurons in the spinal cord. Their proposal is to further miniaturise the probe using 200 nm diameter polymer nanowires, and they further suggest making the probe steerable using electrically stimulated shape changes – “We are developing a steerable form of the conducting polymer nanowires. This would allow us to steer the nanowire-probe selectively into desired blood vessels, thus creating the first true steerable nano-endoscope.” Of course, even one steerable nano-endoscope is still a long way from sampling a significant fraction of the 25 km of capillaries that service the brain.

So, in some senses the brain chip is already with us. But there’s a continuum of complexity and sophistication in such devices, and we’re still a long way from the science fiction vision of brain downloading. In the sense of creating an interface between the brain and the world, that is clearly possible now and has in some form been realised. Hybrid structures which combine the information processing capabilities of silicon electronics and nerve cells cultured outside the body are very close. But a full, two-way integration of the brain and artificial information processing systems remains a long way off.

Keeping on keeping on

There are some interesting reflections on the recent Ideas Factory, Software Control of Matter, from the German journalist Niels Boeing, in a piece called Nano-Elvis vs Nano-Beatles. He draws attention to the irony that a research programme with such a Drexlerian feel had as its midwife someone like me, who has been such a vocal critic of Drexlerian ideas. The title comes from an analogy which I find very flattering, if not entirely convincing – roughly translated from the German, he says: “It’s intriguingly reminiscent of the history of pop music, which developed through a transatlantic exchange. The American Elvis began things, but it was the British Beatles who really got the epochal phenomenon rolling. The solo artist Drexler launched his vision on the world, but in practice the crucial developments could be made by a British big band of researchers. We have just one wish for the Brits – keep on rocking!” Would that it were so.

In other media, there’s an article by me in the launch issue of the new nanotechnology magazine from the UK’s Institute of Nanotechnology – NanoNow! (PDF, freely downloadable). My article has the strap-line “Only Skin Deep – Cosmetics companies are using nano-products to tart up their face creams and sun lotions. But are they safe? Richard A.L. Jones unmasks the truth.” I certainly wouldn’t claim to unmask the truth about controversial issues like the use of C60 in face-creams, but I hope I managed to shed a little light on a very murky and much discussed subject.

My column in Nature Nanotechnology this month is called “Can nanotechnology ever prove that it is green?” This is only available to subscribers. As Samuel Johnson wrote, “No man but a blockhead ever wrote, except for money.” I don’t think he would have approved of blogs.

Do naturally formed nanoparticles make ball lightning?

Ball lightning is an odd and obscure phenomenon; reports describe glowing globes the size of footballs, which float along at walking speed, sometimes entering buildings, and whose existence sometimes comes to an end with a small explosion. Observations are generally associated with thunderstorms. I’ve never seen ball lightning myself, though when I was a physics undergraduate at Cambridge in 1982 there was a famous sighting in the Cavendish Laboratory itself. This rather elusive phenomenon has generated a huge range of potential explanations, ranging from the exotic (anti-matter meteorites, tiny black holes) to the frankly occult. But there seems to be growing evidence that ball lightning may in fact be the manifestation of slowly combusting, loose aggregates of nanoparticles formed by the contact of lightning bolts with the ground.

The idea that ball lightning consists of very low density aggregates of finely divided material originates with a group of Russian scientists. A pair of scientists from New Zealand, Abrahamson and Dinnis, showed some fairly convincing electron micrographs of chains of nanoparticles produced by the contact of electrical discharges with the soil, as reported in this 2000 Nature paper (subscription required for full paper). Abrahamson’s theory is also described in this news report from 2002, while a whole special issue of the Royal Society’s journal Philosophical Transactions from that year puts the Abrahamson theory in context with the earlier Russian work and the observational record. The story is brought up to date with some very suggestive looking experimental results reported a couple of weeks ago in the journal Physical Review Letters, in a letter entitled Production of Ball-Lightning-Like Luminous Balls by Electrical Discharges in Silicon (subscription required for full article), by a group from the Universidade Federal de Pernambuco in Brazil. In their very simple experiment, an electric arc was made with a silicon wafer in ambient conditions. This produced luminous balls, 1–4 cm in diameter, which moved erratically along the ground, sometimes squeezing through gaps, and disappeared after 2–5 seconds, leaving no apparent trace. Their explanation is that the discharge created silicon nanoparticles which aggregated to form a very open, low density aggregate, and subsequently oxidised to produce the heat that made the balls glow.

The properties of nanoparticles which make this explanation at least plausible are fairly familiar. They have a very high surface area, and so are substantially more reactive than their parent bulk materials. They can aggregate into very loose, fractal structures whose effective density can be very low (not much greater, it seems in this case, than air itself). And they can be made by a variety of physical processes, some of which are to be found in nature.
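The surface-area point is easy to put numbers on. Here is a rough sketch for monodisperse silicon spheres; the particle sizes are illustrative choices of mine, not figures from the Brazilian experiment.

```python
# Specific surface area of silicon spheres versus particle diameter
# (illustrative sizes; assumes smooth, monodisperse spheres).
DENSITY_SI = 2330.0   # kg/m^3

def specific_surface_area_m2_per_g(diameter_m):
    """Surface area per gram for spheres of the given diameter: 6/(d*rho)."""
    return 6.0 / (diameter_m * DENSITY_SI) / 1000.0   # converted from m^2/kg to m^2/g

for d in (1e-3, 1e-6, 50e-9):   # 1 mm grain, 1 µm particle, 50 nm nanoparticle
    ssa = specific_surface_area_m2_per_g(d)
    print(f"{d * 1e9:>12.0f} nm diameter: {ssa:10.3f} m^2/g")

# Going from a millimetre-sized grain to a 50 nm nanoparticle multiplies the
# surface area per gram by about 20,000 - which is why finely divided material
# can smoulder steadily where the bulk solid would barely react.
```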

Al Gore’s global warming roadshow

Al Gore visited Sheffield University yesterday, so I joined the growing number of people round the world who have seen his famous PowerPoint presentation on global warming (to be accurate, he did it in Keynote, being a loyal Apple board member). As a presentation it was undoubtedly powerful, slick, sometimes moving, and often very funny. His comic timing has clearly got a lot better since he was a Presidential candidate, even though some of his jokes didn’t cross the Atlantic very effectively. However, it has to be said that they worked better than the efforts of Senator George Mitchell, who introduced him. It is possible that Gore’s rhetorical prowess was even further heightened by the other speakers who preceded him; these included a couple of home-grown politicians, a regional government official and a lawyer, none of whom were exactly riveting. But it’s nonetheless an interesting signal that this event attracted an audience of this calibre, including one government minister and an unannounced appearance by the Deputy Prime Minister.

Since a plurality of the readers of this blog are from the USA, I need to explain that this is one way in which the politics of our two countries fundamentally differ. None of the major political parties doubts the reality of anthropogenic climate change, and indeed there is currently a bit of an auction between them about who takes it most seriously. The ruling Labour Party commissioned a distinguished economist to write the Stern Report, a detailed assessment of the potential economic costs of climate change and of the cost-effectiveness of taking measures to combat it, and gave Al Gore an official position as an advisor on the subject. Gore’s UK apotheosis has been made complete by the announcement that the government is to issue all schools with a copy of his DVD “An Inconvenient Truth”. This announcement was made, in response to the issue of the latest IPCC summary for policy makers (PDF), by David Miliband, the young and undoubtedly very clever environment minister, who is often spoken of as being destined for great things in the future, and has been recently floating some very radical, even brave, notions about personal carbon allowances. The Conservatives, meanwhile, have demonstrated their commitment to alternative energy by their telegenic young leader David Cameron sticking a wind-turbine on top of his Notting Hill house. It’s gesture politics, of course, but an interesting sign of the times. The minority third party, the Liberal Democrats, believe they invented this issue long ago.

What does this mean for the policy environment, particularly as it affects science policy? The government’s Chief Scientific Advisor, Sir David King, has long been a vocal proponent of the need for urgent action on energy and climate. Famously, he went to the USA a couple of years ago to announce that climate change was a bigger threat than terrorism, to the poorly concealed horror of a flock of diplomats and civil servants. But (oddly, one might think), Sir David doesn’t actually directly control the science budget, so it isn’t quite the case that the entire £3.4 billion (i.e., nearly $7 billion) will be redirected to a combination of renewables research and nuclear (which Sir David is also vocally in favour of). Nonetheless, one does get the impression that a wall of money is just about to be thrown at energy research in general, to the extent that it isn’t entirely obvious that the capacity is there to do the research.

Nanotechnology discussion on the American Chemical Society website

I am currently participating in a (ahem…) “blogversation” about nanotechnology on the website run by the publications division of the American Chemical Society. There’s an introduction to the event here, and you can read the first entry here; the conversation has got started around those hoary issues of nanoparticle toxicity and nanohype. Contributors, besides me, include David Berube, Janet Stemwedel, Ted Sargent, and Rudy Baum, Editor in Chief of Chemical and Engineering News.

Playing God

I went to the Avignon nanoethics conference with every intention of giving a blow-by-blow account of the meeting as it happened, but in the end it was so rich and interesting that it took all my attention to listen and contribute. Having got back, it’s the usual rush to finish everything before the holidays. So here’s just one, rather striking, vignette from the meeting.

The issue that always bubbles below the surface when one talks about self-assembly and self-organisation is whether we will be able to make something that could be described as artificial life. In the self-assembly session, this was made very explicit by Mark Bedau, the co-founder of the European Center for Living Technology and a participant in the EU-funded project PACE (Programmable Artificial Cell Evolution), whose aim is to make an entirely synthetic system that shares some of the fundamental characteristics of living organisms (e.g. metabolism, reproduction and evolution). The Harvard chemist George Whitesides (who was sounding more and more the world-weary patrician New Englander) described the chances of this programme being successful as precisely zero.

I sided with Bedau on this, but what was more surprising to me was the reaction of the philosophers and ethicists to this pessimistic conclusion. Jean-Pierre Dupuy, a philosopher who has expressed profound alarm at the implications of the loss of control implied by the idea of exploiting self-organising systems in technology, said that, despite all his worries, he would be deeply disappointed if this conclusion were true. A number of people commented on the fear, which many would surely express, that making synthetic life would be tantamount to “playing God”. One speaker drew on the Jewish traditions connected with the Golem to insist that, in that tradition, the aspiration to make life was not in itself necessarily wrong. And, perhaps even more surprisingly, the bioethicist William Hurlbut, a member of the (US) President’s Council on Bioethics and a prominent Christian bioconservative, also didn’t take a very strong position on the ethics of attempting to make something with the qualities of life. Of course, as we were reminded by the philosopher and historian of science Bernadette Bensaude-Vincent, there have been plenty of times in the past when scientists have proclaimed that they were on the verge of creating life, only for this claim to turn out to be very premature.

Nanoethics conference at Avignon

I’m en route to the South of France, on my way to Avignon, where, under the auspices of a collaboration between the University of Paris and Stanford University, there’s a conference on the “Ethical and Societal Implications of the Nano-Bio-Info-Cogno Convergence”. The aim of the conference is to “explore issues emerging in the application of nanotechnology, biotechnology, information technology, and cognitive science to the spheres of social, economic, and private life, as well as a contribution of ethical concerns to shaping the technological development.” One of the issues that has clearly captured the imagination of a number of the contributors from a more philosophical point of view is the idea of self-assembly, and particularly the implications this has for the degree of control, or otherwise, that we, as technologists, will have over our productions. The notion of a “soft machine” appeals to some observers’ sense of paradox, and opens up a discussion of the connections between the Cartesian idea of a machine, our changing notions of how biological organisms work, and competing ideas of how best to do engineering on the nanoscale. There’s a session devoted to self-assembly, introduced by the philosopher Bernadette Bensaude-Vincent; among the people responding will be me and the Harvard chemist George Whitesides.

The commenters on the last item will be pleased to hear that, rather than flying to Avignon, I’m travelling in comfort on France’s splendidly fast (and, ultimately, nuclear powered) trains.

Driving on sunshine

Can the fossil fuels we use in internal combustion engines be practicably replaced by fuels derived from plant materials – biofuels? This question has, in these times of high oil prices and climate change worries, risen quickly up the agenda. Plants use the sun’s energy to convert carbon dioxide into chemically stored energy in the form of sugar, starch, vegetable oil or cellulose, so if one can economically convert these molecules into convenient fuels like ethanol, one has a route for the sustainable production of fuels for transportation. The sense of excitement and timeliness has even reached academia; my friends in Cambridge University and Imperial College are, as I write, frantically finalising their rival pitches to the oil giant BP, which is planning to spend $500 million on biofuels research over the next 10 years. Today’s issue of Nature has some helpful features (here, this claims to be free access but it doesn’t work for me without a subscription) overviewing the pros and cons.

The advantages of biofuels are obvious. They exploit the energy of the sun, the only renewable and carbon-neutral energy source available, in principle, in sufficient quantities to power our energy-intensive way of life on a worldwide basis. Unlike alternative methods of harnessing the sun’s energy, such as using photovoltaics to generate electricity or to make hydrogen, biofuels are completely compatible with our current transportation infrastructure. Cars and trucks will run on them with little modification, and existing networks of tankers, storage facilities and petrol stations can be used unaltered. It’s easy to see their attractions to those oil companies which, like BP and Shell, have seen that they are going to have to change their ways if they are going to stay in business.

Up to now, I’ve been somewhat sceptical. Plants are, by the standards of photovoltaic cells, very inefficient at converting sunlight into energy; they require inputs of water and fertilizer, and need to be converted into usable biofuels by energy-intensive processes. The world has plenty of land, but the fraction of it available for agriculture is not large, and while this is probably sufficient to provide enough food for the world’s population, the margin is not very comfortable, and is likely to get less so as climate change intensifies. One of the highest profile examples of large-scale biofuel production is provided by the US program to make ethanol from corn, which is only kept afloat by huge subsidies and high protective tariff barriers. In energetic terms, it isn’t even completely clear that the corn-alcohol process produces more energy than it consumes (even advocates of the program claim only that it produces a two-fold return on the energy input).

The Nature article does make clear, though, that there is a much more positive example of a biofuel program, in ethanol produced from Brazilian sugar-cane. Estimates are that it produces an eightfold return on the energy input, and it’s clear that this product, at around 27 cents a litre, is economic at current oil prices. The environmental costs of farming the stuff seem, if not negligible, less extreme than, for example, the destruction of rain-forest for palm oil plantations to produce biodiesel. The problem, as always, is scaling up: finding enough suitable land to make a dent in the world’s huge thirst for transport fuels. Brazil is a big country, but even optimists only predict a doubling of output in the near future, which would still leave it accounting for less than one percent of the world’s demand for petrol.
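A quick bit of arithmetic shows why the difference between a two-fold and an eight-fold energy return matters so much; this is just a restatement of the figures quoted above, not new data.

```python
# What the quoted energy-return-on-input figures imply (figures from the post:
# roughly 2x for US corn ethanol, roughly 8x for Brazilian sugar-cane ethanol).
for crop, eroei in [("corn ethanol (US)", 2.0), ("sugar-cane ethanol (Brazil)", 8.0)]:
    repays_inputs = 1.0 / eroei   # fraction of each unit of fuel that merely repays the energy put in
    net_gain = 1.0 - repays_inputs
    print(f"{crop:>28}: {repays_inputs:.0%} of output repays inputs, {net_gain:.0%} is net gain")
```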

Can there be a technical fix for these problems? This, of course, is the hope behind BP’s investment in research. One key advance would be to find more economical ways of breaking down the tough molecules that make up the woody matter of many plants, cellulose and lignin, into their component sugars, and then into alcohol. This brings the prospect of being able to use not only agricultural waste like corn husks and wheat straw, but also new crops like switch-grass and willow. There seems to be a choice of two methods here – using the same technology that Germany developed in the 1930s and ’40s to convert coal into oil, with high temperatures and special catalysts, or developing new enzymes based on the ones used by the fungi that live on tree stumps. The former is expensive and as yet unproven on large scales.

What has all this got to do with nanotechnology? It is very easy to get excited by the prospect of a nano-enabled hydrogen economy powered by cheap, large-area, unconventional photovoltaics. But we mustn’t forget that our techno-systems have a huge amount of inertia built into them. According to Vaclav Smil, there are more internal combustion engines than people in the USA, so potential solutions to our energy problems which promise less disruption to existing ways of doing things will be more attractive to many people than more technologically sophisticated but disruptive rival approaches.