New routes to solar energy: the UK announces more research cash

The agency primarily responsible for distributing government research money for nanotechnology in the UK, the Engineering and Physical Sciences Research Council, announced a pair of linked programmes today which substantially increase the funding available for research into new, nano-enabled routes for harnessing solar energy. The first of the Nanotechnology Grand Challenges, which form part of the EPSRC’s new nanotechnology strategy, is looking for large-scale, integrated projects exploiting nanotechnology to enable cheap, efficient and scalable ways to harvest solar energy, with an emphasis on new solar cell technology. The other call, Chemical and Biochemical Solar Energy Conversion, is focussed on biological fuel production, photochemical fuel production and the underpinning fundamental science that enables these processes. Between the two calls, around £8 million (~ US $16 million) is on offer in the first stage, with more promised for continuations of the most successful projects.

I wrote a month ago about the various ways in which nanotechnology might make solar energy, which has the potential to supply all the energy needs of the modern industrial world, more economically and practically viable. The oldest of these technologies – the dye sensitised nano-titania cell invented by EPFL’s Michael Grätzel – is now moving towards full production, with the company G24 Innovations having opened a factory in Wales, in partnership with Konarka. Other technologies such as polymer and hybrid solar cells need more work to become commercial.

Using solar energy to create not electricity but fuel – for transportation, for example – is a related area of great promise. Some work is already going on to develop analogues of photosynthetic systems that use light to split water into hydrogen. A truly grand challenge here would be to devise a system for photochemically reducing carbon dioxide. Think of a system in which one took carbon dioxide (perhaps from the atmosphere) and combined it with water, with the aid of a few photons of light, to make, say, methanol, which could be used directly in your internal combustion engine powered car. It’s possible in principle; one just has to find the right catalyst…
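It’s worth writing the chemistry down explicitly. The following is a sketch only – the energy figure is an approximate textbook value (roughly the free energy of methanol combustion, run in reverse) – but it shows why the catalyst is the hard part:

```latex
% Overall photochemical reduction of CO2 to methanol: the reverse of
% methanol combustion, so it stores roughly 700 kJ per mole of methanol.
\mathrm{CO_2} + 2\,\mathrm{H_2O}
  \;\xrightarrow{\;h\nu,\ \mathrm{catalyst}\;}\;
  \mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2}

% Written as a half-reaction, this is a six-electron, six-proton
% reduction of the carbon atom, all of which the catalyst must manage.
\mathrm{CO_2} + 6\,\mathrm{H^+} + 6\,e^-
  \;\longrightarrow\;
  \mathrm{CH_3OH} + \mathrm{H_2O}
```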

More on synthetic biology and nanotechnology

There’s a lot of recent commentary about synthetic biology on Homunculus, the consistently interesting blog of the science writer Philip Ball. There’s lots more detail about the story of the first bacterial genome transplant that I referred to in my last post; his commentary on the story was published last week as a Nature News and Views article (subscription required).

Philip Ball was a participant in a recent symposium organised by the Kavli Foundation, “The merging of bio and nano: towards cyborg cells”. The participants produced an interesting statement: A vision for the convergence of synthetic biology and nanotechnology. The signatories to this statement include some very eminent figures from both synthetic biology and bionanotechnology, including Cees Dekker, Angela Belcher, Steven Chu and John Glass. Although the statement is bullish on the potential of synthetic biology for addressing problems such as renewable energy and medicine, it is considerably more nuanced than the sorts of statements reported by the recent New York Times article.

The case for a linkage between synthetic biology and bionanotechnology is well made at the outset: “Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.” The writers divide the enabling technologies for synthetic biology into hardware and software. For this perspective on synthetic biology, which concentrates on the idea of reprogramming existing cells with synthetic genomes, the crucial hardware is the capability for cheap, accurate DNA synthesis, about which they write: “The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible.” This, of course, also has implications for the use of DNA as a building block for designed nanostructures and devices (see here for an example).

The authors are much more cautious on the software side. “Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, the ability to write new stories is currently beyond our ability – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades.”

The new new thing

It’s fairly clear that nanotechnology is no longer the new new thing. A recent story in Business Week – Nanotech Disappoints in Europe – is not atypical. It takes its lead from the recent difficulties of the UK nanotech company Oxonica, which it describes as emblematic of the nanotechnology sector as a whole: “a story of early promise, huge hype, and dashed hopes.” Meanwhile, in the slightly neophilic world of the think-tanks, one detects the onset of a certain boredom with the subject. For example, Jack Stilgoe writes on the Demos blog “We have had huge fun running around in the nanoworld for the last three years. But there is a sense that, as the term ‘nanotechnology’ becomes less and less useful for describing the diversity of science that is being done, interesting challenges lie elsewhere… But where?”

Where indeed? A strong candidate for the next new new thing is surely synthetic biology. (This will not, of course, be new to regular Soft Machines readers, who will have read about it here two years ago). An article in the New York Times at the weekend gives a good summary of some of the claims. The trigger for the recent prominence of synthetic biology in the news is probably the recent announcement from the Craig Venter Institute of the first bacterial genome transplant. This refers to an advance paper in Science (abstract, subscription required for full article) by John Glass and coworkers. There are some interesting observations on this in a commentary (subscription required) in Science. It’s clear that much remains to be clarified about this experiment: “But the advance remains somewhat mysterious. Glass says he doesn’t fully understand why the genome transplant succeeded, and it’s not clear how applicable their technique will be to other microbes.” The commentary from other scientists is interesting: “Microbial geneticist Antoine Danchin of the Pasteur Institute in Paris calls the experiment “an exceptional technical feat.” Yet, he laments, “many controls are missing.” And that has prevented Glass’s team, as well as independent scientists, from truly understanding how the introduced DNA takes over the host cell.”

The technical challenges of this new field haven’t prevented activists from drawing attention to its potential downsides. Those veterans of anti-nanotechnology campaigning, the ETC group, have issued a report on synthetic biology, Extreme Genetic Engineering, noting that “Today, scientists aren’t just mapping genomes and manipulating genes, they’re building life from scratch – and they’re doing it in the absence of societal debate and regulatory oversight”. Meanwhile, the Royal Society has issued a call for views on the subject.

Looking again at the NY Times article, one can perhaps detect some interesting parallels with the way the earlier nanotechnology debate unfolded. We see, for example, some fairly unrealistic expectations being raised: “‘Grow a house’ is on the to-do list of the M.I.T. Synthetic Biology Working Group, presumably meaning that an acorn might be reprogrammed to generate walls, oak floors and a roof instead of the usual trunk and branches. ‘Take over Mars. And then Venus. And then Earth’ —the last items on this modest agenda.” And just as the radical predictions of nanotechnology were underpinned by what were in my view inappropriate analogies with mechanical engineering, much of the talk in synthetic biology is underpinned by explicit, but as yet unproven, parallels between cell biology and computer science: “Most people in synthetic biology are engineers who have invaded genetics. They have brought with them a vocabulary derived from circuit design and software development that they seek to impose on the softer substance of biology. They talk of modules — meaning networks of genes assembled to perform some standard function — and of “booting up” a cell with new DNA-based instructions, much the way someone gets a computer going.”

It will be interesting to see how the field of synthetic biology develops, and whether it does a better job of steering between overpromised benefits and overdramatised fears than nanotechnology arguably did. Meanwhile, nanotechnology won’t be going away. Even the sceptical Business Week article concluded that better times lay ahead as the focus in commercialising nanotechnology moved from simple applications of nanoparticles to more sophisticated applications of nanoscale devices: “Potentially even more important is the upcoming shift from nanotech materials to applications—especially in health care and pharmaceuticals. These are fields where Europe is historically strong and already has sophisticated business networks.”

The Nottingham nanotechnology and nanoscience centre

Today saw the official opening of the Nottingham nanotechnology and nanoscience centre, which brings together some existing strong research areas across the University. I’ve made the short journey down the motorway from Sheffield to listen to a very high quality programme of talks, with Sir Harry Kroto, co-discoverer of buckminsterfullerene, topping the bill. Also speaking were Don Eigler, from IBM (the originator of perhaps the most iconic image in all nanotechnology, the IBM logo made from individual atoms), Colin Humphreys, from the University of Cambridge, and Sir Fraser Stoddart, from UCLA.

There were some common themes in the first two talks (common, also, with Wade Adams’s talk in Norway described below). Both talked about the great problems of the world, and looked to nanotechnology to solve them. For Colin Humphreys, the solutions to the problems of sustainable energy and clean water are to be found in the material gallium nitride, or more precisely in the compounds of aluminium, indium and gallium nitride which allow one to make not just blue light emitting diodes, but LEDs that can emit light of any wavelength between the infra-red and the deep ultra-violet. Gallium nitride based blue LEDs were invented as recently as 1996 by Shuji Nakamura, but this is already a $4 billion market, and everyone will be familiar with torches and bicycle lights using them.

How can this help the problem of access to clean drinking water? We should remind ourselves that 10% of world child mortality is directly related to poor water quality, and half the hospital beds in the world are occupied by people with water-related diseases. One solution would be to use deep ultraviolet light to sterilise contaminated water. Deep UV works well for sterilisation because biological organisms never developed a tolerance to these wavelengths, which don’t penetrate the atmosphere. UV at a wavelength of 270 nm does the job well, but existing lamps are not practical because they need high voltages, are inefficient, and in some cases contain mercury. AlGaN LEDs work well, and in principle they could be powered by solar cells at 4 V, which might allow every household to sterilise its water supply easily and cheaply. The problem is that the efficiency is still too low to treat flowing water. At blue wavelengths (400 nm) the efficiency is very good, at 70%, but it drops precipitously at shorter wavelengths, and this drop is not yet understood theoretically.

The contribution of solid state lighting to the energy crisis arises from the efficiency of LEDs compared to tungsten light bulbs. People often underestimate the amount of energy used in lighting domestic and commercial buildings. Globally, it accounts for 1,900 megatonnes of CO2; this is 70% of the total emissions from cars, and three times the amount due to aviation. In the UK, it amounts to 20% of electricity generated, and in Thailand, for example, it is even more, at 40%. But tungsten light bulbs, which account for 79% of sales, have an efficiency of only 5%. There is much talk now of banning tungsten light bulbs, but the replacement, fluorescent lights, is not perfect either. Compact fluorescents have an efficiency of 15%, which is an improvement, but what is less well appreciated is that each bulb contains 4 mg of mercury. This would lead to tonnes of mercury ending up in landfills if tungsten bulbs were replaced by compact fluorescents.

Could solid-state lighting do the job? Currently what you can buy are blue LEDs (made from InGaN) which excite a yellow phosphor. The colour balance of these leaves something to be desired, and soon we will see blue or UV LEDs exciting red/green/blue phosphors, which will have a much better colour balance (one could also use a combination of red, green and blue LEDs, but currently green efficiencies are too low). The best efficiency in a commercial white LED is 30% (from Seoul Semiconductor), but the best in the lab (Nichia) is currently 50%. The target is an efficiency of 50-80% at high drive currents, which would put them at a higher efficiency than the current most efficient light source, the sodium lamp, whose familiar orange glow converts electricity at 45% efficiency. This target would make them 10 times more efficient than filament bulbs and 3 times more efficient than compact fluorescents, with no mercury. In the US, replacing 50% of filament bulbs would save 41 GW of power station capacity; in the UK, 100% replacement would save 8 GW. The problem at the moment is cost, but the rapidity of progress in this area makes Humphreys confident that costs will fall dramatically within a few years.
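Those comparison factors follow directly from the efficiencies quoted above; a few lines of arithmetic confirm them (the percentages are the ones reported from the talk, not independent figures):

```python
# Back-of-envelope check of the lighting efficiency comparisons quoted above.
# All figures are the ones reported from the talk, not measurements of mine.
eff = {
    "tungsten filament": 0.05,    # ~5% efficient
    "compact fluorescent": 0.15,  # ~15% efficient
    "sodium lamp": 0.45,          # ~45% efficient
}
target = 0.50  # lower end of the 50-80% target for white LEDs

for name, e in eff.items():
    print(f"target LED vs {name}: {target / e:.1f}x more efficient")

# For the same light output, electrical power scales inversely with
# efficiency, so a 50%-efficient LED draws a tenth of the power of a
# 5%-efficient filament bulb.
```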

Don Eigler also talked about societal challenges, but with a somewhat different emphasis. His talk was entitled “Nanotechnology: the challenge of a new frontier”. The questions he asked were “What challenges do we face as a society in dealing with this new frontier of nanotechnology, and how should we as a society make decisions about a new technology like nanotechnology?”

There are three types of nanotechnology, he said: evolutionary nanotechnology (historically larger technologies that have been shrunk to nanoscale dimensions), revolutionary nanotechnology (entirely new nanometre-scale technologies) and natural nanotechnology (cell biology, offering inspiration for our own technologies). Evolutionary nanotechnologies include semiconductor devices and nanoparticles in cosmetics. Revolutionary nanotechnologies include carbon nanotubes, as potential new logic structures that might supplant silicon, and the IBM millipede data storage system. Natural nanotechnologies include bacterial flagellar motors.

Nanohysteria comes in different varieties too. Type 1 nanohysteria is represented by greed-driven “irrational exuberance”, based on the idea that nanotechnology will change everything very soon, as touted by investment tipsters and consultants who want to take people’s money off them. What’s wrong with this is the absence of critical thought. Type 2 nanohysteria is the opposite – fear-driven irrational paranoia, exemplified by the grey goo scenario of out-of-control self-replicating molecular assemblers or nanobots. What’s wrong with this is, again, the absence of critical thought. Prediction is difficult, but Eigler thinks that self-replicating nanobots are not going to happen any time soon, if ever.

What else do people fear about nanotechnology? Eigler recently met a young person with strong views: that nanotech is scary, that it will harm the biosphere, that it will create new weapons, that it is being driven by greedy individuals and corporations – in summary, that it is not just wrong, it is evil. Where did these ideas come from? If you look on the web, you see talk of superweapons made from molecular assemblers. What you don’t find on the web are statements like “My grandmother is still alive today because nanotechnology saved her life”. Why is this? Nanotechnology has not yet provided a tangible benefit to grandmothers!

Some candidates include gold nanoshell cancer therapy, as developed by Naomi Halas at Rice. This particular therapy may not work in humans, but something similar will. Another example is the work of Sam Stupp at Northwestern, making nanofibers that cause neural progenitor cells to turn into new neurons rather than scar tissue, holding out the hope of regenerative medicine to repair spinal cord damage.

As an example of how easy it is to draw the wrong conclusions, Eigler cited the smallest logic circuit, 12 nm by 17 nm, which he made from carbon monoxide molecules. But carbon monoxide is a deadly poison – shouldn’t we worry about this? Let’s do the sum: 18 CO molecules are needed for one transistor, while each of us breathes in some 2 billion trillion molecules a day – enough, every day, to make 160 million computers.

What could the green side of nanotechnology be? We could have better materials that are lighter, stronger and more easily recyclable, and this will reduce energy consumption. Perhaps we can use nanotechnology to reduce the consumption of natural resources and to help recycling. We can’t yet prove that these benefits will follow, but Eigler believes they are likely.

There is a real risk from nanotechnology if it is used without evaluating the consequences; the widespread introduction of nanoparticulates into the environment would be an example of this. So how do we know if something is safe? We need to think it through, but we can’t guarantee that anything is absolutely safe. The principles should be that we eliminate fantasies, understand the different motivations that people have, and honestly assess risk and benefit. We need informed discussion that is critical, creative, inclusive and respectful. We need to speak with knowledge and respect, and listen with zeal. Scientists have not always been good at this, and we need to get much better. Our best weapons are our traditions of rigorous honesty and our tolerance for diverse beliefs.

Where should I go to study nanotechnology?

The following is a message from my sponsor… or at least, the institution that pays my salary…

What advice should one give to young people who wish to make a career in nanotechnology? It’s a very technical subject, so you won’t generally get very far without a good degree level grounding in the basic, underlying science and technology. There are some places where one can study for a first degree in nanotechnology, but in my opinion it’s better to obtain a good first degree in one of the basic disciplines – whether a pure science, like physics or chemistry, or an engineering specialism, like electronic engineering or materials science. Then one can broaden one’s education at the postgraduate level, to get the essential interdisciplinary skills that are vital to make progress in nanotechnology. Finally, of course, one usually needs the hands-on experience of research that most people obtain through the apprenticeship of a PhD.

In the UK, the first comprehensive, Masters-level course in Nanoscale Science and Technology was developed jointly by the Universities of Leeds and Sheffield (I was one of the founders of the course). As the subject has developed and the course has flourished, it has been expanded to offer a range of different options – the Nanotechnology Education Portfolio – nanofolio. Currently, we offer MSc courses in Nanoscale Science and Technology (the original, covering the whole gamut of nanotechnology from the soft to the hard), Nanoelectronics and nanomechanics, Nanomaterials for nanoengineering and Bionanotechnology.

The course website also has a general section of resources that we hope will be useful to anybody interested in nanotechnology, beginning with the all-important question “What is nanotechnology?” Many more resources, including images and videos, will be added to the site over the coming months.

Nanoscale swimmers

If you were able to make a nanoscale submarine to fulfil the classic “Fantastic Voyage” scenario of swimming through the bloodstream, how would you power and steer it? As readers of my book “Soft Machines” will know, our intuitions are very unreliable guides to the wet nanoscale world, and the design principles that would be appropriate on the human scale simply won’t work on the nanoscale. Swimming is a good example: on small scales water does not behave as the free-flowing liquid we are used to on the human scale, because viscosity becomes much more important. To get a feel for what it would be like to try to swim on the nanoscale, one has to imagine swimming in the most viscous molasses. In my group we’ve been doing some experiments to demonstrate the realisation of one scheme to make a nanoscale object swim, the results of which are summarised in this preprint (PDF), “Self-motile colloidal particles: from directed propulsion to random walk”.

The brilliantly simple idea underlying these experiments was thought up by my colleague and co-author, Ramin Golestanian, together with his fellow theoretical physicists Tannie Liverpool and Armand Ajdari, and was analysed theoretically in a recent paper in Physical Review Letters, “Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products” (abstract here, subscription required for full paper). If a particle has a patch of catalyst on one side, and that catalyst drives a reaction that produces more product molecules than it consumes in fuel molecules, then the particle finds itself in a solution that is more concentrated on one side than the other. This leads to an osmotic pressure gradient, which in turn results in a force that pushes the particle along.

Jon Howse, a postdoc working in my group, has made an experimental system that realises this theoretical scheme. He coated micron-sized polystyrene particles, on one side only, with platinum, which catalyses the reaction by which hydrogen peroxide is broken down into water and oxygen. For every two hydrogen peroxide molecules that take part in the reaction, two water molecules and one oxygen molecule result. Using optical microscopy, he tracked the motion of particles in four different situations. In three of these – uncoated control particles in both water and hydrogen peroxide solution, and coated particles in pure water – he found identical results: the expected Brownian motion of a micron-sized particle. But when the coated particles were put in hydrogen peroxide, they clearly moved further and faster.

Detailed analysis of the particle motion showed that, in addition to the Brownian motion that all micron-sized particles are subject to, the propelled particles moved with a velocity that depended on the concentration of the hydrogen peroxide fuel – the more fuel present, the faster they went. But Brownian motion is still present, and it has an important effect even on the fastest propelled particles. Brownian motion makes particles rotate randomly as well as jiggle around, so the propelled particles don’t travel in straight lines. In fact, at longer times the effect of the random rotation is to make the particles revert to a random walk, albeit one in which the step length is essentially the propulsion velocity multiplied by the characteristic time for rotational diffusion. This kind of motion has an interesting analogy with the way bacteria swim. Bacteria, if they are trying to swim towards food, don’t simply swing the rudder round and propel themselves directly towards it. Like our particles, they actually perform a kind of random walk in which stretches of straight-line motion are interrupted by episodes in which they change direction – so-called run-and-tumble motion. Counterintuitively, this seems to be a better strategy for getting around in the nanoscale world, in which the random jostling of Brownian motion is unavoidable. What the bacteria do is change the length of time for which they move in a straight line according to whether they are getting closer to or further away from their food source. If we could do the same trick in our synthetic system – changing the length of the run time – then that would suggest a strategy for steering our nanoscale submarines, as well as propelling them.
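To make the crossover from directed propulsion to a random walk concrete, here is a minimal Brownian dynamics sketch of a self-propelled sphere in two dimensions. It is an illustration, not the analysis from the preprint: the diffusion coefficients are rough Stokes–Einstein estimates for a micron-sized sphere in water, and the propulsion speed is a hypothetical round number.

```python
import numpy as np

# Minimal 2D Brownian dynamics of a self-propelled micron-sized sphere.
# Parameter values are illustrative order-of-magnitude estimates only.
D = 0.4     # translational diffusion coefficient, um^2/s
D_r = 1.3   # rotational diffusion coefficient, rad^2/s (tau_R ~ 1/D_r)
v = 3.0     # propulsion speed along the catalytic axis, um/s (hypothetical)
dt = 0.01   # time step, s
n_steps = 100_000
rng = np.random.default_rng(0)

x = np.zeros((n_steps, 2))  # particle position, um
theta = 0.0                 # orientation of the propulsion axis, rad
for i in range(1, n_steps):
    # the orientation diffuses randomly, so the "rudder" is never fixed
    theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()
    drift = v * dt * np.array([np.cos(theta), np.sin(theta)])
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(2)
    x[i] = x[i - 1] + drift + noise

# At times short compared with 1/D_r the motion is nearly ballistic; at
# long times it reverts to a random walk with step length ~ v/D_r and an
# enhanced effective diffusion coefficient (in 2D): D_eff = D + v^2/(2*D_r).
print("predicted D_eff:", D + v**2 / (2 * D_r), "um^2/s")

# Very noisy single-run estimate from the endpoint displacement, MSD/(4t):
t_total = n_steps * dt
print("single-run estimate:", np.sum((x[-1] - x[0]) ** 2) / (4 * t_total))
```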

Brain chips

There can be few more potent ideas in futurology and science fiction than that of the brain chip – a direct interface between the biological information processing systems of the brain and nervous system and the artificial information processing systems of microprocessors and silicon electronics. It’s an idea that underlies science fiction notions of “jacking in” to cyberspace, or uploading one’s brain, but it also provides hope to the severely disabled that lost functions and senses might be restored. It’s one of the central notions in the idea of human enhancement; perhaps through a brain chip one might increase one’s cognitive power in some way, or have direct access to massive banks of data. Because of the potency of the idea, even the crudest scientific developments tend to be reported in the most breathless terms. Stripping away some of the wishful thinking, what are the real prospects for this kind of technology?

The basic operations of the nervous system are pretty well understood, even if the complexities of higher-level information processing remain obscure, and the problem of consciousness is a truly deep mystery. The basic units of the nervous system are the highly specialised, excitable cells called neurons. Information is carried long distances by the propagation of pulses of voltage along long extensions of the cell called axons, and transferred between different neurons at junctions called synapses. Although the pulses carrying information are electrical in character, they are very different from the electrical signals carried in wires or through semiconductor devices. They arise from the fact that the contents of the cell are kept out of equilibrium with their surroundings by pumps which selectively transport charged ions across the cell membrane, resulting in a voltage across the membrane. This voltage can relax when channels in the membrane, which are triggered by changes in voltage, open up. The information-carrying impulse is actually a shock wave of reduced membrane potential, enabled by transport of ions through the membrane.
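The classic quantitative description of this mechanism is the Hodgkin–Huxley model (the same Hodgkin and Huxley mentioned below). For readers who like to see the mechanism in action, here is a minimal sketch of the standard space-clamped model with its textbook squid-axon parameters; it is a standard exercise, nothing specific to the devices discussed in this post, and simply reproduces the voltage spike fired when a current is injected:

```python
import numpy as np

# Minimal space-clamped Hodgkin-Huxley model, textbook squid-axon values.
# Voltages in mV, time in ms, conductances in mS/cm^2, currents in uA/cm^2.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the Na (m, h) and K (n) gates
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                   # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting initial conditions
peak = V
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0  # injected current after 5 ms
    I_ion = (gNa * m**3 * h * (V - ENa)       # sodium current
             + gK * n**4 * (V - EK)           # potassium current
             + gL * (V - EL))                 # leak current
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    peak = max(peak, V)

print(f"peak membrane potential: {peak:.1f} mV")  # spike up to ~+40 mV
```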

To find out what is going on inside a neuron, one needs to be able to measure the electrochemical potential across the membrane. Classically, this is done by inserting an electrochemical electrode into the interior of the nerve cell. The original work, carried out by Hodgkin, Huxley and others in the 1950s, used squid neurons, because they are particularly large and easy to handle. So, in principle, one could get a readout of the state of a human brain by measuring the potential at a representative series of points in each of its neurons. The problem, of course, is that there are a phenomenal number of neurons to be studied – around 20 billion in a human brain. Current technology has managed to miniaturise electrodes and pack them in quite dense arrays, allowing the simultaneous study of many neurons. A recent paper (Custom-designed high-density conformal planar multielectrode arrays for brain slice electrophysiology, PDF) from Ted Berger’s group at the University of Southern California shows a good example of the state of the art – this has electrodes 28 µm in diameter, separated by 50 µm, in an array of 64 electrodes. These electrodes can both read the state of a neuron and stimulate it. This kind of electrode array forms the basis of brain interfaces that are close to clinical trials – for example, the BrainGate product.

In a rather different class from these direct but invasive probes of nervous system activity at the single neuron level, there are some powerful but indirect measures of brain activity, such as functional magnetic resonance imaging and positron emission tomography. These don’t directly measure the electrical activity of neurons, either individually or in groups; instead they rely on the fact that thinking is (literally) hard work and locally raises the rate of metabolism. Functional MRI and PET allow one to localise nervous activity to within a few cubic millimetres, which is hugely revealing in terms of identifying which parts of the brain are involved in which kinds of mental activity, but this remains a long way from the goal of unpicking the brain’s activity at the level of individual neurons.

There is another approach that does probe activity at the single neuron level, but doesn’t require the invasive procedure of inserting an electrode into the nerve itself: the neuron-silicon transistors developed in particular by Peter Fromherz at the Max Planck Institute for Biochemistry. These really are nerve chips, in that there is a direct interface between neurons and silicon microelectronics of the sort that can be highly miniaturised and integrated. On the other hand, these methods are currently restricted to two dimensions, and require careful control of the growing medium that seems to rule out, or at least present big problems for, in-vivo use.

The central ingredient of this approach is a field effect transistor which is gated by the excitation of a nerve cell in contact with it (i.e., the current passed between the source and drain contacts of the transistor depends strongly on the voltage state of the membrane in proximity to the insulating gate dielectric layer). This provides a read-out of the state of a neuron; input to the neurons can also be made by capacitors, which can be made on the same chip. The basic idea was established 10 years ago – see, for example, Two-Way Silicon-Neuron Interface by Electrical Induction. The strength of this approach is that it is entirely compatible with the powerful methods of miniaturisation and integration of CMOS planar electronics. In more recent work, individual mammalian cells have been probed – “Signal Transmission from Individual Mammalian Nerve Cell to Field-Effect Transistor” (Small, 1 p 206 (2004), subscription required) – and an integrated circuit with 16384 probes, capable of probing a neural network with a resolution of 7.8 µm, has been built: “Electrical imaging of neuronal activity by multi-transistor-array (MTA) recording at 7.8 µm resolution” (abstract, subscription required for full article).

Fromherz’s group have demonstrated two types of hybrid silicon/neuron circuits (see, for example, this review “Electrical Interfacing of Nerve Cells and Semiconductor Chips”, abstract, subscription required for full article). One circuit is a prototype for a neural prosthesis – an input from a neuron is read by the silicon electronics, which does some information processing and then outputs a signal to another neuron. Another, inverse, circuit is a prototype of a neural memory on a chip. Here there’s an input from silicon to a neuron, which is connected to another neuron by a synapse. This second neuron makes its output to silicon. This allows one to use the basic mechanism of neural memory – the fact that the strength of the connection at the synapse can be modified by the type of signals it has transmitted in the past – in conjunction with silicon electronics.

This is all very exciting, but Fromherz cautiously writes: “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.” Among the practical problems are the fact that it seems difficult to extend the method into in-vivo applications, it is restricted to two dimensions, and the spatial resolution is still quite large.

Pushing down to smaller sizes is, of course, the province of nanotechnology, and there are a couple of interesting and suggestive recent papers which suggest directions that this might go in the future.

Charles Lieber at Harvard has taken the basic idea of the neuron-gated field effect transistor and executed it using FETs made from silicon nanowires. A paper published last year in Science – Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays (abstract, subscription needed for full article) – demonstrated that this method permits the excitation and detection of signals from a single neuron with a resolution of 20 nm. This is enough to follow the progress of a nerve impulse along an axon, giving a picture of what’s going on inside a living neuron with unprecedented resolution. But it’s still restricted to systems in two dimensions, and it only works when one has cultured the neurons one is studying.

Is there any prospect, then, of mapping out in a non-invasive way the activity of a living brain at the level of single neurons? This still looks a long way off. A paper from the group of Rodolfo Llinas at the NYU School of Medicine makes an ambitious proposal. The paper – Neuro-vascular central nervous recording/stimulating system: Using nanotechnology probes (Journal of Nanoparticle Research (2005) 7: 111–127, subscription only) – points out that if one could detect neural activity using probes within the capillaries that supply oxygen and nutrients to the brain’s neurons, one would be able to reach right into the brain with minimal disturbance. They have demonstrated the principle in-vitro using a 0.6 µm platinum electrode inserted into one of the capillaries supplying the neurons in the spinal cord. Their proposal is to further miniaturise the probe using 200 nm diameter polymer nanowires, and they further suggest making the probe steerable using electrically stimulated shape changes – “We are developing a steerable form of the conducting polymer nanowires. This would allow us to steer the nanowire-probe selectively into desired blood vessels, thus creating the first true steerable nano-endoscope.” Of course, even one steerable nano-endoscope is still a long way from sampling a significant fraction of the 25 km of capillaries that service the brain.

So, in some senses the brain chip is already with us. But there’s a continuum of complexity and sophistication in such devices, and we’re still a long way from the science fiction vision of brain downloading. In the sense of creating an interface between the brain and the world, that is clearly possible now and has in some form been realised. Hybrid structures which combine the information processing capabilities of silicon electronics with nerve cells cultured outside the body are very close. But a full, two-way integration of the brain and artificial information processing systems remains a long way off.

Integrating nanosensors and microelectronics

One of the most talked-about near-term applications of nanotechnology is in nanosensors – devices which can detect the presence of specific molecules at very low concentrations. There are some obvious applications in medicine; one can imagine tiny sensors implanted in one’s body, which continuously monitor the concentration of critical biochemicals, or the presence of toxins and pathogens, allowing immediate corrective action to be taken. A paper in this week’s edition of Nature (editor’s summary here, subscription required for full article) reports an important step forward – a nanosensor made using a process that is compatible with the standard methods for making integrated circuits (CMOS). This makes it much easier to imagine putting these nanosensors into production and incorporating them into reliable, easy to use systems.

The paper comes from Mark Reed’s group at Yale. The fundamental principle is not new: one applies a voltage across a very thin semiconductor nanowire. If molecules adsorb at the interface between the nanowire and the solution, there is a change in electrical charge at the interface. This creates an electric field which changes the electrical conductivity of the nanowire; the amount of current flowing through the wire then tells you how many molecules have stuck to the surface. By coating the surface with molecules that specifically bind the chemical one wants to look for, one can make the sensor specific for that chemical. Clearly, the thinner the wire, the larger the effect of the surface in proportion – hence the need to use nanowires to make very sensitive sensors.
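A rough carrier-counting estimate shows why this scaling matters. The numbers below are entirely hypothetical, chosen only to illustrate the principle – they are not taken from the Reed paper:

```python
# Why thinner wires are more sensitive: a carrier-counting estimate with
# hypothetical numbers (not from the paper). In the simplest charge-control
# picture, each adsorbed elementary charge adds or removes roughly one
# carrier, so the fractional conductance change is N_adsorbed / N_carriers.
wire_width = 50e-9       # m, same order as the device described below
wire_thickness = 25e-9   # m
wire_length = 2e-6       # m, assumed
doping = 1e24            # carriers per m^3 (i.e. 1e18 per cm^3), assumed

n_carriers = doping * wire_width * wire_thickness * wire_length
print(f"carriers in the wire: {n_carriers:.0f}")  # only ~2500

n_adsorbed = 100  # hypothetical number of charged molecules on the surface
print(f"fractional conductance change: {n_adsorbed / n_carriers:.1%}")  # ~4%
```

With so few carriers in the wire, a hundred bound molecules produce a change of a few per cent in the current, which is easily measurable; the same hundred molecules on a micron-thick wire would be invisible.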

In the past, though, such nanowire sensors have been made by chemical processes, and then painstakingly wired up to the necessary micro-circuit. What the Reed group has done is devise a way of making the nanowire in situ on the same silicon wafer that is used to make the rest of the circuitry, using the standard techniques that are used to make microprocessors. This makes it possible to envisage scaling up production of these sensors to something like a commercial scale, and integrating them into a complete electronic system.

How sensitive are these devices? In a test case, using a very well known protein-receptor interaction, they were able to detect a specific protein at a concentration of 10 fM – that translates to 6 billion molecules per litre. As expected, small sensors are more sensitive than large ones; a typical small sensor had a nanowire 50 nm wide and 25 nm thick. From the published micrograph, the total size of the sensor is about 20 microns by 45 microns.
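The conversion behind that figure is just Avogadro’s number, and it is worth doing because it shows how few molecules a realistically small sample actually contains:

```python
# Converting the quoted detection limit of 10 femtomolar into molecule counts.
N_A = 6.022e23   # Avogadro's number, molecules per mole
conc = 10e-15    # 10 fM, in mol per litre

per_litre = conc * N_A
print(f"{per_litre:.1e} molecules per litre")  # ~6.0e9, i.e. 6 billion

# 6 billion per litre sounds like a lot, but a microlitre sample, a more
# realistic volume for an implanted sensor, holds only ~6,000 molecules.
print(f"{per_litre * 1e-6:.0f} molecules per microlitre")
```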

The pharmaceutical nanofactory

Drug delivery is becoming one of the most often cited applications of nanotechnology in the medical arena. For the kind of very toxic molecules that are used in cancer therapy, for example, substantial increases in effectiveness, and reductions in side-effects, can be obtained by wrapping the molecule in a protective wrapper – a liposome, for example – which isolates it from the body until it reaches its target. Drug delivery systems of this kind are already in clinical use, as I discussed here. But what if, instead of making these drugs in a pharmaceutical factory and wrapping them in a nanoscale container for injection into the body, you put the factory in the delivery device, and synthesised the drug when it was needed, where it was needed, inside the body? This intriguing possibility is discussed in a commentary (subscription probably required) in the January issue of Nature Nanotechnology. The article is itself based on a discussion held at a National Academies Keck Futures Initiative Conference, which is summarised here.

One of the reasons for wanting to do this is to be able to make drug molecules that aren’t stable enough to be synthesised in the usual way. In a related vein, such a medical nanofactory might be used to help the body dispose of molecules it can’t otherwise process – one example the authors give is phenylketonuria, a relatively common condition in which the amino acid phenylalanine, instead of being converted to tyrosine, is converted to phenylpyruvic acid, whose accumulation causes incurable brain damage.

What might one need to achieve this goal? The first requirement is a container to separate the chemical machinery from the body; the most likely candidates are probably polymersomes, robust spherical containers self-assembled from block copolymers. The other requirements are perhaps less easy to fulfil: one needs ways of getting chemicals in and out of the nanofactory, sensing functions on the outside to tell it when to start production, the apparatus to do the chemistry (perhaps a system of enzymes or other catalysts), a way of targeting the nanofactory to where it is needed, and finally a means of ensuring that it can be safely disposed of when it has done its work. Cell biology suggests ways to approach some of these requirements; for example, one can imagine analogues of the pores and channels which transport molecules through cell membranes. None of this will be easy, but the authors suggest that it would constitute “a platform technology for a variety of therapeutic approaches”.

Playing God

I went to the Avignon nanoethics conference with every intention of giving a blow-by-blow account of the meeting as it happened, but in the end it was so rich and interesting that it took all my attention to listen and contribute. Having got back, it’s the usual rush to finish everything before the holidays. So here’s just one, rather striking, vignette from the meeting.

The issue that always bubbles below the surface when one talks about self-assembly and self-organisation is whether we will be able to make something that could be described as artificial life. In the self-assembly session, this was made very explicit by Mark Bedau, the co-founder of the European Center for Living Technology and a participant in the EU-funded project PACE (Programmable Artificial Cell Evolution), whose aim is to make an entirely synthetic system that shares some of the fundamental characteristics of living organisms (e.g. metabolism, reproduction and evolution). The Harvard chemist George Whitesides (who was sounding more and more the world-weary patrician New Englander) described the chances of this programme being successful as precisely zero.

I sided with Bedau on this, but what was more surprising to me was the reaction of the philosophers and ethicists to this pessimistic conclusion. Jean-Pierre Dupuy, a philosopher who has expressed profound alarm at the implications of loss of control implied by the idea of exploiting self-organising systems in technology, said that, despite all his worries, he would be deeply disappointed if this conclusion was true. A number of people commented on the obvious fear that people would express that making synthetic life would be tantamount to “playing God”. One speaker talked about the Jewish traditions connected with the Golem to insist that in that tradition the aspiration to make life was by itself not necessarily wrong. And, perhaps even more surprisingly, the bioethicist William Hurlbut, a member of the (US) President’s Council on Bioethics and a prominent Christian bioconservative, also didn’t take a very strong position on the ethics of attempting to make something with the qualities of life. Of course, as we were reminded by the philosopher and historian of science Bernadette Bensaude-Vincent, there have been plenty of times in the past when scientists have proclaimed that they were on the verge of creating life, only for this claim to turn out to be very premature.