Forthcoming nano events in Sheffield

A couple of forthcoming events might interest nano-enthusiasts at a loose end in South Yorkshire in the next few weeks. Next Monday at 7pm, there’s a public lecture as part of National Science Week in the Crucible Theatre, called “A robot in the blood”. In it, my colleagues Tony Ryan and Noel Sharkey will discuss what a real medical nanobot might look like. Both are accomplished public performers – Tony Ryan is a chemist (with whom I collaborate extensively) who gave the Royal Institution Christmas lectures a couple of years ago, and Noel Sharkey is an engineer and roboticist who regularly appears in the TV program “Robot Wars”.

Looking further ahead, on Monday April 3rd there is a one-day meeting about “Nanotechnology in Society: The wider issues”. This will involve talks from commentators on nanotechnology from different viewpoints, followed by a debate. Speakers include Olaf Bayer, from the campaigning group Corporate Watch, Jack Stilgoe, from the public policy think tank Demos, Stephen Wood, co-author (with me and Alison Geldart) of the Economic and Social Research Council report “The Social and Economic Challenges of Nanotechnology”, and Rob Doubleday, a social scientist working in the Cambridge Nanoscience Centre. The day is primarily intended for the students of our Masters course in Nanoscale Science and Technology, but anyone interested is welcome to attend; please register in advance as described here.

How much should we worry about bionanotechnology?

We should be very worried indeed about bionanotechnology, according to Alan Goldstein, a biomaterials scientist from Alfred University, who has written a long article called “I, Nanobot” on this theme in the online magazine Salon.com. According to this article, we are stumbling into creating a new form of life, which is, naturally, out of our control. “And Prometheus has returned. His new screen name is nanobiotechnology.” I think that some very serious ethical issues will be raised by bionanotechnology and synthetic biology as they develop. But this article is not a good start to the discussion; when you cut through Goldstein’s overwrought and overheated writing, quite a lot of what he says is just wrong.

Goldstein makes a few interesting and worthwhile points. Life isn’t just about information; you have to have metabolism too. A virus isn’t truly alive, because it consists only of information – it has to borrow a metabolism from the host it parasitises in order to reproduce. And our familiarity with one form of life – our form, based on DNA for information storage, proteins for metabolic function, and RNA to intercede between information and metabolism – means that we’re too unimaginative about conceiving entirely alien types of life. But the examples he gives of potentially novel, man-made forms of life reveal some very deep misconceptions about how life itself, at its most abstract, works.

I don’t think Goldstein really understands the distinction between equilibrium self-assembly, by which lipid molecules form vesicles, for example, and the fundamentally out-of-equilibrium character of the self-organisation characteristic of living things. I am literally not the same person I was when I was twenty; living organisms are constantly turning over the molecules they are made from; the patterns persist, but the molecules that make up the pattern are constantly changing. So his notion that if we make an anti-cancer drug delivery device with an antibody that targets a certain molecule on a cell wall, then that device will stay stuck there through the lifetime of the organism, and if it finds its way to a germ cell it will be passed down from generation to generation like a retrovirus, is completely implausible. The molecule that it’s stuck to will soon be turned over, and the device itself will be similarly transient. It’s because the device lacks a way to store the information that would be needed to continually regenerate itself that it can’t be considered in any sensible way living.
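The transience argument can be made quantitative with a toy model. The numbers here are invented purely for illustration (a two-day half-life for the anchor molecule is an assumption, not a measured figure); the point is only the exponential decay that any first-order turnover produces:

```python
import math

# Toy model: the membrane molecule the device is stuck to is itself turned
# over with first-order kinetics. The two-day half-life is an invented,
# purely illustrative number.
half_life_days = 2.0
k = math.log(2) / half_life_days  # first-order rate constant

def fraction_still_bound(t_days: float) -> float:
    """Fraction of devices whose anchor molecule has not yet been replaced."""
    return math.exp(-k * t_days)

for t in (0, 2, 10, 30):
    print(f"day {t:2d}: {fraction_still_bound(t):.5f} of devices still anchored")
```

Within a month, essentially none of the devices remain attached; nothing about them persists on the timescale of a lifetime, let alone across generations.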

If rogue, powered vesicles lodging in our sperm and egg cells aren’t scary enough, Goldstein next invokes the possibility of meddling with the spark of life itself – electricity. “But the moment we close that nano-switch and allow electron current to flow between living and nonliving matter, we open the nano-door to new forms of living chemistry — shattering the ‘carbon barrier.’ This is, without doubt, the most momentous scientific development since the invention of nuclear weapons.” This sounds serious, but it seems to be founded on a misconception of how biology uses electricity. Our cells burn sugar, Goldstein says, which “yields high-energy electrons that are the anima of the living state.” Again, this is highly misleading. The energy currency of biology isn’t electricity, it’s chemistry – specifically, it’s the energy-carrying molecule ATP. And when electrical signals are transmitted, through our nerves, or to make our heart work, it isn’t electrons that are moving, it’s ions. Goldstein makes a big deal out of the idea of a Biomolecule-to-Material interface between a nanofabricated pacemaker and the biological pacemaker cells of the heart. “A nanofabricated pacemaker with a true BTM interface will feed electrons from an implanted nanoscale device directly into electron-conducting biomolecules that are naturally embedded in the membrane of the pacemaker cells. There will be no noise across this type of interface. Electrons will only flow if the living and nonliving materials are hard-wired together. In this sense, the system can be said to have functional self-awareness: Each side of the BTM interface has an operational knowledge of the other.” This sounds like a profound and disturbing blurring of the line between the artificial and the biological. The only trouble is, it’s based on a simple error.
Pacemaker cells don’t have electron-conducting biomolecules embedded in their membranes; the membrane potentials are set up and relaxed by the flow of ions through ion channels. There can be no direct interface of the kind that Goldstein describes. Of course, we can and do make artificial interfaces between organisms and artefacts – the artificial pacemakers that Goldstein mentions are one example, and cochlear implants are another. The increasing use of this kind of interface between artefacts and human beings does already raise ethical and philosophical issues, but discussion of these isn’t helped by this kind of mysticism built on misconception.

In an attempt to find an abstract definition of life, Goldstein revives a hoary old error about the relationship between the second law of thermodynamics and life: “The second law of thermodynamics tells us that all natural systems move spontaneously toward maximum entropy. By literally assembling itself from thin air, biological life appears to be the lone exception to this law.” As I spent several lectures explaining to my first year physics students last semester, what the second law of thermodynamics says is that isolated systems tend to maximum entropy. Systems that can exchange energy with their surroundings are bound only by the weaker constraint that as they change, the total entropy of the universe must not decrease. If a lake freezes, the entropy of the water decreases, but as the ice forms it expels heat, which raises the entropy of its surroundings by at least as much as its own entropy decreases. Biology is no different, trading local decreases of entropy for global increases. Goldstein does at least concede this point, noting that “geodes are not alive”, but he then goes on to say that “nanomachines could even be designed to use self-assembly to replicate”. This statement, at least, is half-true; self-assembly is one of the most important design principles used by biology and it’s increasingly being exploited in nanotechnology too. But self-assembly is not, in itself, biology – it’s a tool used by biology. A system that is organised purely by equilibrium self-assembly is moving towards thermodynamic equilibrium, and things that are at equilibrium are dead.
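The freezing-lake bookkeeping is easy to check with round numbers. Assuming textbook-ish values – a latent heat of fusion of about 334 J/g and surrounding air at -10 °C – the entropy lost by the water is more than repaid by the entropy gained by the colder surroundings:

```python
# Entropy bookkeeping for a lake freezing on a cold day, per gram of water.
# Round illustrative values: latent heat of fusion ~334 J/g, freezing point
# 273 K, surrounding air at 263 K (-10 C).
L_FUSION = 334.0   # J/g, latent heat of fusion of water
T_MELT = 273.0     # K, temperature at which the water freezes
T_AIR = 263.0      # K, temperature of the surroundings

dS_water = -L_FUSION / T_MELT        # water ordered into ice: its entropy falls
dS_surroundings = L_FUSION / T_AIR   # expelled heat raises the air's entropy

dS_total = dS_water + dS_surroundings
print(f"dS_water        = {dS_water:+.3f} J/(g K)")
print(f"dS_surroundings = {dS_surroundings:+.3f} J/(g K)")
print(f"dS_total        = {dS_total:+.3f} J/(g K)")
```

Because the heat is dumped into surroundings colder than the freezing point, the total entropy change comes out positive, exactly as the second law requires.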

The problem at the heart of this article is that in insisting that life is not about DNA, but metabolism, Goldstein has thrown the baby out with the bathwater. Life isn’t just about information, but it needs information in order to be able to replicate, and most centrally, it needs some way of storing information in order to evolve. It’s true that that information could be carried in other vehicles than DNA, and it need not necessarily be encoded by a sequence of monomers in a macromolecule. I believe that it might in principle be possible in the future to build an artificial system that does fulfill some general definition of life. I agree that this would constitute a dramatic scientific development that would have far-reaching implications that should be discussed well in advance. But I don’t think it’s doing anyone a service to overstate the significance of the developments in nanobiotechnology that we are seeing at the moment, and I think that scientists commenting on these issues do have some obligation to maintain some standards of scientific accuracy.

Taking the high road to large scale solar power

In principle there’s more than enough sunlight falling on the earth to meet all our energy needs in a sustainable way, but the prospects for large scale solar energy are dimmed by a dilemma. We have very efficient solar cells made from conventional semiconductors, but they are too expensive and difficult to manufacture in very large areas to make a big dent in our energy needs. On the other hand, there are prospects for unconventional solar cells – Graetzel cells or polymer photovoltaics – which can perhaps be made cheaply in large areas, but whose efficiencies and lifetimes are too low. In an article in this month’s Nature Materials (abstract, subscription required for full article, see also this press release), Imperial College’s Keith Barnham suggests a way out of the dilemma.

The efficiencies of the best solar cells available today exceed 30%, and there is every reason to suppose that this figure can be substantially increased with more research. These solar cells are based, not on crystalline silicon, like standard solar cell modules, but on carefully nanostructured compound semiconductors like gallium arsenide (III-V semiconductors, in the jargon). By building up complex layered structures it is possible efficiently to harvest the energy of light of all wavelengths. The problem is that these solar cells are expensive to make, relying on sophisticated techniques for building up different semiconductor layers, like molecular beam epitaxy, and currently are generally only used for applications where cost doesn’t matter, such as on satellites. Barnham argues that the cost disadvantage can be overcome by combining these efficient solar cells with low-cost systems for concentrating sunlight – in his words “our answer to this particular problem is ‘Smart Windows’, which use small, transparent plastic lenses that track the sun and act as effective blinds for the direct sunlight, when combined with innovative light collectors and small 3rd-generation cells,” and he adds “Even in London a system like this would enable a typical office behind a south-facing wall to be electrically self-sufficient.”

Even with conventional technologies, Barnham calculates that if all roofs and south-facing walls were covered in solar cells this would represent three times the total generating capacity of the UK’s current nuclear program – that is, 36 GW. This is a really substantial dent in the energy needs of the UK, and if we believe Barnham’s calculation that his system would deliver about three times as much energy as conventional solar cells, it amounts to pretty much a complete solution to our energy problems. What is absent from the article, though, is an estimate of the total production capacity that’s likely to be achievable; Barnham merely observes that the UK semiconductor industry has substantial spare capacity after the telecoms downturn. This is the missing calculation that needs to be done before we can accept Barnham’s optimism.
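The arithmetic behind those figures is simple enough to lay out explicitly. The 12 GW nuclear capacity is merely inferred from the article’s “three times … 36 GW”, and the factor-of-three gain for the concentrator system is Barnham’s own claim, not an independent estimate:

```python
# Back-of-envelope restatement of the figures quoted above. All numbers are
# taken from, or inferred from, Barnham's article, not independently checked.
conventional_pv_gw = 36.0                 # conventional cells on all roofs and south-facing walls
uk_nuclear_gw = conventional_pv_gw / 3.0  # implied current UK nuclear capacity
barnham_gain = 3.0                        # claimed gain of the "Smart Windows" concentrator system
smart_window_gw = barnham_gain * conventional_pv_gw

print(f"implied UK nuclear capacity:    {uk_nuclear_gw:.0f} GW")
print(f"conventional PV on roofs/walls: {conventional_pv_gw:.0f} GW")
print(f"concentrator system equivalent: {smart_window_gw:.0f} GW")
```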

Nanoscience in the European Research Area

Most research in Europe, in nanotechnology or any other field, is not funded by the European Union. Somewhere between 90% and 95% of research funding comes from national research agencies, working with their own procedures, to their own national priorities. This bothers some people, who see this as yet another example of the way in which Europe doesn’t get its act together and thus fails to live up to its potential. In research, the European Commission fears that, compared to rivals in the USA or the far east, European efforts suffer from fragmentation and duplication. Their solution is the concept of the “European Research Area”, in which different national funding agencies work to create a joint approach to funding, as well as doing what they can to ensure free movement of researchers and ideas across the continent. As part of this initiative, national research agencies have come together to form thematic networks. Nanoscience has such a network, and it is meeting this week in Amsterdam to finalise the details of a joint funding call on the theme of singly addressable nanoscale objects.

Another way of looking at the issue of the many different approaches used in funding nanoscience across Europe is that this gives us a laboratory of different approaches, a kind of controlled experiment in science funding models. Yesterday’s meeting was devoted to a series of overviews of the national nanoscience landscape in each country. This was instructive, and the contrasts were striking; among the large countries one had the German approach, with major groups across the country being supported with really substantial infrastructure. The French had the most logical and comprehensive overall plan, while the talk describing the British effort (given by me) couldn’t entirely hide its ad-hoc and largely unplanned character. The presentations from smaller countries varied from really rather impressive displays of focused activities (from the Netherlands, Finland and Austria in particular), to more aspirational talks from countries like Portugal and Slovakia.

How do the European nations rank in nanoscience? The undisputed leader is clearly Germany, with France and the UK vying for second place. Readers of this blog will know that I’m suspicious of bibliometric measures, but some interesting data were presented showing France second and the UK third by total numbers of nanoscience papers, with that order reversed when only highly cited papers were considered. But the efforts of the rich, smaller European countries are very significant; these are countries with high per capita GDP figures which typically spend a higher proportion of GDP on research than larger countries. They combine this with a very focused and targeted approach to the science they support. The Netherlands, in particular, looks very strong indeed in those areas that it has chosen to concentrate on.

Computing with molecules

It’s easy to forget that, looking at biology as a whole, computing and information processing is more often done by individual molecules than by brains and nervous systems. After all, most organisms don’t have a nervous system at all, yet they still manage to sense their environment and respond to what they discover. And a multi-cellular organism is itself a colony of many differentiated cells, all of which need to communicate and cooperate in order for the organism to function at all. In these processes, signals are communicated not by electrical pulses, but by the physical movement of molecules, and logic is performed, not by circuits of transistors, but by enzymes. Modern systems biology is just starting to unravel the operation of these complex and effective chemical computers, but we’re very far from being able to build anything like them with our currently available nanotechnology.

A news story on the New Scientist website (seen via Martyn Amos’s blog) reports an interesting step along the way, with an experimental demonstration of an enzyme-based system that chemically implements simple logic operations like a half-adder and a half-subtracter. The report, from Itamar Willner’s group at the Hebrew University of Jerusalem, is published in Angewandte Chemie International Edition (abstract here, subscription required for full paper). No-one is going to be doing complicated sums with these devices for a while; the inputs are provided by supplying certain chemical species (glucose and hydrogen peroxide, in this case), and the answers are provided by the appearance or non-appearance of reaction products. But where this system could come in useful is in providing a nanoscale system like a drug delivery device with some rudimentary mechanisms for sensing the environment and acting on the information, maybe by swimming towards the source of some chemicals or releasing its contents when it has detected some combination of chemicals around it.
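The logic itself is the standard half-adder: the “sum” output is the exclusive-OR of the two inputs, and the “carry” is the AND. A sketch in ordinary code, with the chemical species standing in for bits – glucose and hydrogen peroxide are the inputs named in the report, but which reaction product plays “sum” and which plays “carry” is illustrative here, not the actual reaction scheme:

```python
# Half-adder truth table with chemical inputs standing in for bits.
# "glucose" and "h2o2" are the input species named in the article; the
# assignment of reaction products to the sum and carry outputs is
# illustrative only.
def half_adder(glucose: bool, h2o2: bool) -> tuple[bool, bool]:
    sum_out = glucose != h2o2      # XOR: this product appears when exactly one input is present
    carry_out = glucose and h2o2   # AND: this product needs both inputs
    return sum_out, carry_out

for glucose in (False, True):
    for h2o2 in (False, True):
        s, c = half_adder(glucose, h2o2)
        print(f"glucose={glucose!s:5} h2o2={h2o2!s:5} -> sum={s!s:5} carry={c}")
```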

This is still not quite a fully synthetic analogue of a cellular information processing system; it uses enzymes of biological origin, and it doesn’t use the ubiquitous chemical trick of allostery. In this, the binding of one molecule to an enzyme changes the way it processes another molecule, effectively allowing a single molecule to act as a logic gate. But it suggests many fascinating possibilities for the future.

Critical Design

I spent an interesting afternoon last Tuesday in the Royal College of Art spending some time with the students on the Interaction Design course, who are just beginning a project on nanotechnology. This department began life focusing on Computer Related Design, applying the lessons of fine art and graphic design to human centred design for computer interfaces, but it’s recently broadened its scope to a wider consideration of the way people and societies interact with technology. It’s in this context that the students are being asked to visualise possible nanotechnology-based futures.

My host for the visit was the Head of Department, Tony Dunne, the author of (among other works) Hertzian tales and Design Noir. He uses the space between industrial design, conceptual art and social theory to question the relationship between technology and society; on his appointment to the RCA he wrote “Interaction Design can be a test space where designers engage with different technologies (not just electronics) before they enter the market place, exploring their possible impact on everyday life through design proposals – from a variety of perspectives: commercial, aesthetic, functional, critical, even ethical. I believe we need to educate designers to a higher level than we presently do, if they are to have a significant and meaningful role to play in the 21st Century and not just sit at the margins producing pleasant distractions.”

To see why this approach to design might be useful for nanotechnology, take a look at the Nanofactory animation made by John Burch and Eric Drexler to illustrate their vision of the future of nanotechnology. Making no judgements for the moment about its technical feasibility, it’s worth looking at the symbolism of this vision. What’s striking about it is how amazingly conservative it is. The nano-fabricator itself looks like an upmarket bread-making machine, while the final product is a palm-top computer that could in design terms have come from your local PC World. It’s worth contrasting this vision with the much more radical vision of manufacturing outlined in Drexler’s original book Engines of Creation, which imagined a rocket motor growing, as if from a seed, in a huge tank of milky fluid. I’m sure this retreat to a more conservative, and less challenging, vision was deliberate, and part of the attempt to defuse the “grey goo” controversy. If we are going to be prepared for what technological change brings us, we are going to need some more challenging visions of future artefacts, and I look forward to seeing the radical concepts that the design students come up with.

Death, life, and amyloids

If you take a solution of a protein – an enzyme, say – and heat it up, it unfolds. The beautifully specific three-dimensional structure that underlies the workings of the enzyme or molecular machine melts away, leaving the protein in an open, flexible state. What happens next depends on how concentrated the protein solution is. Remarkably, if the solution is dilute enough that different protein molecules don’t significantly interact, they’ll refold back into their biologically active state. This discovery of reversible refolding won Christian Anfinsen the 1972 Nobel Prize for chemistry; it was these experiments that established that the three dimensional structure of proteins in their functional form is wholly specified by their one-dimensional sequence of amino acids via the remarkable, and still not wholly understood, example of self-assembly that is protein folding. But if the proteins are in a more concentrated solution – the concentration of proteins in egg white or milk whey, for example – then as they cool they don’t fold properly. Instead they interact to make a sticky mess, apparently without biological functionality – you can’t hatch a chick out of a boiled egg.

But over the last fifteen years, it’s become clear that misfolded proteins are of huge biological and medical significance. Previously, the state that many proteins misfold into was believed to be an uninteresting, unstructured mish-mash. But now it’s known that, on the contrary, misfolded proteins often form a generic, highly ordered structure called an amyloid fibril. These are tough, stiff fibres, each about 10 nm wide and up to a few microns in length, in which the protein molecules are stacked together, linked by multiple hydrogen bonds, in extended, crystal-like structures called beta-sheets. The medical significance of these amyloid fibrils is huge; it’s these misfolded proteins that are associated with a number of serious and incurable diseases, like Alzheimer’s, type II diabetes and Creutzfeldt-Jakob disease. The physical significance is that there’s an increasingly influential school of thought (led by Chris Dobson of Cambridge) that the amyloid state is actually the most stable state of virtually all proteins. If you take this view to the limit, it implies that all organisms would eventually and inevitably succumb to amyloid diseases if they lived long enough.

This sinister side of amyloid fibrils hasn’t stopped people looking for some positive uses for them. Some researchers, like Harvard’s Susan Lindquist, have thought about using them as templates to make nanowires, though in my view they have several disadvantages compared to other potential biological templates like DNA. But biology is full of surprises, and the discovery by a Swedish group a few years ago that a misfolded version of the milk protein alpha-lactalbumin has a potent anti-cancer effect (full article available, without subscription, here) is certainly one of these. They speculate that this conversion takes place inside the stomach of new-born babies, helping protect them against cancer, and these molecules have already undergone successful clinical trials for treatment of skin papillomas. My children are still young enough for me to remember well the consistency of posset (as we in England delicately call regurgitated baby milk) so the idea of this as a clinically proven defence against cancer is rather odd.

But even stranger than this is a story in this week’s Economist, implicating amyloids in the ultimate origin of life itself. This reports from a meeting held at the Royal Society last week about the origin of life, and discusses a theory by the Cardiff biologist Trevor Dale. He takes inspiration from Cairns-Smith, the originator of a brilliant but so far unverified theory of the origin of life which suggests that life began by the templated polymerisation of macromolecules on the surfaces of clay platelets. Dale takes this idea, but suggests that the original macromolecule was RNA, and the surface, rather than being a clay platelet, was a protein amyloid fibril. This then naturally gives rise to the idea of co-evolution of nucleic acids and proteins, rather than requiring, as more popular theories do, a separate, later, stage in which an RNA-only form of life recruits proteins. The theory is described in a pre-publication article in the Journal of Theoretical Biology (abstract only without a subscription). I’m not sure I’m entirely convinced, but who can say what other surprises the amyloid state of proteins may yet spring.

More about Nanohype

Having spent 9 hours in aeroplanes yesterday (not to mention another 6 hours hanging about in a snowy Philadelphia airport waiting for a delayed connection) I have at least had a chance to catch up with some reading. This included two nano- books, one of which was David Berube’s “Nanohype”. The other (which exemplifies the phenomenon of Berube’s title) was “The Dance of Molecules: how nanotechnology is changing our lives”, by Ted Sargent. I’m reviewing Sargent’s book for Nature, so I’ll save my views on it for later.

“Nanohype” isn’t exactly the usual airport book, though. It’s a rather dense, and extremely closely referenced, account of the way nanotechnology moved from being a staple of futurists and science fiction writers to being the new new thing for technophilic politicians and businessmen, and a new object of opposition for environmentalists and anti-globalisers. For those of us fascinated by the minutiae of how the National Nanotechnology Initiative got going, and of the ways the Nanobusiness Alliance influenced public policy in the USA, it’s going to be the essential source.

The book’s title makes Berube’s basic position pretty clear. Almost everyone involved has some ulterior motive for overstating how revolutionary nanotechnology is going to be, how much money it’s going to make, or the scale of the apocalypse it is going to lead to. Scientists need grants, companies need venture capital, campaigning organisations need publicity and the donations that follow. Not everyone is a huckster, but those that remain idealists end up so divorced from reality that they end up attracting Berube’s (no doubt unwelcome) sympathy. Sometimes the search for low motives leads from bracing cynicism to the brink of absurdity, such as his suggestion that anti-globalisation activist Zak Goldsmith’s opposition to genetic modification of food derives from his wife’s business interests in organic food. This seems a little unlikely, given Goldsmith’s reported £300 million inherited fortune. But Berube’s refusal to take things at face value is a refreshing starting point.

The book has a competent and fairly complete overview of those commercial applications ascribed to nanotechnology, but one thing this book is not about is science. I think this is a pity – there’s an interesting story to be told both about the ascendance of the nanotechnology label amongst academic scientists, and of the resistance, suspicion and cynicism that this has bred in some quarters. But this will have to wait for another chronicler; curiously even giants of academic nanoscience, like Rick Smalley and George Whitesides, appear here as antagonists for the Drexler vision rather than for their own considerable achievements.

Of course, this is a book about politics, not science. It’s about the high-level politics around science funding, the politics of the financial markets, the politics of the campaigning organisation. But despite this political theme, it’s curiously light on ideologies. When we are talking about the societal and ethical implications of nanotechnology, we’re talking about competing visions of the future, competing ideologies. It is striking that many of the protagonists in the nanotechnology debates are driven by very strongly held, and sometimes far from mainstream, creeds. There’s the millenarianism of the transhumanists, the characteristically American libertarianism exemplified by blogger and nano-enthusiast Glenn Reynolds, and on the opposition side the strange blend of radical anti-capitalism, green politics and reactionary conservatism that underlies the world-view of Zak Goldsmith (particularly interesting in the UK now that a newly resurgent conservative opposition party has charged Goldsmith with reviewing its environmental policies). I would like to see a much closer analysis of the deeper reasons why nanotechnology seems to be emerging as a focus of these more profound arguments, but perhaps it’s still too early for this.

Another draft nano-taxonomy

It’s clear to most people that the term nanotechnology is almost impossibly broad, and that to be useful it needs to be broken up into subcategories. In the past I’ve distinguished between incremental nanotechnology, evolutionary nanotechnology and radical nanotechnology, on the basis of the degree of discontinuity with existing technologies. I’ve been thinking again about classifications, in the context of the EPSRC review of nanotechnology research in the UK; here one of the things we want to be able to do is to be able to classify the research that’s currently going on. In this way it will be easier to identify gaps and weaknesses. Here’s an attempt at providing such a classification. This is based partly on the classification that EPSRC developed last time it reviewed its nanotechnology portfolio, 5 years ago, and it also takes into account the discussion we had at our first meeting and a resulting draft from the EPSRC program manager, but I’ve re-ordered it in what I think is a logical way and tried to provide generic definitions for the sub-headings. Most pieces of research would, of course, fit into more than one category.

Enabling science and technology
1. Nanofabrication
Methods for making materials, devices and structures with dimensions less than 100 nm.
2. Nanocharacterisation and nanometrology
Novel techniques for characterisation, measurement and process control for dimensions less than 100 nm.
3. Nano-modelling
Theoretical and numerical techniques for predicting and understanding the behaviour of systems and processes with dimensions less than 100 nm.
4. Properties of nanomaterials
Size-dependent properties of materials that are structured on dimensions of 100 nm or below.
Devices, systems and machines
5. Bionanotechnology
The use of nanotechnology to study biological processes at the nanoscale, and the incorporation of nanoscale systems and devices of biological origin in synthetic structures.
6. Nanomedicine
The use of nanotechnology for diagnosing and treating injuries and disease.
7. Functional nanotechnology devices and machines
Nanoscale materials, systems and devices designed to carry out optical, electronic, mechanical and magnetic functions.
8. Extreme and molecular nanotechnology
Functional devices, systems and machines that operate at, and are addressable at, the level of a single molecule, a single atom, or a single electron.
Nanotechnology, the economy, and society
9. Nanomanufacturing
Issues associated with the commercial-scale production of nanomaterials, nanodevices and nanosystems.
10. Nanodesign
The interaction between individuals and society with nanotechnology. The design of products based on nanotechnology that meet human needs.
11. Nanotoxicology and the environment
Distinctive toxicological properties of nanoscaled materials; the behaviour of nanoscaled materials, structures and devices in the environment.

All comments gratefully received!

From the gallery

For no particular reason other than it is a really nice image, here’s a picture from the Sheffield Polymer Physics Group. It’s an AFM image of a thin film of a block copolymer – a molecule with a long section that can crystallise (poly(ethylene oxide)), attached to a shorter length of a non-crystallisable material (poly(vinyl pyridine)). What you can see is a crystal growing from a screw dislocation. Each step has the thickness of a single molecule, folded up a few times.

AFM image of a block copolymer growing from a screw dislocation

Image width 20 microns. Image by Dr Cvetelin Vaslilev, image post-treatment by Andy Eccleston.