Nanotechnology moves up the UK news agenda again

I arrived at my office after my afternoon lecture today to find a note saying a film crew was arriving in 30 minutes; sure enough my colleague, Tony Ryan, and I spent a couple of hours filming interviews amid the bubbling flasks of the chemistry department talking about what nanotechnology is, is not, and might become. This will be boiled down to about a minute and a half on Yorkshire Television’s early evening news magazine. Such is the lot of a would-be science populariser.

The reason for this timing is a bit of pre-positioning that’s going on by the media in the UK at the moment. We’re expecting some significant nanotechnology related news on Friday, so people are getting their stories ready.

Quotations for the week

This week’s quotation on Soft Machines comes from that pioneer of British empiricism, Sir Francis Bacon:

It cannot be that axioms established by argumentation can suffice for the discovery of new works, since the subtlety of nature is greater many times than the subtlety of argument.

I write this with Philip Moriarty in mind, since he’s going to be taking a break from participating in debates on Soft Machines and elsewhere. I would like to record my gratitude to Philip, because he’s contributed a tremendous amount to this blog over the last couple of months. I think he’s made a really important contribution to the debate, not least by forcefully reminding us how subtle and complex surface physics can be. As another oft-quoted saying goes (usually attributed to Wolfgang Pauli):

God made solids, but surfaces were the work of the devil.

Bits and Atoms

I recently made a post – Making and doing – about the importance of moving the focus of radical nanotechnology away from the question of how artefacts are to be made, and towards a deeper consideration of how they will function. I concluded with the provocative slogan Matter is not digital. My provocation has been rewarded with detailed attempts to rebut my argument from both Chris Peterson, VP of the Foresight Institute, on Nanodot, and Chris Phoenix of the Center for Responsible Nanotechnology, on the CRNano blog. Here’s my response to some of the issues they raise.

First of all, on the basic importance of manufacturing:

Chris Peterson: Yes, but as has been repeatedly pointed out, we need better systems that make things in order to build better systems that do things. Manufacturing may be a boring word compared to energy, information, and medicine, but it is fundamental to all.

Manufacturing will always be important; things need to be made. My point is that by becoming so enamoured with one particular manufacturing technique, we run the risk of choosing materials to suit the manufacturing process rather than the function that we want our artefact to accomplish. To take a present-day example, injection moulding is a great manufacturing method. It’s fast and cheap, and it can make very complex parts with high dimensional fidelity. Of course, it only works with thermoplastics; sometimes this is fine, but every time you eat with a plastic knife you expose yourself to the results of a sub-optimal materials choice forced on you by the needs of a manufacturing process. Will MNT similarly limit the materials choices that you can make? I believe so.

Chris Peterson: But isn’t it the case that we already have ways to represent 3D molecular structures in code, including atom types and bonds?

Certainly we can represent structures virtually in code; the issue is whether we can output that code to form physical matter. For this we need some basic, low-level machine code procedures from which complex algorithms can be built up. Such a procedure would look something like this: depassivate point A on a surface; pick up a building block from reservoir B; move it to point A; carry out a mechanosynthesis step to bond it to point A; repassivate if necessary. Much of the debate between Chris Phoenix and Philip Moriarty concerned the constraints that surface physics puts on the sorts of procedures you might use. In particular, note the importance of the idea of surface reconstructions. The absence of such reconstructions is one of the main reasons why hydrogen-passivated diamond is by far the best candidate for a working material for mechanosynthesis. This begins to answer Chris Peterson’s next question…
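To make the flavour of this “machine language” concrete, here is a deliberately schematic sketch – in Python, with entirely invented operation names – of how such low-level primitives might be strung together into a build routine. Nothing here corresponds to a real instrument’s API; it is only meant to show the level of abstraction the debate is about.

```python
# Illustrative only: a hypothetical "machine code" for mechanosynthesis,
# composed into a higher-level routine. None of these operations exist as
# a real API; the names simply mirror the steps described above.

from dataclasses import dataclass

@dataclass
class Site:
    x: float
    y: float
    passivated: bool = True

def depassivate(site: Site) -> None:
    """Remove the passivating hydrogen at a chosen lattice site."""
    site.passivated = False

def pick_up_building_block(reservoir: str) -> str:
    """Collect a feedstock moiety (e.g. a carbon dimer) from a reservoir."""
    return f"block-from-{reservoir}"

def move_tip_to(site: Site) -> None:
    """Position the tool tip over the target site."""
    pass  # placeholder for positioning control

def mechanosynthesis_step(block: str, site: Site) -> None:
    """Form a covalent bond between the block and the depassivated site."""
    pass  # placeholder for the bond-forming step

def repassivate(site: Site) -> None:
    """Re-cap the site with hydrogen if the design requires it."""
    site.passivated = True

def add_block(site: Site, reservoir: str = "B") -> None:
    """One complete low-level cycle, as sketched in the text."""
    depassivate(site)
    block = pick_up_building_block(reservoir)
    move_tip_to(site)
    mechanosynthesis_step(block, site)
    repassivate(site)

# A "program" is then just a sequence of such cycles over target sites:
for site in [Site(0.0, 0.0), Site(0.25, 0.0), Site(0.5, 0.0)]:
    add_block(site)
```

The whole question, of course, is whether physically workable versions of these primitives exist at all, and how many variants of each would be needed in practice.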

Chris Peterson: How did we get into the position of needing to use only one material here?

…which is further answered by Chris Phoenix’s explanation of why matter can be treated with digital design principles, which focuses on the non-linear nature of covalent bonding:

Chris Phoenix: Forces between atoms as they bond are also nonlinear. As you push them together, they “snap” into position. That allows maintenance of mechanical precision: it’s not hard, in theory, for a molecular manufacturing system to make a product fully as precise as itself. So covalent bonds between atoms are analogous to transistors. Individual bonds correspond to the ones and zeros level.

So it looks like we’re having to restrict ourselves to covalently bonded solids. Goodbye to metals, ionic solids, molecular solids, macromolecular solids… it looks like we’re now stuck with choosing among the group IV elements, the classical compound semiconductors and other compounds of elements in groups III-VI. Of these, diamond seems the best choice. But are we stuck with a single material? Chris Phoenix thinks not…

Chris Phoenix: By distinguishing between the nonlinear, precision-preserving level (transistors and bonding) and the level of programmable operations (assembly language and mechanosynthetic operations), it should be clear that the digital approach to mechanosynthesis is not a limitation, and in particular does not limit us to one material. But for convenience, an efficient system will probably produce only a few materials.

This analogy is flawed. In a microprocessor, all the transistors are the same. In a material, the bonds are not the same. This is obviously true if the material contains more than one kind of atom, and even if the material has only one type of atom, the bonds won’t be the same if the working surface has any non-trivial topography – hence the importance of steps and edges in surface chemistry. If the bonds don’t behave in the same way, a mechanosynthetic step which works with one bond won’t work with another, and your simple assembly language becomes a rapidly proliferating babel of different operations, all of which need to be individually optimised.
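A toy enumeration shows how quickly that proliferation could get out of hand. The categories and counts below are invented purely for the sake of argument – suppose each mechanosynthetic step had to be separately optimised for the element involved, the surface facet, the local geometry, and the passivation state of the neighbouring sites:

```python
# Toy illustration of the combinatorial growth of distinct operations.
# The category labels and counts here are invented for the sake of argument.
from itertools import product

elements = ["C", "Si", "Ge"]                      # atoms involved in the bond
facets = ["(100)", "(110)", "(111)"]              # crystal surface orientation
local_geometry = ["terrace", "step", "kink", "edge"]
neighbour_state = ["fully passivated", "partially passivated", "bare"]

variants = list(product(elements, facets, local_geometry, neighbour_state))
print(f"{len(variants)} distinct local environments")  # 3 * 3 * 4 * 3 = 108

# If each environment needs its own optimised tip trajectory and its own
# error-recovery behaviour, the "assembly language" is no longer a handful
# of instructions but hundreds of separately validated operations.
```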

Chris Phoenix: For nanoscale operations like binding arbitrary molecules, it remains to be seen how difficult it will be to achieve near-universal competence.

I completely agree with this. A classic target for advanced nanomedicine would be to have a surface which resisted non-specific binding of macromolecules, but recognised one specific molecular target and produced a response on binding. I find it difficult to see how you would do this with a covalently bonded solid.

Chris Phoenix: But most products that we use today do not involve engineered nanoscale operations.

This seems an extraordinary retreat. Nanotechnology isn’t going to make an impact by allowing us to reproduce the products we have today at lower cost; it’s going to need to allow us to make products with a functionality that is now unattainable. These products – and I’m thinking particularly of applications to nanomedicine and to information and communication technologies – will necessarily involve engineered nanoscale operations.

Chris Phoenix: For example, a parameterized nanoscale truss design could produce structures which on larger scales had a vast range of strength, elasticity, and energy dissipation. A nanoscale digital switch could be used to build any circuit, and when combined with an actuator and a power source, could emulate a wide range of deformable structures.

Yes, I agree with this in principle. But we’re coming back to mechanical properties – structural materials, not functional ones. The structural materials we generally use now – wood, steel, brick and concrete – have long since been surpassed by other materials with much superior properties, but we still go on using them. Why? They’re good enough, and the price is right. New structural materials aren’t going to change the world.

Chris Phoenix: A few designs for photon handling, sensing (much of which can be implemented with mechanics), and so on should be enough to build almost any reasonable macro-scale product we can design.

Well, I’m not sure I can share this breezy confidence. How is sensing going to be implemented by mechanics? We’ve already conceded that the molecular recognition events that the most sensitive nanoscale sensing operations depend on are going to be difficult or impossible to implement in covalently bonded systems. Designing band-structures – which we need to do to control light/matter interactions – isn’t an issue of ordinary mechanics, but of many-body quantum theory.

The idea of being able to manipulate atoms in the same way as we manipulate bits is seductive, but ultimately it’s going to prove very limiting. To get the most out of nanotechnology, we’ll need to embrace the complexities of real condensed matter, both hard and soft.

Anyone seen my plutonium?

The news that the UK nuclear reprocessing plant at Sellafield has ‘lost’ 29.6 kg of plutonium has been accompanied by much emphasis that this doesn’t mean that the stuff has physically gone missing. It’s simply an accounting shortfall, we are reassured, and a leader in the Times on the subject is notable for being probably the most scientifically literate editorial I’ve seen in a major newspaper for some time. Nonetheless, there is a real issue here, though it’s not related to fears of nuclear terrorism. The British Nuclear Group spokesperson is reported as saying “There is no suggestion that any material has left the site. When you have got a complicated chemical procedure, quite often material remains in the plant.” In other words, in all the complex and messy operations that are involved in nuclear reprocessing, some of the plutonium is not recovered, and remains in dilute solution in waste solvent. And in that form it’s potentially another small addition to the vast tanks of radioactive soup that form such a noxious legacy of the cold war nuclear programs in the UK, USA and the former Soviet Union.

Can nanotechnology help? The idea of a fleet of nanoscale submarines making their way through the sludge pools, picking out the radioactive isotopes and concentrating them into small volumes of high level waste which could then be safely managed, is an attractive one. Even more attractive is the idea that you could pay for the whole operation by recovering the highly valuable precious metals whose presence in nuclear waste is so tantalising. Is this notion ridiculously far-fetched? I’m not so sure that it is.

A very interesting technology that gives us a flavour of what is possible has been developed at Pacific Northwest National Laboratory. Nanoporous materials, with a very high specific surface area, are made using self-assembled surfactant nanostructures as templates. This huge internal surface area is then coated with a layer just one molecule thick; functional groups on the end of each of these molecules are designed to selectively bind a heavy metal ion. Such SAMMS – self-assembled monolayers on mesoporous supports – have been designed to selectively bind toxic heavy metals, like lead and mercury, precious metals like gold and platinum, and radioactive actinides like neptunium and plutonium, and they seem to work very effectively. Besides nuclear processing and clean-up, applications in areas like environmental remediation and mining are obvious.
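A rough back-of-the-envelope calculation shows why that enormous internal surface area matters. The numbers below are assumptions of the right order of magnitude for mesoporous silica (a specific surface area of around 1000 m² per gram, and a couple of functional groups per square nanometre), not measured SAMMS figures:

```python
# Order-of-magnitude estimate of how much mercury a monolayer-functionalised
# mesoporous sorbent could bind. All input numbers are assumed, not measured.

AVOGADRO = 6.022e23

specific_surface_area_m2_per_g = 1000.0  # assumed, typical of mesoporous silica
sites_per_nm2 = 2.0                      # assumed grafting density of functional groups
molar_mass_hg_g_per_mol = 200.6          # mercury

sites_per_g = specific_surface_area_m2_per_g * 1e18 * sites_per_nm2  # m^2 -> nm^2
moles_of_sites_per_g = sites_per_g / AVOGADRO

# If each functional group captures one Hg ion:
hg_capacity_g_per_g = moles_of_sites_per_g * molar_mass_hg_g_per_mol
print(f"~{hg_capacity_g_per_g:.2f} g of Hg per g of sorbent")  # roughly 0.7 g/g
```

Capacities of this order – a substantial fraction of the sorbent’s own weight – are what make the approach look attractive for clean-up on a serious scale.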

Who do we think we are?

I’m grateful for this glowing endorsement from TNTlog, and I’m impressed that it takes as few as two scientist bloggers to make a trend. But I’m embarrassed that Howard Lovy, in his response, seems to have taken the implied criticism so personally. I’ve always enjoyed reading NanoBot. I don’t always agree with Howard’s take on various issues, but he’s always got interesting things to say, and his insistence on the importance of appreciating the interaction between nanotechnology and wider culture is spot-on.

But I think Howard’s pained sarcasm – “Scientists, go write about yourselves, and we in the public will read with wide-eyed wonder about the amazing work you’re doing and thank you for lowering yourselves to speak what you consider to be our language” – misses the mark. There are many ways in which scientists can contribute to this debate besides this crude and demeaning de haut en bas caricature, and many of them reflect real deficiencies in the ways in which mainstream journalists cover science.

To many journalists, science is marked by breakthroughs, which are conveniently announced by press releases from publicity-hungry university or corporate press offices, or from the highly effective news offices of the scientific glamour magazines, Nature and Science. But scientists never read press releases, and they very rarely write them, because the culture of science doesn’t marry at all well with the event-driven mode of working of journalism. Very occasionally, real breakthroughs are indeed made, though often their significance isn’t recognised at the time. But the usual pattern is of incremental advances, continuous progress and a mixture of cooperation and competition between labs across the world working in the same area. If scientists can write about science as it really is practised, with all its debates and uncertainties, unfiltered by press offices, that seems to me to be entirely positive. It’s also less likely, rather than more likely, to lead to the glorification and self-aggrandisement of scientists that Howard seems to think is our aim.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of enclosing the organism and separating it from its environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union funded project PACE (for programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try and emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.

Making and doing

Eric Drexler is quoted in Adam Keiper’s report from the NRC nanotechnology workshop in DC as saying:

“What’s on my wish list: … A clear endorsement of the idea that molecular machine systems that make things … with atomic precision is a natural and important goal for the development of nanoscale technologies … with the focus of that endorsement being the recognition that we can look at biology, and beyond…. It would be good to have more minds, more critical thought, more innovation, applied in those directions.”

I almost completely agree with this, particularly the bit about looking at biology and beyond. Why only almost? Because “systems that make things” should only be a small part of the story. We need systems that do things – we need to process energy, process information, and, in the vital area of nanomedicine, interact with the cells that make up humans and their molecular components. This makes a big difference to the materials we choose to work with. Leaving aside, for the moment, the question of whether Drexler’s vision of diamondoid-based nanotechnology can be made to work at all, let’s ask the question: why diamond? It’s easy to see why you would want to use diamond for structural applications, as it is strong and stiff. But its bandgap is too big for optoelectronic applications (like solar cells), and its use in medicine will be limited by the fact that it probably isn’t that biocompatible.

In the very interesting audio clip that Adam Keiper posts on Howard Lovy’s Nanobot, Drexler goes on to compare the potential of universal, general purpose manufacture with that of general purpose computing. Who would have thought, he asks (I paraphrase from memory here), that we could have one machine that we can use to do spreadsheets, play our music and watch movies on? Who indeed? … but this technology depends on the fact that documents, music and moving pictures can all be represented by 1s and 0s. For the idea of general purpose manufacturing to be convincing, one would need to believe that there was an analogous way in which all material things could be represented by a simple low-level code. I think this leads to an insoluble dilemma – the need to find simple low-level operations drives one to use a minimum number of basic mechanosynthesis steps, preferably just one. But in limiting ourselves in this way, we make it very difficult to achieve the broad range of functions and actions that we are going to want these artefacts for. Material properties are multidimensional, and it’s difficult to believe that one material can meet all our needs.

Matter is not digital.

The irresistible rise of nano-gizmology

What would happen if nanotechnology suddenly went out of fashion in the academic world, all the big nano-funding initiatives dried up, and putting the nano word in grant applications doomed them to certain failure? Would all the scientists who currently label themselves nanoscientists just go back to being chemists and physicists as before? This interesting question was posed to me on Monday during a fascinating afternoon seminar and discussion with social scientists from the Institute for Environment, Philosophy and Public Policy at Lancaster University.

My first reaction was to say that nothing would change. Scientists can be a cynical bunch when it comes to funding, and it’s tempting to assume that they would just relabel their work yet again to conform with whatever the new fashion was, and carry on just as before. But on reflection my answer was that the rise of nanoscience and nanotechnology as a label in academic science has been accompanied by two real and lasting cultural changes. The first is so well-rehearsed that it’s a cliché to say it, but it is nonetheless true – nanoscientists really have got used to interdisciplinary working in a way that was very rare in academia twenty years ago (of course, it has always been the rule in industry). The second change is less obvious, though I think I first noticed it as a marked change six or seven years ago. This was a shift in emphasis away from testing theories and characterising materials towards making widgets or gizmos – things that, although usually still far away from being a real, viable product, did something or produced some functional effect. More than any use of the label “nano”, this seems to me to be a lasting change in the way scientists judge the value of their own and other people’s work; it’s certainly very much reflected in the editorial policies of the glamour journals like Nature and Science. Some will mourn the eclipse of the values of pure science, while others will anticipate a more direct economic return on our society’s investments in science as a result, but it remains to be seen what the overall outcome of this shift will be.

Nanotech at Hewlett-Packard

There’s a nice piece in Slate by Paul Boutin reporting on his trip round the Hewlett-Packard labs in Palo Alto. The opening stresses the evolutionary, short-term character of the research going on there, noting that these projects only get funded if they are going to make a fast return for the company, usually within five years. The first projects he mentions are about RFID (radio frequency identification), and these are discussed in terms of Walmart, supply chains and keeping track of your pallets. I can relate to this because my wife used to be a production planner. She used to wake up in the night worrying about whether there were enough plastic overcaps in the warehouse to pack the next week’s production, but she knew that the only way to find out for sure, despite all their smart SAP systems, was to walk down to the warehouse and look. But despite these mundane immediate applications, the technologies that will underlie RFID also have uncomfortable implications for a universal surveillance society.

The article moves on to talk about HP’s widely reported recent development of crossbar latches as a key component for molecular electronic logic circuits (see for example this BBC report, complete with a good commentary from Soft Machines’s frequent visitor, Philip Moriarty). The author rightly highlights the need to develop new, defect-tolerant computer architectures if these developments in molecular electronics are to be converted into useful products. This nicely illustrates the point I made below, that in nanotechnology you may well need to develop systems architectures that accommodate the physical realities of the nanoscale, rather than designing the architecture first and hoping that you’ll be able to find low-level operations that will suit your preconceived notions.
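To give a flavour of what “defect tolerant” means at the architecture level, here is a minimal sketch in the spirit of the approach HP pioneered with its Teramac experiment: build in more crossbar resources than the design needs, map the faulty junctions after fabrication, and assign logical wires only to rows and columns that avoid them. It is an illustration of the idea, not a description of HP’s actual scheme.

```python
# Minimal sketch of defect-tolerant resource allocation in a crossbar array.
# Purely illustrative; not a description of HP's actual architecture.

import random

SIZE = 16            # physical crossbar: 16 x 16 junctions
NEEDED = 10          # the logical circuit only needs a 10 x 10 defect-free sub-array
DEFECT_RATE = 0.05   # assumed fraction of faulty junctions

random.seed(1)
defects = {(r, c) for r in range(SIZE) for c in range(SIZE)
           if random.random() < DEFECT_RATE}

def allocate(defects, size, needed):
    """Choose the rows with fewest defects, then keep only those columns
    whose junctions with every chosen row are good."""
    rows = sorted(range(size),
                  key=lambda r: sum((r, c) in defects for c in range(size)))[:needed]
    cols = [c for c in range(size)
            if all((r, c) not in defects for r in rows)][:needed]
    return rows, cols

rows, cols = allocate(defects, SIZE, NEEDED)
ok = len(rows) == NEEDED and len(cols) == NEEDED
print(f"{len(defects)} faulty junctions; usable "
      f"{len(rows)} x {len(cols)} sub-array found: {'yes' if ok else 'no'}")
```

The point is that the imperfection is handled in software, after the hardware is made, rather than by demanding perfect fabrication in the first place.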

The mechanosynthesis debate

Now that the 130 or so people who have downloaded the Moriarty/Phoenix debate about mechanosynthesis so far have had a chance to digest it, here are some thoughts that occur to me on re-reading it. This is certainly not a summary of the debate; rather, these are just a few of the many issues that emerge that I think are important.

On definitions. A lot of the debate revolves around questions of how one actually defines mechanosynthesis. There’s an important general point here – one can have both narrow definitions and broad definitions, and there’s a clear role for both. For example, I have been very concerned in everything I write to distinguish between the broad concept of radical nanotechnology, and the specific realisation of a radical nanotechnology that is proposed by Drexler in Nanosystems. But we need to be careful not to imagine that a finding that supports a broad concept necessarily also validates the narrower definition. So, to use an example that I think is very important, the existence of cell biology is compelling evidence that a radical nanotechnology is possible, but it doesn’t provide any evidence that the Drexlerian version of the vision is workable. Philip’s insistence on a precise definition of mechanosynthesis in distinction from the wider class of single molecule manipulation experiments stems from his strongly held view that the latter don’t yet provide enough evidence for the former. Chris, on the other hand, is in favour of broader definitions, on the grounds that if the narrowly defined approach doesn’t work, then one can try something else. This is fair enough if one is prepared to be led to wherever the experiments take you, but I don’t think it’s consistent with having a very closely defined goal like the Nanosystems vision of diamondoid based MNT. If you let the science dictate where you go (and I don’t think you have any choice but to do this), your path will probably take you somewhere interesting and useful, but it’s probably not going to be the destination you set out towards.

On the need for low-level detail. The debate makes very clear the distinction between the high-level systems approach exemplified by Nanosystems and by the Phoenix nanofactory paper, and the need to work out the details at the “machine language” level. “Black-boxing” the low-level complications can only take you so far; at some point one needs to work out what the elementary “machine language” operations are going to be, or even whether they are possible at all. Moreover, the nature of these elementary operations can’t always be divorced from the higher level architecture. A good example comes from the operation of genetics, where the details of the interaction between DNA, RNA and proteins means that the distinction between hardware and software that we are used to can’t be sustained.

On the role of background knowledge from nanoscience. A widely held view in the MNT community is that very little research has been done in pursuit of the Drexlerian project since the publication of Nanosystems. This is certainly true in the sense that science funding bodies haven’t supported an overtly Drexlerian research project; but it neglects the huge amount of work in nanoscience that has a direct bearing, in detail, on the proposals in Nanosystems and related work. This varies from the centrally relevant work done by groups (including the Nottingham group, and a number of other groups around the world) which are actively developing the manipulation of single molecules by scanning probe techniques, to the important background knowledge accumulated by very many groups round the world in areas such as surface and cluster physics and chemical vapour deposition. This (predominantly experimental) work has greatly clarified how the world at the nanoscale works, and it should go without saying that theoretical proposals that aren’t consistent with the understanding gained in this enterprise aren’t worth pursuing. Commentators from the MNT community are scornful, with some justification, of nanoscientists who make pronouncements about the viability of the Drexlerian vision of nanotechnology without having acquainted themselves with the relevant literature, for example by reading Nanosystems. But this obligation to read the literature goes both ways.

I think the debate has moved us further forward. I think it is clear that the Freitas proposal that sparked the discussion off does have serious problems that will probably prevent its implementation in its original form. But the fact that a proposal concrete enough to sustain this detailed level of criticism has been presented is itself immensely valuable and positive, and it will be interesting to see what emerges when the proposal is refined and further scrutinised. It is also clear that, whatever the ultimate viability of this mechanosynthetic route to full MNT turns out to be (and I see no grounds to revise my sceptical position), there’s a lot of serious science to be done, and claims of a very short timeline to MNT are simply not credible.