Ken Donaldson on nanoparticle toxicology

I’ve been running in and out of a three-day course on nanotechnology intended for chemists working in the chemical industry (Nanotechnology for Chemists), organised by me and my colleagues at Sheffield on behalf of the Royal Society of Chemistry. Yesterday I swapped from being a lecturer to being a pupil, to hear a lecture about nanoparticle toxicity given by Ken Donaldson of the University of Edinburgh, the UK’s leading toxicologist specialising in the effects of environmental nanoparticles. This is a brief summary of his lecture as I understood it (all misunderstandings and misapprehensions are my fault, of course).

His lecture began with the disclaimer that most nanotechnology won’t pose a health risk at all; what’s at issue is the single class of free (i.e. not incorporated in a matrix, as happens in a nanocomposite material), manufactured, insoluble nanoparticles. Of the potential portals of entry – the lungs, the gut and the skin – he felt that the lungs presented the main danger, so the principal hazard, both for industrial workers and consumers, is nanoparticles in the air.

It’s been known for a long time that particles cause lung disease; he gave a number of examples (illustrated by gruesome photographs), including coal miner’s lung, silicosis and cancer from quartz particles, and the diseases caused by asbestos. Asbestos causes a number of diseases, including mesothelioma, a particularly nasty cancer seen only in people exposed to asbestos, characterised by a long latency period and a uniformly fatal final outcome. So it’s clear that particles do accumulate in the lungs.

In terms of what we know about the effect of nanoparticle exposures, there are four distinct domains. What we know most about are the nanoparticles derived from combustion. We also know a fair amount about bulk manufactured particles, like titanium dioxide, which have been around a long time and which typically contain significant fractions of nanosized particles. Of course, the effects of nanoparticles used in medical contexts have been well studied. The final area is the least studied – the effect of engineered free nanoparticles.

So what can we learn from environmental nanoparticles? The origin of these particles is overwhelmingly from combustion; in the UK only 13% of exposure comes from non-combustion sources, usually the processes of natural atmospheric chemistry. The most important class of nanoparticles by far are those deriving from traffic exhaust, which account for 60% of exposure. These particles have a basic size of tens of nanometers, though they clump with time into micron sized aggregates, which are very easily respirable.

These particles have no problem getting deep within the lungs. Of the 40 nm particles, perhaps 30% can get to the very delicate tissues in the periphery of the lung, where they deposit very efficiently (smaller particles are actually less effective at getting to the lung, as they tend to be taken up in the nose). The structures they interact with deep in the lung – the bronchial epithelial cells – are very small and fragile, and the distances separating airways from the blood are very small. Here the particles cause inflammation, which is essentially a defence reaction. We’re familiar with inflammation of the skin, marked by swelling – fluid bathes the region and white blood cells engulf damaged tissue and microbes, leading to pain, heat, redness and loss of function. Of course in the lung one can’t see the inflammation, and there are no pain receptors, so inflammation can be less obvious, though the swelling can easily cut off air flow, leading to very disabling and life-threatening conditions.

It’s believed that there is a generic mechanism for lung inflammation by combustion-derived nanoparticles, despite the wide variety of different kinds of chemistry in these particles. All have in common the production of free radicals, which leads to oxidative stress, which in turn leads to inflammation. Different types of nanoparticles cause oxidative stress through different mechanisms. Metal nanoparticles – as found in welding fumes – act through one mechanism, surface-borne organics (as are found in soot) through another, and materials like carbon black, titanium dioxide and polystyrene latex, which are not intrinsically very toxic, operate through some generic surface mechanism. Clearly it is the surface area that is important, so nanoparticles cause more inflammation than the same mass of fine respirable particles, in the 2–3 micron range, composed of the same materials. In passing one can note that diesel fumes are particularly harmful, dealing a triple blow through their combination of surfaces, metals and organics. These pathways to oxidative stress are now very well understood, so this is a well-founded paradigm.

Inflammation due to the oxidative stress caused by nanoparticles from pollution then leads to a number of different diseases, including cardiovascular disease, asthma, scarring, cancer and chronic obstructive pulmonary disease. Their involvement in cardiovascular disease is perhaps unexpected, and to understand it we need to understand where the nanoparticles go. We have some rather hypothetical toxicokinetics based on a few experiments using radioactive, insoluble tracer particles. A few studies suggest that particles entering the nose or lung can go directly to the brain. The route from the lung to the blood is well understood, and once in the blood there are many possible ultimate destinations. It’s doubtful that nanoparticles could enter the blood directly from the gut or skin. A mechanism for the involvement of nanoparticles in cardiovascular disease is suggested by studies in which healthy Swedish student volunteers rode a bike in an atmosphere of diesel fumes (at levels comparable to highly polluted city streets). This leads to measurable vascular dysfunction throughout the whole body, and a reduction in the ability to dissolve blood clots (similar effects are observed in smokers, who self-administer nanoparticles). This suggests that pollution nanoparticles could cause cardiovascular disease either through lung inflammation or through the direct effect of blood-borne particles, leading to the worsening of coronary artery disease or increased blood clotting.

A study using radioactive carbon has suggested that nanoparticles can enter the brain directly from the nose, via the olfactory bulb – this is the route into the central nervous system used by the polio virus, and it doesn’t require crossing the blood-brain barrier. Studies of brain tissue in people living in highly polluted cities like Mexico City have shown pathological changes similar to those seen in victims of Parkinson’s and Alzheimer’s, occurring as a result of the effect of pollution-derived nanoparticles.

The potential comparison between carbon nanotubes and asbestos is worth considering. Very large exposures to asbestos in the past have caused many cases of fatal lung disease. The characteristics of asbestos which cause this disease – and these characteristics are physical, not chemical – are that the fibres are thin, persistent in the body, and long. Carbon nanotubes certainly match the first two requirements, but it is not obvious that they fulfil the third. Asbestos fibres need to be 20 microns long to demonstrate toxic effects; if they are milled to shorter lengths the toxicity goes away. Carbon nanotubes of this length tend to curl up and clump. On the other hand, rat experiments on the effect of nanotubes on the lungs show distinctive fibrosing lesions. Donaldson has just written an extensive review article about nanotube toxicity which will be published soon.

From the regulatory point of view there are some difficulties, as regulations usually specify exposure limits in terms of mass concentration, while clearly it is surface area that is important. In the USA, NIOSH is thinking of reducing limits by a factor of 5 for ultrafine TiO2. Fibres, though, are regulated by number density. The difficulties for carbon nanotubes are that they are probably too small to see by standard microscopy, and they curl up, so although they should be classified as fibres by WHO definitions, they probably aren’t going to be detected. In terms of workplace protection, local exhaust ventilation is much the best, with almost all masks being fairly useless. This applies, for example, to the masks used by some cyclists in polluted cities. They can, however, take comfort from the fact that their exposure to nanoparticles is significantly smaller than the exposure of the people inside the vehicles who are causing the pollution.

My conclusion, then, is that if you are worried about inhaling free nanoparticles (and you should be), you should stop travelling by car.

Regulatory concerns about nanotechnology and food

The UK Government’s Food Standards Agency has issued a draft report about the use of nanotechnology in food and the regulatory implications this might have. The report can be downloaded here; it is now open for public consultation, and comments are invited by July 14th.

Observers could be forgiven some slight bemusement when it comes to the potential applications of nanotechnology to food, in that, depending entirely on one’s definition of nanotechnology, these could encompass either almost everything or almost nothing. As the FSA says on its website: “In its widest sense, nanotechnology and nanomaterials are a natural part of food processing and conventional foods, as the characteristic properties of many foods rely upon nanometre sized components (e.g. nanoemulsions and foams).” To give just one example, the major protein component of milk – casein – is naturally present in the form of clusters of molecules tens of nanometers in size, so most of the processes of the dairy industry involve the manipulation of naturally occurring nanoparticles. On the other hand, in terms of the narrow focus that has developed at the applications end of nanotechnology on engineered nanoparticles, the current impact on food is rather small. In fact, the FSA states categorically in the report: “The Agency is not aware of any examples of manufactured nanoparticles or other nanomaterials being used in food currently sold in the UK.”

In terms of the narrow focus on engineered nanoparticles, it is clear that there is indeed a regulatory gap at the moment. The FSA states that, if a food ingredient were to be used in a new, nanoscale form, then currently there would be no need to pass any new regulatory hurdles. However, the FSA believes that a more general protection would step in as a backstop – “in such cases, the general safety articles of the EU Food Law Regulation (178/2002) would apply, which require that food placed on the market is not unsafe.” So, how likely is it that this situation, and subsequent problems, might arise? One needs first to look at those permitted food additives that are essentially insoluble in oil or water. These include (in the EU) some inorganic materials that have been used in nanoparticulate form in non-food contexts, including titanium dioxide, silicon dioxide, some clay-based materials, and the metals aluminium, silver and gold. Insoluble organic materials include cellulose, in both powdered and microcrystalline forms. The latter is an interesting case because it provides a precedent for regulations that do specify size limits – the FSA report states that “The only examples in the food additives area that specifically limits the presence of small particles is the specification for microcrystalline cellulose, where the presence of small particles (< 5 microns) is limited because of uncertainties over their safety.” The FSA seems fairly confident that, if necessary, similar amendments could quickly be made in the case of other materials. But there remains the problem that currently there isn’t, as far as I can see, a fail-safe method by which the FSA could be alerted to the use of such nanomaterials and any problems they might cause. On the other hand, it’s not obvious to me why one might want to use these sorts of materials in a nanoparticulate form in food. Titanium dioxide, for example, is used essentially as a white pigment, so there wouldn’t be any point using it in a transparent, nanoscale form.

Synthetic biology – the debate heats up

Will it be possible to radically remodel living organisms so that they make products that we want? This is the ambition of one variant of synthetic biology; the idea is to take a simple bacterium, remove all unnecessary functions, and then patch in the genetic code for the functions we want. It’s clear that this project is likely to lead to serious ethical issues, and the debate about these issues is beginning in earnest today. At a conference being held in Berkeley today, Synthetic Biology 2.0, the synthetic biology research community is discussing biosecurity & risk, public understanding & perception, ownership, sharing & innovation, and community organization, with the aim of developing a framework for the self-regulation of the field. Meanwhile, a coalition of environmental NGOs, including Greenpeace, Genewatch, Friends of the Earth and ETC, has issued a press release calling on the scientists to abandon this attempt at self-regulation.

Some of the issues to be discussed by the scientists can be seen on this wiki. One very prominent issue is the possibility that malevolent groups could create pathogenic organisms using synthetic DNA, and there is a lot of emphasis on what safeguards can be put in place by the companies that supply synthetic DNA with a specified sequence. This is a very important problem – the idea that it is now possible to create from scratch pathogens like the virus behind the 1918 Spanish flu pandemic frightens many people, me included. But it’s not going to be the only issue to arise, and I think it is very legitimate to wonder whether community self-regulation is sufficient to police such a potentially powerful technology. The fact that much of the work is going on in commercial organisations is a cause for concern. One of the main players in this game is Synthetic Genomics, Inc., which was set up by Craig Venter, who already has some form in the matter of not being bound by the consensus of the scientific community.

In terms of the rhetoric surrounding the field, I’d also suggest that the tone adopted in articles like this one in this week’s New Scientist, Redesigning life: Meet the biohackers (preview; subscription required for full article), is unhelpful and unwise, to say the least.

Nanoscale ball bearings or grit in the works?

It’s all too tempting to imagine that our macroscopic intuitions can be transferred to the nanoscale world, but these analogies can be dangerous and misleading. For an example, take the case of buckyball bearings. It seems obvious that the almost perfectly spherical C60 molecule, buckminsterfullerene, would be an ideal ball bearing on the nanoscale. This intuition underlies, for example, the design of the “nanocar” from James Tour’s group at Rice that recently made headlines. But a recent experimental study of nanoscale friction by Jackie Krim, of North Carolina State University, shows that this intuition may be flawed.

The study, reported in last week’s Physical Review Letters (abstract here, subscription required for full article), directly measured the friction experienced by a thin layer sliding on a surface coated with a layer of buckminsterfullerene molecules. Krim was able to compare directly the friction observed when the balls were allowed to rotate with that observed when the balls were fixed. Surprisingly, the friction was higher for the rotating layers – here the ball-bearing analogy is seductive, but wrong.

In Seville

I’ve been in Seville for a day or so, swapping the Derbyshire drizzle for the Andalucian sun. I was one of the speakers in a meeting about Technology and Society, held in the beautiful surroundings of the Hospital de los Venerables. The meeting was organised by the Spanish writer and broadcaster Eduardo Punset, who also interviewed me for the science program he presents on Spanish TV.

As well as my talk and the TV interview, I also took part in a panel discussion with Alun Anderson, the former editor-in-chief of New Scientist. This took the form of a conversation between him and me, with an audience listening in. I hope they enjoyed it; I certainly did. As one would imagine, Anderson is formidably well-informed about huge swathes of modern science, and very well-connected with the most prominent scientists and writers. Among the topics we discussed were the future of energy generation and transmission, prospects for space elevators and electronic newspapers, Craig Venter’s minimal genome project, and whether we believed the premise of Ray Kurzweil’s most recent book, ‘The Singularity is Near’. Alun announced he would soon be appearing on a platform with a live hologram of Ray Kurzweil, or thereabouts. However, he did stress that this was simply because the corporeal Kurzweil couldn’t get to the venue in person, not because he has prematurely uploaded.

Computing, cellular automata and self-assembly

There’s a clear connection between the phenomenon of self-assembly, by which objects at the nanoscale arrange themselves into complex shapes by virtue of programmed patterns of stickiness, and information. The precisely determined three-dimensional shape of a protein is entirely specified by the one-dimensional sequence of amino acids along the chain, and the information that specifies this sequence (and thus the shape of the protein) is stored as a sequence of bases on a piece of DNA. If one is talking about information, it’s natural to think of computing, so it’s natural to ask whether there is any general relationship between computing processes, thought of at their most abstract, and self-assembly.

The person who has, perhaps, done the most to establish this connection is Erik Winfree, at Caltech. Winfree’s colleague, Paul Rothemund, made headlines earlier this year by making a nanoscale smiley face, but I suspect that less well publicised work the pair of them did a couple of years ago will prove just as significant in the long run. In this work, they executed a physical realisation of a cellular automaton whose elements were tiles of DNA with particular patches of programmed stickiness. The work was reported in PLoS Biology here; see also this commentary by Chengde Mao. A simple one-dimensional cellular automaton consists of a row of cells, each of which can take one of two values. The automaton evolves in discrete steps, with a rule that determines the value of a cell on the next step by reference to the values of the adjacent cells on the previous step (for an introduction to elementary cellular automata, see here). One interesting thing about cellular automata is that very simple rules can generate complex and interesting patterns. Many of these can be seen in Stephen Wolfram’s book, A New Kind of Science (available online here; it’s worth noting that some of the grander claims in this book are controversial, as is the respective allocation of credit between Wolfram and the rest of the world, but it remains an excellent overview of the richness of the subject).
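To make this concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of the kind of elementary automaton the DNA tiles realised: Rule 90, in which each new cell is simply the XOR of its two neighbours, grows the characteristic Sierpinski triangle pattern from a single seed cell.

```python
# Rule 90 elementary cellular automaton: new cell = left neighbour XOR right
# neighbour. Starting from a single seed cell, this simple rule generates the
# fractal Sierpinski triangle pattern.

def step(cells):
    """One update of the Rule 90 automaton (cells beyond the edges count as 0)."""
    n = len(cells)
    return [
        (cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
        for i in range(n)
    ]

def run(width=31, steps=16):
    """Evolve from a single central 'seed' cell (analogous to the seed DNA
    strand in the tile experiment) and return the successive rows."""
    row = [0] * width
    row[width // 2] = 1
    history = []
    for _ in range(steps):
        history.append(row)
        row = step(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running this prints a Sierpinski triangle in ASCII; in the DNA realisation, each row of cells corresponds to a row of tiles added to the growing crystal.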

I can see at least two aspects of this work that are significant. The first point follows from the fact that a cellular automaton represents a type of computer. It can be shown that some types of cellular automaton are, in fact, equivalent to universal Turing machines, able in principle to carry out any possible computation. Of course, this feature may well be entirely useless in practice. A more recent paper by this group (abstract here, subscription required for full paper) succeeds in using DNA tiles to carry out some elementary calculations, but highlights the difficulties caused by the significant error rate in the elementary operations. Secondly, this offers, in principle, a very effective way of designing and executing very complicated and rich structures that combine design with, in some cases, aperiodicity. In the physical realisation here, the starting conditions are specified by the sequence of a “seed” strand of DNA, while the rule is embodied in the design of the sticky patches on the tiles, itself specified by the sequence of the DNA from which they are made. Simple modifications of the seed strand sequence and the rule implicit in the tile design could result in a wide and rich design space of resulting “algorithmic crystals”.
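To get a feel for why the per-operation error rate matters so much, here is a rough toy model (my own, not the analysis in the paper): run the same XOR update, but flip each newly computed cell with a small probability p, crudely mimicking a mismatched tile, and count how far the result drifts from the error-free computation. Because each erroneous cell feeds into subsequent rows, a single mismatch seeds a whole new cascade in the pattern.

```python
# Toy model of an error-prone algorithmic self-assembly: the Rule 90 (XOR)
# update, with each newly computed cell flipped with probability p to mimic
# a mismatched tile binding. We count how many cells end up differing from
# the error-free evolution.
import random

def xor_step(cells):
    """Exact Rule 90 update (cells beyond the edges count as 0)."""
    n = len(cells)
    return [(cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def noisy_run(width=64, steps=64, p=0.01, seed=1):
    """Evolve from a single seed cell with per-cell error probability p;
    return the total number of cells that disagree with the exact run."""
    rng = random.Random(seed)
    exact = [0] * width
    exact[width // 2] = 1
    noisy = list(exact)
    disagreements = 0
    for _ in range(steps):
        exact = xor_step(exact)
        # flip each freshly computed cell with probability p
        noisy = [c ^ (rng.random() < p) for c in xor_step(noisy)]
        disagreements += sum(a != b for a, b in zip(exact, noisy))
    return disagreements
```

Even a 1% per-cell error rate accumulates rapidly over many rows, which is one way of seeing why error correction (or error tolerance) is central to making algorithmic self-assembly useful for computation.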

A physical realisation of a cellular automaton executed using self-assembling DNA tiles. Red crosses indicate propagation errors, which initiate or terminate the characteristic Sierpinski triangle patterns. From Rothemund et al., PLoS Biology 2 2041 (2004), copyright the authors, reproduced under a Creative Commons Attribution License.

Lost comments

I apologise that a number of legitimate comments in recent days have been stopped by my spam filters – I’ve just rescued 6 of these from the moderation queue, where I had previously overlooked them amidst 519 spam comments. If you do make a comment which doesn’t appear (and this is most likely to happen to relatively long messages with lots of external links) you might want to alert me to this with a shorter comment. Anyway, my apologies to Brian Wang, Reza Fathollahzadeh, Moderate Transhumanist, NanoEnthusiast, sa. jafari and Michael Anissimov.

Nanoparticle toxicity: The Royal Society bites back

Last week saw a little bit more bad publicity for the nascent nano industry, in the shape of a news report from the BBC highlighting a call from the Royal Society for industry to disclose the data from its safety testing of free nanoparticles in consumer products. The origin of the report was a press release from the Royal Society, quoting Ann Dowling, the chair of the Royal Society/Royal Academy of Engineering study of nanotechnology.

The pretext for the Royal Society press release was the recent publication of an inventory of consumer products using nanotechnology by the Woodrow Wilson Centre Project on Emerging Nanotechnologies. But this call for disclosure was already one of the recommendations in the Royal Society’s report, and it’s not hard to sense the growing frustration within the Royal Society that, two years on from the publication of that report, we’re not much further forward in implementing many of its recommendations.

Transhumanism and radical nanotechnology

It’s obvious that there’s a close connection between the transhumanist movement and the idea of radical nanotechnology. Transhumanism is a creed which holds that human nature can and should be transcended with the aid of technological change, effectively leading to salvation both for individuals and society. Together with an expectation of the forthcoming singularity, a trust in cryonics (preservation of corpses at very low temperatures to await future revival) and an enthusiasm for radical life extension, the Drexlerian view of nanotechnology forms part of a belief package held by many transhumanists. The two main organisations devoted to promoting the radical view of nanotechnology, the Center for Responsible Nanotechnology and the Foresight Institute, are explicitly listed in a directory of transhumanist organisations from Michael Anissimov, of the Singularity Institute, who has also written a helpful overview of the transhumanist movement in his blog here.

Is this connection any cause for concern? Transhumanism as a movement has a fairly low profile generally, though blogger John Bruce has recently been exploring the movement and some of its supporters from a critical perspective (this link via TNTlog). But a very negative view of this relationship is presented by Joachim Schummer, a German philosopher now working at the University of South Carolina’s centre for nanoScience & Technology Studies, in an article, “‘Societal and Ethical Implications of Nanotechnology’: Meanings, Interest Groups, and Social Dynamics”, in the journal Techné.

Schummer, at the outset, insists on the quasi-religious character of transhumanism, characterising its creed as a belief in “futuristic technological change of human nature for the achievement of certain goals, such as freedom from suffering and from bodily and material constraints, immortality, and ‘super-intelligence’.” He summarises its dependence on the Drexler vision of nanotechnology as follows:

“First, they foresee the development of Drexler’s “assemblers” that should manufacture abundant materials and products of any kind to be made available for everybody, so that material needs will disappear. Second, they expect “assemblers” to become programmable tool-making machines that build robots at the nanoscale for various other transhumanist aspirations—a vision that has essentially fuelled the idea of “singularity”. Thus, they thirdly hope for nanorobots that can be injected into the human body to cure diseases and to stop (or reverse) aging, thereby achieving disease-free longevity or even immortality. Fourth on their nanotechnology wish list are nano-robots that can step by step redesign the human body according to their ideas of “posthuman” perfection. Other nano-robots shall, fifth, make “atom-by-atom copies of the brain”, sixth, implement brain-computer-interfaces for “mind uploading”, seventh, build ultra-small and ultra-fast computers for “mindperfection” and “superintelligence”, and, eighth, revive today’s cryonics patients to let them participate in the bright future.”

Because of the central role to be played by nanotechnology in achieving personal and/or societal salvation, Schummer argues that transhumanists have an existential interest in nanotechnology, and are thus likely to be much more accepting of the risks that nanotechnology might bring, on the grounds that the rewards are so great. He singles out the writing of Nick Bostrom, Chairman of the World Transhumanist Association, whose views he summarises thus: “In that mixture of radical utilitarianism and apocalyptic admonition, risks are perceived only for humanity as a whole, are either recoverable for humanity or existential for humanity, and only the existential ones really count. The risks of individuals, to their health and lives, are less important because their risks can be outweighed by steps towards transhumanist salvation of humanity.” Schummer comments that it is this “relative disregard for individual human dignity in risk assessments, i.e. the willingness to sacrifice individuals for the sake of global salvation, that makes transhumanism so inhumane.” Not that advanced nanotechnology is without risks; on the contrary, in the wrong hands it has the potential to destroy all intelligent life on earth. But since in the technologically deterministic view of transhumanists the development of nanotechnology is unavoidable, responsible people must rush to develop it first. Thus, “advancing nanotechnology is not only required for Salvation, but also a moral obligation to avoid Armageddon.”

It’s not surprising that transhumanists find it difficult to take an objective view of nanotechnology and the debates that surround it – to them, it is a matter whose importance, quite literally, transcends life and death.