Synthetic biology – summing up the debate so far

The UK’s research council for biological sciences, the BBSRC, has published a nice overview of the potential ethical and social dimensions of the development of synthetic biology. The report – Synthetic biology: social and ethical challenges (737 KB PDF) – is by Andrew Balmer & Paul Martin at the University of Nottingham’s Institute for Science and Society.

The different and contested definitions and visions that people have for synthetic biology are identified at the outset; the authors distinguish four rather different conceptions of the field. There’s the Venter approach, which consists of taking a stripped-down organism with a minimal genome and building desired functions into it. The identification of modular components and the genetic engineering of whole pathways form a second, related approach. Both of these visions of synthetic biology still rely on the re-engineering of existing DNA-based life; a more ambitious, but much less completely realised, programme for synthetic biology attempts to make wholly artificial cells from non-biological molecules. A fourth strand, less far-reaching in its ambitions, attempts to make novel biomolecules by mimicking the post-translational modification of proteins that is such a source of variety in biology.

What broader issues are likely to arise from this enterprise? The report identifies five areas of concern: the potential dangers of the uncontrolled release of synthetic organisms into the biosphere; the worry that these techniques could be misused to create new pathogens for bioterrorism; the potential for the creation of monopolies through an unduly restrictive patenting regime; and implications for trade and global justice. Most far-reaching of all, of course, are the philosophical and cultural implications of creating artificial life, with its connotations of transgressing the “natural order”, and the problems of defining the meaning and significance of life itself.

The recommended prescriptions fall into a well-rehearsed pattern: the need for early consideration of governance and regulation, the desirability of carrying the public along through early engagement, and resistance to the temptation to overhype the potential applications of the technology. As ever, dialogue between scientists, civil society groups, ethicists and social scientists is recommended – a dialogue which, the authors think, will only be credible if there is a real possibility that some lines of research would be abandoned if they were judged too ethically problematic.

Aliens from inner space? The strange story of the “nanobacteria” that probably weren’t.

How small are the smallest living organisms? There seem to be many types of bacteria of 300 nm and upwards in diameter, but many microbiologists take it as a rule of thumb that if something can get through a 0.2 µm filter (200 nm), it isn’t alive. Thus the discovery of so-called “nanobacteria”, with sizes between 50 nm and 200 nm, in the human bloodstream, and their putative association with a growing number of pathological conditions such as kidney stones and coronary artery disease, has been controversial. Finnish scientist Olavi Kajander, the discoverer of “nanobacteria”, presents the evidence that these objects are a hitherto undiscovered form of bacterial life in a contribution to a 1999 National Academies workshop on the size limits of very small organisms. But two recent papers give strong evidence that “nanobacteria” are simply naturally formed inorganic nanoparticles.

In the first of these papers, Nanobacteria Are Mineralo Fetuin Complexes, in the February 2008 issue of PLoS Pathogens, Didier Raoult, Patricio Renesto and their coworkers from Marseilles report a comprehensive analysis of “nanobacteria” cultured in calf serum. Their results show that “nanobacteria” are nanoparticles, predominantly of the mineral hydroxyapatite, associated with proteins, particularly a serum protein called fetuin. Crucially, though, they failed to find definitive evidence that the “nanobacteria” contained any DNA. In the absence of DNA, these objects cannot be bacteria. Instead, these authors say they are “self-propagating mineral-fetuin complexes that we propose to call “nanons.””

A more recent article, in the April 8 2008 edition of PNAS, Purported nanobacteria in human blood as calcium carbonate nanoparticles (abstract, subscription required for full article), casts further doubt on the nanobacteria hypothesis. These authors, Jan Martel and John Ding-E Young, from Chang Gung University in Taiwan and Rockefeller University, claim to be able to reproduce nanoparticles indistinguishable from “nanobacteria” simply by combining chemicals which precipitate calcium carbonate – chalk – in cell culture medium. Some added human serum is needed in the medium, suggesting that blood proteins are required to produce the characteristic “nanobacteria” morphology rather than a more conventional crystal form.

So, it seems the case is closed… “nanobacteria” are nothing more than naturally occurring, inorganic nanoparticles, in which the precipitation and growth of simple inorganic compounds such as calcium carbonate is modified by the adsorption of biomolecules at the growing surfaces to give particles with the appearance of very small single-celled organisms. These natural nanoparticles may or may not have relevance to some human diseases. This conclusion does leave a more general question in my mind, though. It’s clear that the presence of nucleic acids is a powerful way of detecting hitherto unknown microorganisms, and the absence of nucleic acids here is powerful evidence that these nanoparticles are not in fact bacteria. But it’s possible to imagine a system that is alive, at least by some definitions, whose replication does not depend on DNA at all. Graham Cairns-Smith’s book Seven Clues to the Origin of Life offers some thought-provoking possibilities for systems of this kind as precursors to life on earth, and exobiologists have contemplated the possibility of non-DNA-based life on other planets. If some kind of primitive life without DNA, perhaps based on some kind of organic/inorganic hybrid system akin to Cairns-Smith’s proposal, did exist on earth today, we would be quite hard-pressed to detect it. I make no claim that these “nanobacteria” represent such a system, but the long controversy over their true nature does make it clear that deciding whether a system is living or abiotic in the absence of evidence from nucleic acids could be quite difficult.

Watching an assembler at work

The only software-controlled molecular assembler we know about is the ribosome – the biological machine that reads the sequence of bases on a strand of messenger RNA, and, converting this genetic code into a sequence of amino acids, synthesises the protein molecule that corresponds to the gene whose information was transferred by the RNA. An article in this week’s Nature (abstract, subscription required for full paper, see also this editor’s summary) describes a remarkable experimental study of the way the RNA molecule is pulled through the ribosome as each step of its code is read and executed. This experimental tour-de-force of single molecule biophysics, whose first author is Jin-Der Wen, comes from the groups of Ignacio Tinoco and Carlos Bustamante at Berkeley.

The experiment starts by tethering a strand of RNA between two micron-size polystyrene beads. One bead is held firm on a micropipette, while the other bead is held in an optical trap – the point at which a highly focused laser beam has its maximum intensity. The central part of the RNA molecule is twisted into a single hairpin, and the ribosome binds to the RNA just to one side of this hairpin. As the ribosome reads the RNA molecule, it pulls the hairpin apart, and the resulting lengthening of the RNA strand is directly measured from the change in position of the anchoring bead in its optical trap. What’s seen is a series of steps – the ribosome moves about 2.7 nm in about a tenth of a second, then pauses for a couple of seconds before making another step.

This distance corresponds exactly to the length of the triplet of bases that represents a single character of the genetic code – the codon. What we are seeing, then, is the ribosome pausing on a codon to read it, before pulling the tape through to read the next character. What we don’t see in this experiment, though we know it’s happening, is the addition of a single amino acid to the growing protein chain during this read step. This takes place by means of the binding to the RNA codon, within the ribosome, of a shorter strand of RNA – the transfer RNA – to which the amino acid is attached. What the experiment does make clear is that the operation of this machine is by no means mechanical and regular. The times taken for the ribosome to move from the reading position for one codon to the next – the translocation times – are fairly tightly distributed around an average value of about 0.08 seconds, but the dwell times on each codon vary from a fraction of a second up to a few seconds. Occasionally the ribosome stops entirely for a few minutes.
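
To get a feel for what these numbers imply for overall protein synthesis rates, here is a minimal sketch in Python. The message length and the shapes of the two distributions are my own illustrative assumptions, loosely based on the figures quoted above, not the paper’s fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

STEP_NM = 2.7      # extension change per codon, as measured in the experiment
N_CODONS = 100     # length of an illustrative message (my assumption)

# Translocation times: tightly distributed around ~0.08 s. The spread is
# an assumption for illustration, not the paper's fitted value.
translocation = rng.normal(0.08, 0.02, N_CODONS).clip(min=0.01)

# Dwell times: broadly distributed, from fractions of a second to seconds;
# an exponential with a ~2 s mean is likewise an assumed stand-in.
dwell = rng.exponential(2.0, N_CODONS)

total_time = (translocation + dwell).sum()
print(f"Total extension change: {STEP_NM * N_CODONS:.0f} nm")
print(f"Mean elongation rate:   {N_CODONS / total_time:.2f} codons per second")
```

On these assumptions it is the dwell times on each codon, not the translocation steps, that dominate the overall elongation rate.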

This experiment is far from the final word on the way ribosomes operate. I can imagine, for example, that people are going to be making strenuous efforts to attach a probe directly to the ribosome, rather than, as was done here, inferring its motion from the location of the end of the RNA strand. But it’s fascinating to have such a direct probe of one of the most central operations of biology. And for those attempting the very ambitious task of creating a synthetic analogue of a ribosome, these insights will be invaluable.

The right size for nanomedicine

One reason nanotechnology and medicine potentially make a good marriage is that the size of nano-objects is very much on the same length scale as the basic operations of cell biology; nanomedicine, therefore, has the potential to make direct interventions on living systems at the sub-cellular level. A paper in the current issue of Nature Nanotechnology (abstract, subscription required for full article) gives a very specific example, showing that the size of a drug-nanoparticle assembly directly affects how effectively the drug controls cell growth and death in tumour cells.

In this work, the authors bound a drug molecule to a nanoparticle, and looked at the way the size of the nanoparticle affected the interaction of the drug with receptors on the surface of target cells. The drug was herceptin, a protein molecule which binds to a receptor molecule called ErbB2 on the surface of cells from human breast cancer. Cancerous cells have too many of these receptors, and this disrupts the signals between cells that tell them whether to grow, or that mark them for apoptosis – programmed cell death. What the authors found was that herceptin attached to gold nanoparticles was more effective than free herceptin at binding to the receptors; this then led to reduced growth rates for the treated tumour cells. But how well the effect works depends strongly on how big the nanoparticles are – the best results are found for nanoparticles 40 or 50 nm in size, with 100 nm nanoparticles being barely more effective than the free drug.

What the authors think is going on is connected to the process of endocytosis, by which nanoscale particles can be engulfed by the cell membrane. Very small nanoparticles typically have only one herceptin molecule attached, so they behave much like the free drug – one nanoparticle binds to one receptor. 50 nm nanoparticles have a number of herceptin molecules attached, so a single nanoparticle links together a number of receptors, and the entire complex, nanoparticle and receptors, is engulfed by the cell and taken out of the cell signalling process completely. 100 nm nanoparticles are too big to be engulfed, so only the fraction of the attached drug molecules in contact with the membrane can bind to receptors. A commentary (subscription required) by Mauro Ferrari sets this achievement in context, pointing out that a nanodrug needs to do four things: successfully navigate through the bloodstream, negotiate any biological barriers preventing it from getting where it needs to go, locate the cell that is its target, and then modify the pathological cellular processes that underlie the disease being treated. We already know that nanoparticle size is hugely important for the first three of these requirements, but this work directly connects size to the sub-cellular processes that are the target of nanomedicine.
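
How the number of attached antibodies might scale with particle size is easy to estimate geometrically. In this back-of-envelope sketch the antibody “footprint” of roughly 100 nm² per molecule is my own illustrative assumption, not a figure from the paper.

```python
import math

# Antibody "footprint" on the particle surface, ~100 nm^2 per molecule:
# an illustrative assumption, not a measured value from the paper.
FOOTPRINT_NM2 = 100.0

def antibodies_per_particle(diameter_nm: float) -> float:
    """Crude estimate: sphere surface area divided by one antibody's footprint."""
    return math.pi * diameter_nm ** 2 / FOOTPRINT_NM2

for d in (5, 10, 40, 50, 100):
    print(f"{d:4d} nm particle: ~{antibodies_per_particle(d):5.0f} antibodies")
```

On this crude estimate a 100 nm particle carries the most antibodies of all; its poor performance reflects the failure of engulfment described above, not a shortage of ligands.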

Drew Endy on Engineering Biology

Martyn Amos draws our attention to a revealing interview with MIT’s Drew Endy about the future of synthetic biology. While Craig Venter has up to now monopolised the headlines about synthetic biology, Endy has an original and thought-provoking take on the subject.

Endy is quite clear about his goals: “The underlying goal of synthetic biology is to make biology easy to engineer.” In pursuing this, he looks to the history of engineering, recognising the importance of things like interchangeable parts and standard screw gauges, and seeks a similar library of modular components for biological systems. Of course, this approach must take for granted that when components are put together they behave in predictable ways: “Engineers hate complexity. I hate emergent properties. I like simplicity. I don’t want the plane I take tomorrow to have some emergent property while it’s flying.” Quite right, of course, but since many suspect that life itself is an emergent property one could wonder how much of biology will be left after you’ve taken the emergence out.

Many people will have misgivings about the synthetic biology enterprise, but Endy is an eloquent proponent of the benefits of applying hacker culture to biology: “Programming DNA is more cool, it’s more appealing, it’s more powerful than silicon. You have an actual living, reproducing machine; it’s nanotechnology that works. It’s not some Drexlerian (Eric Drexler) fantasy. And we get to program it. And it’s actually a pretty cheap technology. You don’t need a FAB Lab like you need for silicon wafers. You grow some stuff up in sugar water with a little bit of nutrients. My read on the world is that there is tremendous pressure that’s just started to be revealed around what heretofore has been extraordinarily limited access to biotechnology.”

His answer to societal worries about the technology, then, is confidence in the power of open-source ideals, common ownership rather than corporate monopoly of the intellectual property, and an assurance that an open technology will automatically be applied to solving pressing societal problems.

There are legitimate questions about this vision of synthetic biology, both as to whether it is possible and whether it is wise. But to get some impression of the strength of the driving forces pushing this way, take a look at this recent summary of trends in DNA synthesis and sequencing. “Productivity of DNA synthesis technologies has increased approximately 7,000-fold over the past 15 years, doubling every 14 months. Costs of gene synthesis per base pair have fallen 50-fold, halving every 32 months.” Whether this leads to synthetic biology in the form anticipated by Drew Endy, the breakthrough into the mainstream of DNA nanotechnology, or something quite unexpected, it’s difficult to imagine this rapid technological development not having far-reaching consequences.
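
Those two rates are easy to check against the headline numbers; here is a quick sketch, assuming the quoted 15-year window.

```python
import math

def doubling_time_months(fold_change: float, years: float) -> float:
    """Months per doubling (or halving), given a total fold change over a period."""
    return years * 12 / math.log2(fold_change)

# Productivity: ~7,000-fold increase over 15 years
print(f"Synthesis productivity doubles every "
      f"{doubling_time_months(7000, 15):.0f} months")    # ~14 months

# Cost per base pair: ~50-fold fall over the same period
print(f"Cost per base pair halves every "
      f"{doubling_time_months(50, 15):.0f} months")      # ~32 months
```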

Grand challenges for UK nanotechnology

The UK’s Engineering and Physical Sciences Research Council introduced a new strategy for nanotechnology last year, and some of the new measures proposed are beginning to come into effect (including, of course, my own appointment as the Senior Strategic Advisor for Nanotechnology). Just before Christmas the Science Minister announced the funding allocations for research for the next few years. Nanotechnology is one of six priority programmes that cut across all the Research Councils (to be precise, the cross-council programme has the imposing title: Nanoscience through Engineering to Application).

One strand of the strategy involves the funding of large scale integrated research programmes in areas where nanotechnology can contribute to issues of pressing societal or economic need. The first of these Grand Challenges – in the area of using nanotechnology to enable cheap, efficient and scalable ways to harvest solar energy – was launched last summer. An announcement on which proposals will be funded will be made within the next few months.

The second grand challenge will be launched next summer, and it will be in the general area of nanotechnology for healthcare. This is a very broad theme, of course – I discussed some of the potential areas, which include devices for delivering drugs and for rapid diagnosis, in an earlier post. To narrow the area down, there’s going to be an extensive process of consultation with researchers and people in the relevant industries – for details, see the EPSRC website. There’ll also be a role for public engagement; EPSRC is commissioning a citizens’ jury to consider the options and have an input into the decision of what area to focus on.

Delivering genes

Gene therapy holds out the promise of correcting a number of diseases whose origin lies in the deficiency of a particular gene – given our growing knowledge of the human genome, and our ability to synthesise arbitrary sequences of DNA, one might think that the introduction of new genetic material into cells to remedy the effects of abnormal genes would be straightforward. This isn’t so. DNA is a relatively delicate molecule, and organisms have evolved efficient mechanisms for finding and eliminating foreign DNA. Viruses, on the other hand, whose entire modus operandi is to introduce foreign nucleic acids into cells, have evolved effective ways of packaging and delivering their payloads of DNA or RNA. One approach to gene therapy co-opts viruses to deliver the new genetic material, though this sometimes has unpredicted and undesirable side-effects. So an effective, non-viral method of wrapping up DNA, introducing it into target cells and releasing it would be very desirable. My colleagues at Sheffield University, led by Beppe Battaglia, have recently demonstrated an effective and elegant way of introducing DNA into cells, in work reported in the journal Advanced Materials (subscription required for full paper).

The technique is based on the use of polymersomes, which I’ve described here before. Polymersomes are bags that form when detergent-like polymer molecules self-assemble into a membrane, which folds round on itself to make a closed surface. They are analogous to the cell membranes of biology, which are formed from soap-like molecules called phospholipids, and to the liposomes that can be made in the laboratory from the same materials. Liposomes are already used to wrap up and deliver molecules in some commercial applications, including some drug delivery systems and some expensive cosmetics. They’ve also been used in the laboratory to deliver DNA into cells, though they aren’t ideal for this purpose, as they aren’t very robust. Polymersomes allow one a great deal more flexibility in designing membranes with the properties one needs, and this flexibility is exploited to the full in Battaglia’s experiments.

To make a polymersome, one needs a block copolymer – a polymer with two or three chemically distinct sections joined together. One of these blocks needs to be hydrophobic, and one needs to be hydrophilic. The block copolymers used here, developed and synthesised in the group of Sheffield chemist Steve Armes, have two very nice features. The hydrophilic section is composed of poly(2-(methacryloyloxy)ethyl phosphorylcholine) – a synthetic polymer that presents the same chemistry to the adjoining solution as a naturally occurring phospholipid in a cell membrane. This means that polymersomes made from this material are able to circulate undetected within the body for longer than carriers coated with other water-soluble polymers. The hydrophobic block is poly(2-(diisopropylamino)ethyl methacrylate). This is a weak base, so its state of ionisation depends on the acidity of the solution. In a basic solution it is un-ionised, and in this state it is strongly hydrophobic, while in an acidic solution it becomes charged, and in this state it is much more soluble in water. This means that polymersomes made from this material will be stable in neutral or basic conditions, but will fall apart in acid. Conversely, if one has the polymers in an acidic solution, together with the DNA one wants to deliver, and then neutralises the solution, polymersomes will spontaneously form, encapsulating the DNA.
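
The pH switch can be made quantitative with the Henderson–Hasselbalch relation. In this sketch the pKa of about 6.4 is an illustrative value for a tertiary amine methacrylate of this kind, not a figure taken from the paper.

```python
def fraction_protonated(pH: float, pKa: float = 6.4) -> float:
    """Henderson-Hasselbalch estimate of the charged fraction of a weak base.
    The pKa of ~6.4 is an illustrative value for a tertiary amine methacrylate,
    not a figure taken from the paper."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (7.4, 6.5, 5.5):   # blood, early endosome, late endosome (approximate)
    print(f"pH {pH}: {100 * fraction_protonated(pH):3.0f}% of amine groups charged")
```

So at physiological pH the membrane-forming block is almost entirely uncharged and hydrophobic, while at endosomal pH most of the amines are charged and the membrane disassembles.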

The way these polymersomes work to introduce DNA into cells is sketched in the diagram below. On encountering a cell, the polymersome triggers the process of endocytosis, whereby the cell engulfs the polymersome in a little piece of cell membrane that is pinched off inside the cell. It turns out that the solution inside these endosomes is significantly more acidic than the surroundings, and this triggers the polymersome to fall apart, releasing its DNA. This, in turn, generates an osmotic pressure sufficient to burst open the endosome, releasing the DNA into the cell interior, where it is free to make its way to the nucleus.

The test of the theory is to see whether one can introduce a section of DNA into a cell and then demonstrate how effectively the corresponding gene is expressed. The DNA used in these experiments was the gene that codes for a protein that fluoresces – the famous green fluorescent protein, GFP, originally obtained from certain jellyfish – making it easy to detect whether the protein coded for by the introduced gene has actually been made. In experiments using cultured human skin cells, the fraction of cells in which the new gene was introduced was very high, and few toxic effects were observed. This contrasts with a control experiment using an existing, commercially available gene delivery system, which was both less effective at introducing genes and killed a significant fraction of the cells.

Polymersome endocytosis
A switchable polymersome as a vehicle for gene delivery. Beppe Battaglia, University of Sheffield.

Venter in the Guardian

The front page of yesterday’s edition of the UK newspaper the Guardian was, unusually, dominated by a science story: I am creating artificial life, declares US gene pioneer. The occasion for the headline was an interview with Craig Venter, who gave the paper advance word that his team had successfully transplanted a wholly synthetic genome into a stripped-down bacterium, replacing its natural genetic code with an artificial one. In the newspaper’s somewhat breathless words: “The Guardian can reveal that a team of 20 top scientists assembled by Mr Venter, led by the Nobel laureate Hamilton Smith, has already constructed a synthetic chromosome, a feat of virtuoso bio-engineering never previously achieved. Using lab-made chemicals, they have painstakingly stitched together a chromosome that is 381 genes long and contains 580,000 base pairs of genetic code.”

We’ll see what, in detail, has been achieved when the work is properly published. It’s significant, though, that this story was felt to be important enough to occupy most of the front page of a major UK newspaper at a time of some local political drama. Craig Venter is visiting the UK later this month, so we can expect the current mood of excitement or foreboding around synthetic biology to continue for a while yet.

Towards the $1000 human genome

It currently costs about a million dollars to sequence an individual human genome. One can expect incremental improvements in current technology to drop this price to around $100,000, but current methods’ need to amplify the DNA will make it difficult for the price to fall much further. So, to meet the widely publicised target of a $1000 genome, a fundamentally different technology is needed. One very promising approach uses the idea of threading a single DNA molecule through a nanopore in a membrane, and identifying each base by changes in the ion current flowing through the pore. I wrote about this a couple of years ago, and a talk I heard yesterday from one of the leaders in the field prompts me to give an update.

The original idea for this came from David Deamer and Dan Branton, who filed a patent for the general scheme in 1998. Hagan Bayley, from Oxford, whose talk I heard yesterday, has been collaborating with Reza Ghadiri from Scripps to implement this scheme using a naturally occurring pore-forming protein, alpha-hemolysin, as the reader.

The key issues are the need to achieve resolution at the single-base level, and the correct identification of the bases. The researchers get extra selectivity through a combination of modifying the pore by genetic engineering and inserting into the pore small ring molecules – cyclodextrins. At the moment, reading speed is a problem – when the molecules are pulled through by an electric field, they tend to go a little too fast. But in an alternative scheme, in which bases are chopped off the chain one by one and dropped into the pore sequentially, they are able to identify individual bases reliably.
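
The base-calling step in that exonuclease scheme amounts to assigning each current blockade to one of four levels. Here is a toy sketch of the idea; the current values are entirely invented for illustration, and the real levels depend on the engineered pore and the cyclodextrin adapter.

```python
# Entirely invented current-blockade levels (pA) for the four bases; the real
# levels depend on the engineered pore and the cyclodextrin adapter.
LEVELS = {"A": 45.0, "C": 39.0, "G": 50.0, "T": 42.0}

def call_base(measured_pA: float) -> str:
    """Assign a measured blockade current to the nearest base level."""
    return min(LEVELS, key=lambda base: abs(LEVELS[base] - measured_pA))

readings = [44.6, 39.3, 50.2, 41.7, 45.1]   # hypothetical measurements
print("".join(call_base(r) for r in readings))   # -> ACGTA
```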

Given that the human genome has about 6 billion bases (counting both copies of each chromosome), they estimate that at 1 millisecond reading time per base they’ll need to use 1000 pores in parallel to sequence a genome in under a day (taking into account the need for a certain amount of redundancy for error correction). To prepare the way for commercialisation of this technology, they have a start-up company – Oxford NanoLabs – which is working on making a miniaturised and rugged device, about the size of a palm-top computer, to do this kind of analysis.
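
The throughput arithmetic is worth spelling out. In this sketch the tenfold coverage is my own assumption about what “a certain amount of redundancy” might mean; everything else follows from the figures quoted above.

```python
GENOME_BASES = 6e9     # diploid human genome, roughly 6 billion bases
READ_TIME_S = 1e-3     # 1 millisecond per base, as quoted above
N_PORES = 1000
COVERAGE = 10          # redundancy for error correction (my assumed figure)

one_pore_days = GENOME_BASES * READ_TIME_S / 86400
parallel_hours = GENOME_BASES * READ_TIME_S * COVERAGE / N_PORES / 3600

print(f"One pore, 1x coverage: {one_pore_days:.0f} days")                    # ~69 days
print(f"{N_PORES} pores, {COVERAGE}x coverage: {parallel_hours:.0f} hours")  # ~17 hours
```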

Stochastic sensor
Schematic of a DNA reader using the pore-forming protein alpha-hemolysin. As the molecule is pulled through the pore, the ionic conduction through the pore varies, giving a readout of the sequence of bases. From the website of the Theoretical and Computational Biophysics group at the University of Illinois at Urbana-Champaign.

Three good reasons to do nanotechnology: 2. For healthcare and medical applications

Part 1 of this series of posts dealt with applications of nanotechnology for sustainable energy. Here I go on to describe why so many people are excited about the possibilities for applying nanotechnology in medicine and healthcare.

It should be no surprise that medical applications of nanotechnology are very prominent in many people’s research agendas. Despite near-universal agreement about the desirability of more medical research, though, there are some tensions in the different visions people have of future nanomedicine. For the general public the driving force is often the very personal experience most people have of illness, in themselves or people close to them, and there’s a lot of public support for more work aimed at the well-known killers of the western world, such as cardiovascular disease, cancer, and degenerative diseases like Alzheimer’s and Parkinson’s. Economic factors, though, are important for those responsible for supplying healthcare, whether that’s the government or a private-sector insurer. Maybe it’s a slight exaggeration to say that the policy makers’ ideal would be for people to live in perfect health until they were 85 and then tidily drop dead, but it’s certainly true that the prospect of an ageing population demanding more and more expensive nursing care is one that is exercising policy-makers in a number of prosperous countries. In the developing world, there are many essentially political and economic issues which stand in the way of people being able to enjoy the levels of health we take for granted in Europe and the USA, and matters like the universal provision of clean water are very important. Important though the politics of public health is, the diseases that blight the developing world, such as AIDS, tuberculosis and malaria, still present major scientific challenges. Finally, back in the richest countries of the world, there’s a climate of higher expectations of medicine, in which people look to medicine to do more than fix obvious physical ailments, moving into the realm of human enhancement and the prolonging of life beyond what might formerly have been regarded as a “natural” lifespan.

So how can nanotechnology help? There are three broad areas.

1. Therapeutic applications of nanotechnology. An important focus for medical applications of nanotechnology has been drug delivery. This begins from the observation that when a patient takes a conventionally delivered drug, an overwhelmingly large proportion of the administered drug molecules don’t end up acting on the biological systems that they are designed to affect. This is a serious problem if the drug has side effects; the larger the dose that has to be administered to be sure that some of the molecule actually gets to the place where it is needed, the worse these side-effects will be. This is particularly obvious, and harrowing, for the intrinsically toxic molecules used in cancer chemotherapy. Another important driving force for improving delivery mechanisms is the fact that, rather than the simple and relatively robust small molecules that have been the main active ingredients in drugs to date, we are turning increasingly to biological molecules like proteins (such as monoclonal antibodies) and nucleic acids (for example, DNA for gene therapy and small interfering RNAs). These allow very specific interventions into biological processes, but the molecules are delicate, and are easily recognised and destroyed in the body. To deliver a drug, current approaches include attaching it to a large water-soluble polymer molecule which is essentially invisible to the body, or wrapping it up in a self-assembled nanoscale bag – a liposome – formed from soap-like molecules such as phospholipids or block copolymers. Attaching the drug to a dendrimer – a nanoscale tree-like structure which may have a cavity in its centre – is conceptually midway between these two approaches. The current examples of drug delivery devices that have made it into clinical use are fairly crude, but future generations of drug delivery vehicles can be expected to include “stealth” coatings to make them less visible to the body, mechanisms for targeting them to their destination tissue or organ, and mechanisms for releasing their payload when they get there. They may also incorporate systems for reporting their progress back to the outside world, even if this is only the passive device of containing some agent that shows up strongly in a medical scanner.

Another area of therapeutics in which nanotechnology can make an impact is tissue engineering and regenerative medicine. Here it’s not so much a question of making artificial substitutes for tissues or organs; ideally, it is a question of providing the environment in which a patient’s own cells will develop in such a way as to generate new tissue. This means persuading those cells to differentiate, taking up the specialised form of a particular organ. Our cells are social organisms, which respond to chemical and physical signals as they develop and differentiate to produce tissues and organs, and the role of nanotechnology here is to provide an environment (or scaffold) which gives the cells the right physical and chemical signals. Once again, self-assembly is one way forward here, providing soft gels which can be tagged with the right chemical signals to persuade the cells to do the right thing.

2. Diagnostics. Many disease states manifest themselves by the presence of specific molecules, so the ability to detect and identify these molecules quickly and reliably, even when they are present at very low concentrations, would be very helpful for the rapid diagnosis of many different conditions. The relevance of nanotechnology is that many of the most sensitive ways of detecting molecules rely on interactions between the molecule and a specially prepared surface; the much greater importance of the surface relative to the bulk in nanostructured materials makes it possible to make sensors of great sensitivity (a rough sketch of this surface-to-bulk argument follows at the end of this list). Sensors for the levels of relatively simple chemicals, such as glucose or thyroxine, could be integrated with devices that release the chemicals needed to rectify any imbalances (these integrated devices go by the dreadful neologism of “theranostics”); identifying pathogens by recognising stretches of their DNA would give a powerful way of diagnosing infectious diseases without the need for time-consuming and expensive culturing steps. One obvious and much-pursued goal would be to find a way of reading a whole DNA sequence at the single-molecule level, making it possible to obtain an individual’s whole genome cheaply.

3. Innovation and biomedical research. A contrarian point of view, which I’ve heard frequently and forcibly expressed by a senior figure from the UK’s pharmaceutical industry, is that the emphasis in nanomedicine on drug delivery is misguided, because fundamentally what it represents is an attempt to rescue bad drug candidates. In this view the place to apply nanotechnology is the drug discovery process itself. It’s a cause for concern for the industry that it seems to be getting harder and more expensive to find new drug candidates, and the hopes that were pinned a few years ago on the use of large scale combinatorial methods don’t seem to be working out. In this view, there should be a move away from these brute force approaches to more rational methods, but this time informed by the very detailed insights into cell biology offered by the single molecule methods of bionanotechnology.
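
To make the surface-to-bulk argument in point 2 concrete, here is a rough sketch of the fraction of a spherical particle’s volume that lies within one atomic layer of its surface; the 0.3 nm layer thickness is an illustrative assumption.

```python
def surface_fraction(diameter_nm: float, shell_nm: float = 0.3) -> float:
    """Fraction of a sphere's volume lying within one thin shell of its
    surface; the 0.3 nm shell (roughly an atomic layer) is an assumption."""
    core = max(diameter_nm - 2 * shell_nm, 0.0)
    return 1.0 - (core / diameter_nm) ** 3

for d in (5, 20, 100, 1000):
    print(f"{d:5d} nm particle: {100 * surface_fraction(d):5.1f}% near the surface")
```

For a micron-sized particle only a fraction of a percent of the material “sees” the surface, while for a 5 nm particle nearly a third of it does – which is why nanostructuring a sensor surface pays off.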