How much can artificial intelligence and machine learning accelerate polymer science?

I’ve been at the annual High Polymer Research Group meeting at Pott Shrigley this week; this year it had the very timely theme “Polymers in the age of data”. Some great talks have really brought home to me both the promise of machine learning and laboratory automation in polymer science and some of the practical barriers. Given the general interest in accelerated materials discovery using artificial intelligence, it’s interesting to focus on this specific class of materials to get a sense of the promise – and the pitfalls – of these techniques.

Debra Audus, from the USA’s National Institute of Standards and Technology, started the meeting off with a great talk on using machine learning to predict polymer properties from information about molecular structure. She described three difficulties for machine learning – the availability of enough reliable data, the problem of extrapolation outside the parameter space of the training set, and the problem of explainability.

A striking feature of Debra’s talk for me was its exploration of the interaction between old-fashioned theory and new-fangled machine learning (ML). This goes in two directions. On the one hand, Debra demonstrated that incorporating knowledge from theory can greatly speed up the training of an ML model, as well as improving its ability to extrapolate beyond the training set. On the other hand, given a trained ML model – essentially a black box of neural network weights – she emphasised the value of symbolic regression, which converts the black box into a closed-form expression built from simple functional forms of the kind a theorist would hope to derive from physical principles, providing something a scientist might recognise as an explanation of the regularities the machine learning model encapsulates.
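To make that concrete, here’s a minimal sketch in Python of the distillation idea – very much a toy of my own, not Debra’s actual workflow. It trains a black-box neural network on noisy data, then fits a handful of simple candidate closed forms to the black box’s own predictions and reports which one reproduces it best. Real symbolic regression tools such as gplearn or PySR search over expressions automatically, but the principle is the same.

```python
# Toy illustration of "distilling" a black-box ML model into a closed form.
# Not any particular group's workflow -- just the general idea.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend "experimental" data: property y depends on a structural descriptor x
# via a power law plus noise (the "hidden physics" we'd like to rediscover).
x = rng.uniform(0.5, 5.0, 200)
y = 2.0 * x**0.59 + rng.normal(0, 0.05, x.size)

# Step 1: train the black box.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(x.reshape(-1, 1), y)

# Step 2: propose simple closed-form candidates a theorist might write down,
# and fit each one to the black box's own predictions.
candidates = {
    "power law a*x^b": lambda x, a, b: a * x**b,
    "linear    a*x+b": lambda x, a, b: a * x + b,
    "log       a*ln(x)+b": lambda x, a, b: a * np.log(x) + b,
}
x_grid = np.linspace(0.5, 5.0, 100)
y_bb = model.predict(x_grid.reshape(-1, 1))

for name, f in candidates.items():
    params, _ = curve_fit(f, x_grid, y_bb, p0=[1.0, 1.0])
    rmse = np.sqrt(np.mean((f(x_grid, *params) - y_bb) ** 2))
    print(f"{name}: params={np.round(params, 2)}, rmse={rmse:.3f}")
# The power law should win, recovering something close to y = 2*x^0.6 --
# a closed-form "explanation" of what the network has learned.
```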

But any machine learning model needs data – lots of data – so where does that data come from? One answer is to look at the records of experiments done in the past – the huge corpus of experimental data contained within the scientific literature. Jacqui Cole from Cambridge has developed software to extract numerical data and chemical reaction schemes, and to analyse images, from published papers. For specific classes of (non-polymeric) materials she’s been able to create data sets with thousands of entries, using automated natural language processing to extract some of the contextual information that makes the data useful. Jacqui conceded that polymeric materials are particularly challenging for this approach; they have complex properties that are difficult to pin down to a single number, and what to the outsider may seem to be a single material (polyethylene, for example) may actually be a category encompassing molecules with a wide variety of subtle variations arising from different synthesis methods and reaction conditions. And Debra and Jacqui shared some sighs of exasperation at the horribly inconsistent naming conventions used by polymer science researchers.

My suspicion on this (informed a little by the outcomes of a large scale collaboration with a multinational materials company that I’ve been part of over the last five years) is that the limitations of existing data sets mean that the full potential of machine learning will only be unlocked by the production of new, large scale datasets designed specifically for the problem in hand. For most functional materials the parameter space to be explored is vast and multidimensional, so considerable thought needs to be given to how best to sample this parameter space to provide the training data that a good machine learning model needs. In some circumstances theory can help here – Kim Jelfs from Imperial described an approach where the outputs from very sophisticated, compute intensive theoretical models were used to train a ML model that could then interpolate properties at much lower compute cost. But we will always need to connect to the physical world and make some stuff.
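Before moving on to the practical side, here is a minimal sketch of that surrogate idea – my own toy illustration, not Kim’s actual pipeline, with a made-up function standing in for the expensive calculation. A Gaussian process is trained on a handful of “expensive” simulation results and can then interpolate (with an uncertainty estimate) anywhere in the parameter range at negligible compute cost.

```python
# Toy surrogate model: a Gaussian process trained on a few "expensive"
# simulation results, then used as a cheap interpolator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in for a compute-intensive theoretical model."""
    return np.sin(3 * x) + 0.5 * x

# We can only afford a handful of full calculations...
x_train = np.linspace(0, 2, 8).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_train, y_train)

# ...but the surrogate can now predict, with an uncertainty estimate,
# anywhere in the parameter range, at essentially no compute cost.
x_new = np.linspace(0, 2, 200).reshape(-1, 1)
y_pred, y_std = gp.predict(x_new, return_std=True)
print("max predictive std:", y_std.max())
```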

This means we will need automated chemical synthesis – the ability to synthesise many different materials with systematic variation of the reactants and reaction conditions, and then rapidly determine the properties of this library of materials. How do you automate a synthetic chemistry lab? Currently, a synthesis laboratory consists of a human measuring out materials, setting up the right reaction conditions, analysing and purifying the products, and finally determining their properties. There’s a fundamental choice here – you can automate the glassware, or automate the researcher. In the UK, Lee Cronin at Glasgow (not at the meeting) has been a pioneer of the former approach, while Andy Cooper at Liverpool has championed the latter. Andy’s approach involves using commercial industrial robots to carry out the tasks a human researcher would do, while using minimally adapted synthesis and analytical equipment. His argument in favour of this approach is essentially an economic one – the world market for general purpose industrial robots is huge, leading to substantial falls in price, while custom-built automated chemistry labs represent a smaller market, so one should expect slower progress and higher prices.

Some aspects of automating the equipment are already commercially available. Automatic liquid handling systems are widely available, allowing one, for example, to pipette reactants into multiwell plates, so if one’s synthesis isn’t sensitive to air one can use this approach to do combinatorial chemistry. Adam Gormley from Rutgers described using an oxygen-tolerant adaptation of reversible addition–fragmentation chain-transfer (RAFT) polymerisation in this way to produce libraries of copolymers with varying molecular weight and composition. Another approach uses flow chemistry, in which reactions take place not in a fixed piece of glassware but as the solvents containing the reactants travel down pipes, as described by Tanja Junkers from Monash and Nick Warren from Leeds. This approach allows in-line reaction monitoring, so it’s possible to build in a feedback loop, adjusting the ingredients and reaction conditions on the fly in response to what is being produced.
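As a non-chemist I find it easiest to think of that feedback loop in control terms. Here is a deliberately crude sketch – my own invention, not Tanja’s or Nick’s actual control scheme, with a fake first-order reaction standing in for the chemistry: an in-line measurement of conversion is compared with a target, and a simple proportional controller nudges the residence time in response.

```python
# Crude sketch of a feedback loop for a flow reactor: measure conversion
# in-line, compare with a target, adjust the residence time proportionally.
# The "reactor" is a fake first-order kinetic model, purely for illustration.
import math

def measure_conversion(residence_time_min, rate_constant=0.15):
    """Pretend in-line NMR/IR measurement for a first-order reaction."""
    return 1.0 - math.exp(-rate_constant * residence_time_min)

target_conversion = 0.80
residence_time = 2.0      # minutes, initial guess
gain = 10.0               # proportional gain (a tuning parameter)

for step in range(10):
    conversion = measure_conversion(residence_time)
    error = target_conversion - conversion
    residence_time = max(0.1, residence_time + gain * error)
    print(f"step {step}: tau = {residence_time:5.2f} min, conversion = {conversion:.3f}")
# After a few iterations the residence time settles near the value that
# delivers the target conversion -- the essence of "adjusting on the fly".
```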

It seems to me, as a non-chemist, that there is still a lot of specific work to be done to adapt the automation approach to any particular synthetic method, so we are still some way from a universal synthesis machine. Andy Cooper’s talk title perhaps alluded to this: “The mobile robotic polymer chemist: nice, but does it do RAFT?” This may be a chemist’s joke.

But whatever approach one uses to produce a library of molecules with different characteristics and to analyse their properties, there remains the question of how to sample what is likely to be a huge parameter space in order to provide the most effective training set for machine learning. We were reminded by the odd heckle from a very distinguished industrial scientist in the audience that there is a very classical body of theory underpinning this kind of experimental strategy – the Design of Experiments methodology. In these approaches, one selects the set of parameter combinations that most effectively spans the parameter space.
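For anyone who wants to see what that looks like in practice, here is a minimal sketch using a space-filling Latin hypercube design – just one of many Design of Experiments strategies, and a toy example of my own (with made-up formulation parameters) rather than anything presented at the meeting.

```python
# Space-filling sampling of a formulation parameter space with a Latin
# hypercube design -- a simple, classical Design of Experiments strategy.
from scipy.stats import qmc

# Three illustrative formulation parameters: monomer fraction, temperature (C), time (h)
lower_bounds = [0.1, 40.0, 1.0]
upper_bounds = [0.9, 90.0, 24.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=16)                 # 16 experiments in [0,1)^3
design = qmc.scale(unit_samples, lower_bounds, upper_bounds)

for i, (phi, T, t) in enumerate(design):
    print(f"run {i+1:2d}: monomer fraction {phi:.2f}, T = {T:4.1f} C, t = {t:4.1f} h")
```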

But an automated laboratory offers the possibility of adapting the sampling strategy in response to the results as one gets them. Kim Jelfs set out the possible approaches very clearly. You can take the brute force approach, and just calculate everything – but this is usually prohibitively expensive in compute. You can use an evolutionary algorithm, using mutation and crossover steps to find a way through parameter space that optimises the output. Bayesian optimisation is popular, and generative models can be useful for taking a few more random leaps. Whatever the details, there needs to be a balance between optimisation and exploration – between taking a good formulation and making it better, and searching widely across parameter space for a possibly unexpected set of conditions that provides a step-change in the properties one is looking for.
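To make that trade-off concrete, here is a minimal Bayesian-optimisation-style loop – again a toy of my own, with a made-up one-dimensional “property landscape” in place of a real experiment. A Gaussian process models what has been measured so far, and an upper-confidence-bound rule picks the next experiment; a single parameter, kappa, sets how adventurous the search is.

```python
# Minimal Bayesian-optimisation loop with an upper-confidence-bound (UCB)
# acquisition rule. The "experiment" is a made-up 1D property landscape.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x):
    """Stand-in for synthesising and testing a formulation at composition x."""
    return np.exp(-(x - 0.7) ** 2 / 0.01) + 0.3 * np.exp(-(x - 0.2) ** 2 / 0.05)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 3).reshape(-1, 1)      # a few initial experiments
y = run_experiment(X).ravel()
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
kappa = 2.0   # larger -> more exploration, smaller -> more exploitation

for iteration in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + kappa * sigma                 # optimism in the face of uncertainty
    x_next = candidates[np.argmax(ucb)].reshape(1, -1)
    y_next = run_experiment(x_next).ravel()
    X, y = np.vstack([X, x_next]), np.concatenate([y, y_next])

print("best composition found:", X[np.argmax(y)][0], "property:", y.max())
```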

It’s this combination of automated chemical synthesis and analysis, with algorithms for directing a search through parameter space, that some people call a “self-driving lab”. I think the progress we’re seeing now suggests that this isn’t an unrealistic aspiration. My somewhat tentative conclusions from all this:

  • We’re still a long way from an automated lab that can flexibly handle many different types of chemistry, so for a while it’s going to be a question of designing specific set-ups for particular synthetic problems (though of course there will be a lot of transferable learning).
  • There is still a lot of craft in designing algorithms to search parameter space effectively.
  • Theory still has its uses, both in accelerating the training of machine learning models, and in providing satisfactory explanations of their output.
  • It’s going to take significant effort, computing resources and money to develop these methods further, so it’s going to be important to select use cases where the value of an optimised molecule makes the investment worthwhile. Amongst the applications discussed in the meeting were drug excipients, membranes for gas separation, fuel cells and batteries, and optoelectronic polymers.
  • Finally, the physical world matters – there’s value in the existing scientific literature, but it’s not going to be enough just to process words and text; for artificial intelligence to fulfil its promise for accelerating materials discovery you need to make stuff and test its properties.

Novavax – another nanoparticle Covid vaccine

The results for the phase III trial of the Novavax Covid vaccine are now out, and the news seems very good – an overall efficacy of about 90% in the UK trial, with complete protection against severe disease and death. The prospects now look very promising for regulatory approval. What’s striking about this is that we now have a third, completely different class of vaccine that has demonstrated efficacy against COVID-19. We have the mRNA vaccines from BioNTech/Pfizer and Moderna, the viral vector vaccine from Oxford/AstraZeneca, and now Novavax, which is described as “recombinant nanoparticle technology”. As I’ve discussed before (in Nanomedicine comes of age with mRNA vaccines), the Moderna and BioNTech/Pfizer vaccines both crucially depend on a rather sophisticated nanoparticle system that wraps up the mRNA and delivers it to the cell. The Novavax vaccine depends on nanoparticles, too, but it turns out that these are rather different in their character and function to those in the mRNA vaccines – and, to be fair, are somewhat less precisely engineered. So what are these “recombinant nanoparticles”?

All three of these vaccine classes – mRNA, viral vector and Novavax – are based around raising an immune response to a particular protein on the surface of the coronavirus – the so-called “spike” protein, which binds to receptors on the surface of target cells at the start of the process through which the virus makes its entrance. The mRNA vaccines and the viral vector vaccines both hijack the mechanisms of our own cells to get them to produce analogues of these spike proteins in situ. The Novavax vaccine is less subtle – the protein itself is used as the vaccine’s active ingredient. It’s synthesised in bioreactors using a genetically engineered insect virus, which is used to infect a culture of cells from a moth caterpillar. The infected cells are harvested and the spike proteins collected and formulated. It’s this stage that, in the UK, will be carried out in the Teesside factory of the contract manufacturer Fujifilm Diosynth Biotechnologies.

The protein used in the vaccine is a slightly tweaked version of the molecule in the coronavirus. The optimal alteration was found by Novavax’s team, led by the scientist Nita Patel, who quickly tried out 20 different versions before hitting on the variety that is most stable and immunologically active. The protein has two complications compared to the simplest molecules studied by structural biologists – it’s a glycoprotein, which means that it has short polysaccharide chains attached at various points along the molecule, and it’s a membrane protein (which means that its structure has to be determined by cryo-transmission electron microscopy, rather than X-ray diffraction). It has a hydrophobic stalk, which sticks into the middle of the lipid membrane that coats the coronavirus, and an active part, the “spike”, attached to this, sticking out into the water around the virus. For the protein to work as a vaccine, it has to have exactly the same shape as the spike protein has when it’s on the surface of the virus. Moreover, that shape changes when the virus approaches the cell it is going to infect – so for best results the protein in the vaccine needs to look like the spike protein at the moment when it’s armed and ready to invade the cell.

This is where the nanoparticle comes in. The spike protein is formulated with a soap-like molecule called polysorbate 80 (aka Tween 80). This consists of a hydrocarbon tail – essentially the tail group of oleic acid – attached to a sugar-like molecule – sorbitan – to which are attached short chains of ethylene oxide. The whole thing is what’s known as a non-ionic surfactant. It’s like soap, in that it has a hydrophobic tail group and a hydrophilic head group. But unlike soap or common synthetic detergents, the head group, although water soluble, is uncharged. The net result is that in water polysorbate 80 self-assembles into nanoscale droplets – micelles – in which the hydrophobic tails are buried in the core and the hydrophilic head groups cover the surface, interacting with the surrounding water. The shape and size of the micelles are set by the length of the tail group and the area of the head group, so for these molecules the optimum shape is a sphere, probably a few tens of nanometers in diameter.
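The usual way of formalising that last point – textbook surfactant physics rather than anything specific to the Novavax formulation – is the packing parameter, built from the tail volume v, the optimal head-group area a₀ and the maximum tail length ℓc:

\[ P = \frac{v}{a_0\,\ell_c}, \qquad P \lesssim \tfrac{1}{3} \;\Rightarrow\; \text{spherical micelles}, \quad \tfrac{1}{3} \lesssim P \lesssim \tfrac{1}{2} \;\Rightarrow\; \text{cylinders}, \quad P \approx 1 \;\Rightarrow\; \text{flat bilayers}. \]

A single-tailed molecule with a bulky, well-hydrated head group, like polysorbate 80, has a small P – which is why spheres win.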

As far as the spike proteins are concerned, these somewhat squishy nanoparticles look a bit like the membrane of the virus, in that they have an oily core that the stalks can be buried in. When the protein, having been harvested from the insect cells and purified, is mixed with a polysorbate 80 solution, the spikes end up stuck into the sphere like whole cloves stuck into a mandarin orange. Typically each nanoparticle will have about 14 spikes. It has to be said that, in contrast to the nanoparticles carrying the mRNA in the BioNTech and Moderna vaccines, neither the component materials nor the process for making the nanoparticles is particularly specialised. Polysorbate 80 is a cheap and very widely used chemical – an emulsifier in convenience food, an ingredient in cosmetics, and a component of many other pharmaceutical formulations – and the formation of the nanoparticles probably happens spontaneously on mixing (though I’m sure there are some proprietary twists and tricks to get it to work properly; there usually are).

But the recombinant protein nanoparticles aren’t the only nanoparticles of importance in the Novavax vaccine. It turns out that simply injecting a protein as an antigen doesn’t usually provoke a strong enough immune response to work as a good vaccine. In addition, one needs to use one of the slightly mysterious substances called “adjuvants” – chemicals that, through mechanisms that are probably still not completely understood, prime the body’s immune system and provoke it to make a stronger response. The Novavax vaccine uses as an adjuvant another nanoparticle – a complex of cholesterol and phospholipid (major components of our own cell membranes, widely available commercially) together with molecules called saponins, which are derived from the Chilean soap-bark tree.

Similar systems have been used in other vaccines, both for animal diseases (notably foot and mouth) and human ones. The Novavax adjuvant technology was developed by a Swedish company, Isconova AB, which was bought by Novavax in 2013; the adjuvant consists of two separate fractions of Quillaja saponins, separately formulated into 40 nm nanoparticles and mixed together. The Chilean soap-bark tree is commercially cultivated – the raw extract is used, for example, in the making of the traditional US soft drink, root beer – but production will need to be stepped up (and possibly redirected from fizzy drinks to vaccines) if these vaccines turn out to be as successful as it now seems they might.

Sources: This feature article on Novavax in Science is very informative, though I believe the cartoon depicting the nanoparticle is unlikely to be accurate: it shows the particle as cylindrical, when it is much more likely to be spherical, and as based on double-tailed lipids rather than the single-tailed non-ionic surfactant that is in fact used in the formulation. This is the most detailed scientific article from the Novavax scientists describing the vaccine and its characterisation. The detailed nanostructure of the vaccine protein in its formulation is described in this recent Science article. The “Matrix-M” adjuvant is described here, while the story of the Chilean soap-bark tree and its products is told in this very nice article in The Atlantic magazine.

Nanomedicine comes of age with mRNA vaccines

There have been few scientific announcements that have made as big an impact as the recent news that a vaccine, developed in a collaboration between the German biotech company BioNTech and the pharmaceutical giant Pfizer, has been shown to be effective against COVID-19. What’s even more striking is that this vaccine is based on an entirely new technology. It’s an mRNA vaccine; rather than injecting weakened or dead virus material, it harnesses our own cells to make the antigens that prime our immune system to fight future infections, exactly where those antigens are needed. This is a brilliantly simple idea with many advantages over existing technologies that rely on virus material – but like most brilliant ideas, it takes a lot of effort to make it actually work.

Here I want to discuss just one aspect of these new vaccines – how the mRNA molecule is delivered to the cells where we want it to go, and then caused to enter those cells, where it does its job of making the virus proteins that cause the chain of events leading to immunity. This relies on packaging the mRNA molecules inside nanoscale delivery devices. These packages protect the mRNA from the body’s defense mechanisms, carry it undamaged into the interior of a target cell through the cell’s protective membrane, and then open up to release the bare mRNA molecules to do their job. This isn’t the first application of this kind of nanomedicine in the clinic – but if the vaccine lives up to expectations, it will make unquestionably the biggest impact. In this sense, it marks the coming of age of nanomedicine.

Other mRNA vaccines are in the pipeline too. One being developed by the US company Moderna with the National Institute of Allergy and Infectious Diseases (part of the US Government’s NIH) is also in phase 3 clinical trials, and it seems likely that we’ll see an announcement about that soon too. Another, from the German biotech company CureVac, is one step behind, in phase 2 trials. All of these use the same basic idea, delivering mRNA which encodes a protein antigen. A couple of other potential mRNA vaccines use a twist on this simple idea; a candidate from Arcturus with the Duke–National University of Singapore, and another from Imperial College, use “self-amplifying RNA” – RNA which doesn’t just encode the desired antigen, but which also carries the instructions for the machinery to make more of itself. The advantage of this, in principle, is that less RNA is needed to produce the same amount of antigen.

But all of these candidates have had to overcome the same obstacle – how to get the RNA into the human cells where it is needed. The problem is that, even before the RNA reaches one of its target cells, the human body is very effective at identifying any stray bits of RNA it finds wandering around and destroying them. All of the RNA vaccine candidates use more or less the same solution, which is to wrap up the vulnerable RNA molecule in a nanoscale shell made of the same sort of lipid molecules that form the cell membrane.

The details of this technology are complex, though. I believe the BioNTech, CureVac and Imperial vaccines all use the same delivery technology, developed in partnership with the Canadian biotech company Acuitas Therapeutics. The Moderna vaccine’s delivery technology comes from that towering figure of nanomedicine, MIT’s Robert Langer. The details in each case are undoubtedly proprietary, but from the literature it seems that both approaches use the same ingredients.

The basic membrane components are a phospholipid analogous to that found in cell membranes (DSPC – distearoylphosphatidylcholine), together with cholesterol, which makes the bilayer more stable and less permeable. Added to that is a lipid to which is attached a short chain of the water-soluble polymer PEO. This provides the nanoparticle with a hairy coat, which probably helps the nanoparticle avoid some of the body’s defences by repelling the approach of any macromolecules (artificial vesicles thus decorated are sometimes known as “stealth liposomes”), and perhaps also controls the shape and size of the nanoparticles. Finally, perhaps the crucial ingredient is another lipid, with a tertiary amine head group – an ionisable lipid. This is what the chemists call a weak base – like ammonia, it can accept a proton to become positively charged (a cation). Crucially, its charge state depends on the acidity or alkalinity of its environment.
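The pH-dependence of that charge is just the textbook weak-base equilibrium – a generic result, not anything specific to these proprietary lipids. If the head group has an effective pKa, the fraction of lipid molecules carrying a positive charge at a given pH is

\[ f_{+} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} , \]

so a lipid with a pKa of about 6.5 (an illustrative value) is almost entirely charged in a mildly acidic solution at pH 5, but largely uncharged at the physiological pH of 7.4.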

To make the nanoparticles, these four components are dissolved in ethanol, while the RNA is dissolved in a mildly acidic solution in water. Then the two solutions are mixed together, and out of that mixture, by the marvel of self-assembly, the nanoparticles appear, with the RNA safely packaged up inside them. Of course, it’s more complicated than that simple statement makes it seem, and I’m sure there’s a huge amount of knowledge that goes into creating the right conditions to get the particles you need. But in essence, what I think is going on is something like this.

When the ionisable lipid sees the acidic environment, it becomes positively charged – and, since the RNA molecule is negatively charged, the ionisable lipid and the RNA start to associate. Meanwhile, the other lipids will be self-organising into sheets two molecules thick, with the hydrophilic head groups on the outside and the oily tails in the middle. These sheets will roll up into little spheres, at the same time incorporating the ionisable lipids with their associated mRNA, to produce the final nanoparticles, with the RNA encapsulated inside them.

When the nanoparticles are injected into the patient’s body, their hairy coating, from the PEO grafted lipids, will give them some protection against the body’s defences. When they come into contact with the membrane of a cell, the ionisable lipid is once again crucial. Some of the natural lipids that make up the membrane coating the cell are negatively charged – so when they see the positively charged head-group of the ionisable lipids in the nanoparticles, they will bind to them. This has the effect of disrupting the membrane, creating a gap to allow the nanoparticle in.

This is a delicate business – cationic surfactants like CTAB use a similar mechanism to disrupt cell membranes, but they do that so effectively that they kill the cell – that’s why we can make disinfectants out of them. The cationic lipid in the nanoparticle must have been chosen so that it disrupts the membrane enough to let the nanoparticle in, but not so much as to destroy it. Once inside the cell, the conditions must be different enough that the nanoparticle, which is only held together by relatively weak forces, breaks open to release its RNA payload.

It’s taken a huge amount of work – over more than a decade – to devise and perfect a system that produces nanoparticles, that successfully envelops the RNA payload, that can survive in the body long enough to reach a cell, and that can deliver its payload through the cell membrane and then release it. What motivated this work wasn’t the idea of making an RNA vaccine.

One of the earliest clinical applications of this kind of technology was for the drug Onpattro, produced by the US biotech company Alnylam. This uses a different RNA based technology – so called small interfering RNA (siRNA) – to silence a malfunctioning gene in liver cells, to control the rare disease transthyretin amyloidosis. More recently, research has been driven by the field of cancer immunotherapy – this is the area for which the Founder/CEO of BioNTech, Uğur Şahin, received substantial funding from the European Research Council. Even for quite translational medical research, the path from concept to clinical application can take unexpected turns!

We all have to hope that the BioNTech/Pfizer vaccine lives up to its promise, and that at least some of the other vaccine candidates – both RNA-based and more conventional – are similarly successful; it will undoubtedly be good to have a choice, as each vaccine will have its own relative strengths and weaknesses. The big question now must be how quickly production can be scaled up to the billions of doses needed to address a world pandemic.

One advantage of the mRNA vaccines is that the vaccine can be made in a chemical process, rather than having to culture viruses in cells, making scale-up faster. Of course there will be potential bottlenecks. These can be as simple as the vials needed to store the vaccine, or the facilities needed to transport and store it – especially acute for the BioNTech/Pfizer vaccine, which needs to be stored at −80 °C.

There are also some quite specialised chemicals involved. I don’t know what will be needed for scaling up RNA synthesis; for the lipids to make the nanoparticles, I believe that the Alabama-based firm Avanti Polar Lipids has the leading position. This company was recently bought, in what looks like a very well-timed acquisition, by the Yorkshire based speciality chemicals company Croda, which I am sure has the capacity to scale up production effectively. Students of industrial history might appreciate that Croda was originally founded to refine Yorkshire wool grease into lanolin, so their involvement in this most modern application of nanotechnology, which nonetheless rests on fat-like molecules of biological origin, seems quite appropriate.

References.

The paper describing the BioNTech/Pfizer vaccine is: Phase I/II study of COVID-19 RNA vaccine BNT162b1 in adults.

The key reference this paper gives for the mRNA delivery nanoparticles is: Expression kinetics of nucleoside-modified mRNA delivered in lipid nanoparticles to mice by various routes.

The process of optimising the lipids for such delivery vehicles is described here: Rational design of cationic lipids for siRNA delivery.

A paper from the Robert Langer group describes the (very similar) kind of delivery technology that I presume underlies the Moderna vaccine: Optimization of Lipid Nanoparticle Formulations for mRNA Delivery in Vivo with Fractional Factorial and Definitive Screening Designs

Bad Innovation: learning from the Theranos debacle

Earlier this month, Elizabeth Holmes, founder of the medical diagnostics company Theranos, was indicted on fraud and conspiracy charges. Just four years ago, Theranos was valued at $9 billion, and Holmes was being celebrated as one of Silicon Valley’s most significant innovators – not only the founder of one of the mythical unicorns, but, through the public value of her technology, a benefactor of humanity. How this astonishing story unfolded is the subject of a tremendous book by the journalist who first exposed the scandal, John Carreyrou. “Bad Blood” is a compelling read – but it’s also a cautionary tale, with some broader lessons about the shortcomings of Silicon Valley’s approach to innovation.

The story of Theranos

The story begins in 2003. Holmes had finished her first year as a chemical engineering student at Stanford. She was particularly influenced by one of her professors, Channing Robertson; she took his seminar on drug delivery devices, and worked in his lab in the summer. Inspired by this, she was determined to apply the principles of micro- and nanotechnology to medical diagnostics, and wrote a patent application for a patch which would sample a patient’s blood, analyse it, use the information to determine the appropriate response, and release a controlled amount of the right drug. This closed loop system would combine diagnostics with therapy – hence Theranos (from “theranostic”).

Holmes dropped out of Stanford in her second year to pursue her idea, encouraged by Robertson. By the end of 2004, the company she had incorporated with one of Robertson’s PhD students, Shaunak Roy, had raised $6 million from angels and venture capitalists.

The nascent company soon decided that the original theranostic patch idea was too ambitious, and concentrated on diagnostics alone. Holmes focused on the idea of doing blood tests on very small volumes – the droplets of blood you get from a finger prick, rather than the larger volumes you get by drawing blood with a needle and syringe. It’s a great pitch for those scared of needles – but the true promise of the technology was much wider than this. Automatic units could be placed in patients’ homes, cutting out all the delay and inconvenience of having to go to the clinic for the blood draw, and then waiting for the results to come back. The units could be deployed in field situations – with the US Army in Iraq and Afghanistan – or in places suffering from epidemics, like ebola or zika. They could be used in drug trials to continuously monitor patient reactions and pick up side-effects quickly.

The potential seemed huge, and so were the revenue projections. By 2010, Holmes was ready to start rolling out the technology. She negotiated a major partnership with the pharmacy chain Walgreens, and the supermarket Safeway loaned the company $30 million with a view to opening a chain of “wellness centres”, built around the Theranos technology, in its stores. The US Army – in the powerful figure of General James Mattis – was seriously interested.

In 2013, the Walgreen collaboration was ready to go live; the company had paid Theranos a $100 million “innovation fee” and a $40 million loan on the basis of a 2013 launch. The elite advertising agency Chiat\Day, famous for their work with Apple, were engaged to polish the image of the company – and of Elizabeth Holmes. Investors piled in to a new funding round, at the end of which Theranos was valued at $9 billion – and Holmes was a paper billionaire.

What could go wrong? There turned out to be two flies in the ointment: Theranos’s technology couldn’t do even half of what Holmes had been promising, and even on the tests it could do, its results were unacceptably inaccurate. Carreyrou’s book is at its most compelling as he gives his own account of how he broke the story, in the face of deception, threats, and some very expensive lawyers. None of this would have come out without some very brave whistleblowers.

At what point did the necessary optimism about a yet-to-be developed technology turn first into self-delusion, and then into fraud? To answer this, we need to look at the technological side of the story.

The technology

As is clear from Carreyrou’s account, Theranos had always taken secrecy about its technology to the point of paranoia – and it was this secrecy that enabled the deception to continue for so long. There was certainly no question that they would be publishing anything about their methods and results in the open literature. But, from the insiders’ accounts in the book, we can trace the evolution of Theranos’s technical approach.

To go back to the beginning, we can get a sense of what was in Holmes’s mind at the outset from her first patent, originally filed in 2003. This patent – “Medical device for analyte monitoring and drug delivery” – is hugely broad, at times reading like a digest of everything that anybody at the time was thinking about in nanotechnology and diagnostics. But one can see the central claim – an array of silicon microneedles would penetrate the skin to extract blood painlessly, this would be pumped through 100 µm wide microfluidic channels, combined with reagent solutions, and then tested for a variety of analytes by detecting their binding to molecules attached to surfaces. In Holmes’s original patent, the idea was that this information would be processed, and then used to initiate the injection of a drug back into the body. One example quoted was the antibiotic vancomycin, which has rather a narrow window of effectiveness before side effects become severe – the idea was that the blood would be continuously monitored for vancomycin levels, which would then be automatically topped up when necessary.

Holmes and Roy, having decided that the complete closed loop theranostic device was too ambitious, began work to develop a microfluidic device to take a very small sample of blood from a finger prick, route it through a network of tiny pipes, and subject it to a battery of scaled-down biochemical tests. This all seems doable in principle, but fraught with practical difficulties. After three years making some progress, Holmes seems to have decided that this approach wasn’t going to work in time, so in 2007 the company switched direction away from microfluidics, and Shaunak Roy parted from it amicably.

The new approach was based around a commercial robot they’d acquired, designed for the automatic dispensing of adhesives. The idea of basing their diagnostic technology on this “gluebot” is less odd than it might seem. There’s nothing wrong with borrowing bits of technology from other areas, and reliably gluing things together depends on precise, automated fluid handling, just as diagnostic analysis does. But what this did mean was that Theranos no longer aspired to be a microfluidics/nanotech firm; instead it was in the business of automating conventional laboratory testing. This is a fine thing to do, of course, but it’s an area with much more competition from existing firms, like Siemens. No longer could Theranos honestly claim to be developing a wholly new, disruptive technology. What’s not clear is whether its financial backers, or its board, were told enough or had enough technical background to understand this.

The resulting prototype was called Edison 1.0 – and it sort-of worked. It could only do one class of tests – immunoassays – it couldn’t run many of these tests at the same time, and its results were not reproducible or accurate enough for clinical use. To fill in the gaps between what Theranos promised its proprietary technology could do and its actual capabilities, the company resorted to modifying a commercial analysis machine – the Siemens Advia 1800 – to be able to analyse smaller samples. This was essential to fulfil Theranos’s claimed USP of being able to analyse the drops of blood from pin-pricks, rather than the larger volumes drawn from a vein with a needle and syringe for standard blood tests.

But these modifications presented their own difficulties. What they amounted to was simply diluting the small blood sample to make it go further – but of course this reduces the concentration of the molecules the analyses are looking for, often below the range of sensitivity of the commercial instruments. And there remained a bigger question, one that hangs over the viability of the whole enterprise – can one take blood from a pin-prick that isn’t contaminated to an unknown degree by tissue fluid, cell debris and the like? Whatever the cause, it became clear that the test results Theranos was providing – to real patients, by this stage – were erratic and unreliable.

Theranos was working on a next generation analyser – the so-called miniLab – with the goal of miniaturising the existing lab testing methods to make a very versatile analyser. This project never came to fruition. Again, it was unquestionably an avenue worth pursuing. But Theranos wasn’t alone in this venture, and it’s difficult to see what special capabilities they brought that rivals with more experience and a longer track record in this area didn’t have already. Other portable analysers exist already (for example, the Piccolo Xpress), and the miniaturised technologies they would use were already in the market-place (for example, Theranos were studying the excellent miniaturised IR and UV spectrophotometers made by Ocean Optics – used in my own research group). In any case, events had overtaken Theranos before they could make progress with this new device.

Counting the cost and learning the lessons

What was the cost of this debacle? There was a human cost, not fully quantified, in terms of patients being given unreliable test results, which surely led to wrong diagnoses and missed or inappropriate treatments. And there is the opportunity cost – Theranos spent around $900 million, some of it on technology development, but rather too much on fees for lawyers and advertising agencies. But I suspect the biggest cost was the effect Theranos had in slowing down and squeezing out innovation in an area that genuinely did have the potential to make a big difference to healthcare.

It’s difficult to read this story without starting to think that something is very wrong with intellectual property law in the United States. The original Theranos patent was astonishingly broad, and given the amount of money the company spent on lawyers, there can be no doubt that other potential innovators were dissuaded from entering this field. IP law distinguishes between the conception of a new invention and its necessary “reduction to practice”. Reduction to practice can be by the testing of a prototype, but it can also be by the description of the invention in enough detail that it can be reproduced by another worker “skilled in the art”. Interpretation of “reduction to practice” seems to have become far too loose. Rather than giving the inventor the right to benefit from a time-limited monopoly on an invention they’ve already got to work, patent law currently seems to allow the well-lawyered to carve out entire areas of potential innovation for their exclusive investigation.

I’m also struck from Carreyrou’s account by the importance of personal contacts in the establishment of Theranos. We might think that Silicon Valley is the epitome of American meritocracy, but key steps in funding were enabled by who was friends with whom and by family relationships. It’s obvious that far too much was taken on trust, and far too little actual technical due diligence was carried out.

Carreyrou rightly stresses just how wrong it was to apply the Silicon Valley “fake it till you make it” philosophy to a medical technology company, where what follows from the fakery isn’t just irritation at buggy software, but life-and-death decisions about people’s health. I’d add to this a lesson I’ve written about before – doing innovation in the physical and biological realms is fundamentally more difficult, expensive and time-consuming than innovating in the digital world of pure information, and if you rely on experience in the digital world to form your expectations about innovation in the physical world, you’re likely to come unstuck.

Above all, Theranos was built on gullibility and credulousness – optimism about the inevitability of technological progress, faith in the eminence of the famous former statesmen who formed the Theranos board, and a cult of personality around Elizabeth Holmes – a cult that was carefully, deliberately and expensively fostered by Holmes herself. Magazine covers and TED talks don’t by themselves make a great innovator.

But in one important sense, Holmes was convincing. The availability of cheap, accessible, and reliable diagnostic tests would make a big difference to health outcomes across the world. The biggest tragedy is that her actions have set back that cause by many years.

The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, which matched colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as the title for a project which forms part of Sheffield University’s Festival of the Mind; for it, six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he retains one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
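Putting that in slightly more formal terms – this is standard colour science rather than anything new – if the three cone types have spectral sensitivities \( s_i(\lambda) \), then a light with spectral power distribution \( I(\lambda) \) is reduced by the eye to just three numbers,

\[ C_i = \int s_i(\lambda)\, I(\lambda)\, \mathrm{d}\lambda, \qquad i = 1, 2, 3, \]

and any two physically different spectra that happen to produce the same triple \( (C_1, C_2, C_3) \) look identical to us – which is why three well-chosen primaries are enough to match the sensation of any colour.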

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures that in beetle scales and bird feathers produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects that are produced by some structures that combine order and randomness in rather a striking way; our hope is to be able to understand the mechanism by which these structures form and then reproduce them in synthetic systems.

What we’ve come to realise as we speculate about the origin of these biological mechanisms is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour, which is likely to be quite alien to our perceptions. Birds, for example, have not three different types of colour receptors, as humans do, but four. This means not just that birds can detect light outside human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our x-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

Your mind will not be uploaded

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme, that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have on the simulation of consciousness.
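To give a very rough sense of the scales involved – these are my own back-of-envelope numbers, not a rigorous estimate – the counting goes something like this:

\[ N_{\text{neurons}} \sim 10^{11}, \qquad N_{\text{synapses}} \sim 10^{11} \times 10^{3\text{–}4} \sim 10^{14\text{–}15}. \]

So even a synapse-level simulation has to track something of order \(10^{15}\) dynamical variables; updating them at, say, millisecond resolution already implies around \(10^{18}\) updates per second of simulated time, and a molecular-level description – which is what I argue below is actually needed – multiplies the number of variables by many further orders of magnitude.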

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1–2 billion being discussed should it go public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagen Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumables cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of the company’s programme to develop a whole family of different pores able to discriminate between different types of molecules.

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the “if” here, as we haven’t yet seen any independent trials of the technology.) Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. In any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

A little history of bionanotechnology and nanomedicine

I wrote this piece as a briefing note in connection with a study being carried out by the Nuffield Council on Bioethics about Emerging Biotechnologies. I’m not sure whether bionanotechnology or nanomedicine should be considered as emerging biotechnologies, but this is an attempt to sketch out the connections.

Nanotechnology is not a single technology; instead it refers to a wide range of techniques and methods for manipulating matter on length scales from a nanometer or so – i.e. the typical size of molecules – to hundreds of nanometers, with the aim of creating new materials and functional devices. Some of these methods represent the incremental evolution of well-established techniques of applied physics, chemistry and materials science. In other cases, the techniques are at a much earlier state, with promises about their future power being based on simple proof-of-principle demonstrations.

Although nanotechnology has its primary roots in the physical sciences, it has always had important relationships with biology, both at the rhetorical level and in practical outcomes. The rhetorical relationship derives from the observation that the fundamental operations of cell biology take place at the nanoscale, so one might expect there to be something particularly powerful about interventions in biology that take place on this scale. Thus the idea of “nanomedicine” has been prominent in the promises made on behalf of nanotechnology from its earliest origins, and as a result has entered popular culture in the form of the exasperating but ubiquitous image of the “nanobot” – a robot vessel on the nano- or micro- scale, able to navigate through a patient’s bloodstream and effect cell-by-cell repairs. This was mentioned as a possibility in Richard Feynman’s 1959 lecture, “There’s Plenty of Room at the Bottom”, which is widely (though retrospectively) credited as the founding manifesto of nanotechnology, but it was already at that time a common device in science fiction. The frequency with which conventionally credentialed nanoscientists have argued that this notion is impossible or impracticable, at least as commonly envisioned, has had little effect on the enduring hold it has on the popular imagination.

Three things that Synthetic Biology should learn from Nanotechnology

I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

1. Mind that metaphor
Metaphors in science are powerful and useful things, but they come with two dangers:
a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules and expression operating systems. But it is only a metaphor; biology isn’t really digital, and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

On the opposite side, people concerned about science and technology find plenty to fear in these metaphors. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that most people’s experience of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants; the media demand big and unqualified claims to attract their attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement, can have the effect of giving credence to the most speculative possible outcomes.

There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up on unfulfilled promises, and in this environment people are less forgiving of the hard constraints that any technology faces. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient way of processing a biofuel, a new method of pest control, for example.

3. It’s not about risk, it’s about trust

The regulation of new technologies is focused on controlling risks, and it’s important that we try to identify and control those risks as the technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that the conversation turns. But often it isn’t really risk that is fundamentally worrying people – it’s trust. In the face of the inevitable uncertainties of new technologies, this makes complete sense. If you can’t be confident of identifying risks in advance, the question you naturally ask is whether the bodies and institutions that control these technologies can be trusted. It must be a priority, then, that we think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly help, but we have to ask whether it is realistic for these principles alone to be maintained in an environment demanding commercial returns from large-scale industrial operations.

On Descartes and nanobots

A couple of weeks ago I was interviewed for the Robots podcast special on 50 years of robotics, and predictions for the next half century. My brief was nanorobots, and you can hear the podcast here. My pitch was that on the nanoscale we’d be looking to nature for inspiration, exploiting design principles such as self-assembly and macromolecular shape change; as a particularly exciting current development I singled out progress in DNA nanotechnology, and in particular the possibility of using it to do molecular logic. As it happens, last week’s edition of Nature included two very interesting papers reporting further developments in this area – Molecular robots guided by prescriptive landscapes from Erik Winfree’s group at Caltech, and A proximity-based programmable DNA nanoscale assembly line from Ned Seeman’s group at NYU.

The context and significance of these advances are well described in a News and Views article (full text); the references to nanorobots and nanoscale assembly lines have led to considerable publicity. James Hayton (who reads the Daily Mail so the rest of us don’t have to) comments very pertinently, in his 10e-9 blog, on the misleading use of classical nanobot imagery to illustrate this story. The Daily Mail isn’t the only culprit here – even the venerable Nature uses a still from the film Fantastic Voyage to illustrate its story, with the caption “although such machines are still a fantasy, molecular ‘robots’ made of DNA are under development.”

What’s wrong with these illustrations is that they are graphic representations of bad metaphors. DNA nanotechnology falls squarely within the soft nanotechnology paradigm – it relies on the weak interactions by which complementary sequences recognise one another to drive the self-assembly of structures whose design is encoded in the component molecules themselves, and on macromolecular shape changes, under the influence of Brownian motion, to produce movement. Soft machines aren’t mechanical engineering shrunk, as I’ve written about at length on this blog and elsewhere.
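To make that base-pairing rule concrete, here is a minimal, purely illustrative Python sketch – not taken from either paper, and with invented sequences – of the idea that a strand “recognises” the reverse complement of its own sequence:

```python
# Toy illustration of Watson-Crick complementarity, the rule that makes
# DNA self-assembly programmable. The sequences below are invented.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the Watson-Crick reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_hybridise(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are exact reverse complements,
    i.e. they would pair up antiparallel along their full length."""
    return strand_b == reverse_complement(strand_a)

# Example: a 'sticky end' and the strand designed to capture it.
sticky_end = "ATGCGT"
capture = reverse_complement(sticky_end)    # "ACGCAT"
print(can_hybridise(sticky_end, capture))   # True
print(can_hybridise(sticky_end, "ATGCGT"))  # False - identical, not complementary
```

The point of the toy is only that the “wiring diagram” of a DNA structure is written in the sequences themselves: change the sequences and you change which strands can find each other.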

But there’s another, more subtle point here. Our classical conception of a robot is something with sensors feeding information into a central computer, which responds to this sensory input by a computation, which is then effected by the communication of commands to the actuators that drive the robot’s actions. This separation of the “thinking” function of the robot from its sensing and action is something that we find very appealing; we are irresistibly drawn to the analogy with the way we have come to think about human beings since Descartes – as machines animated by an intelligence largely separate from our bodies.

What is striking about these rudimentary DNA robots is that what “intelligence” they possess – their capacity to sense the environment and process this information to determine which of a limited set of outcomes will be effected – arises from the molecules from which the robot is made and their interaction with a (specially designed) environment. There’s no sense in which the robot’s “program” is loaded into it; the program is implicit in the construction of the robot and its interaction with the environment. In this robot, “thought” and “action” are inseparable; the same molecules both store and process information and drive its motion.

In this, these proto-robots operate on similar general principles to bacteria, whose considerable information processing power arises from the interaction of many individual molecules with each other and with their physical environment (as beautifully described in Dennis Bray’s book Wetware: a computer in every living cell). Is this the only way to build a nanobot with the capacity to process and act on information about the environment? I’m not sure, but for the moment it seems to be the direction we’re moving in.
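To make the “program implicit in the construction” point a little more tangible, here is a deliberately crude toy in Python – not a model of the DNA chemistry in either paper, just a sketch of the idea that a walker’s behaviour can be written into the landscape it walks on rather than into any controller it carries:

```python
# A crude sketch of the 'program in the substrate' idea. The walker has
# no controller: at each step it consumes the site it stands on and moves
# to an adjacent site that still carries fresh substrate ('S'), loosely
# echoing a molecular walker on a prescriptive landscape. The layout of
# the track - not any stored program - decides the path taken.

def walk(track: list[str], start: int) -> list[int]:
    """Follow a 1-D track of substrate sites from `start`, consuming
    each visited site, until no fresh substrate is adjacent."""
    path = [start]
    sites = list(track)
    sites[start] = "x"          # the visited site is used up
    pos = start
    while True:
        neighbours = [p for p in (pos - 1, pos + 1)
                      if 0 <= p < len(sites) and sites[p] == "S"]
        if not neighbours:
            break               # no fresh substrate to bind: the walker stops
        pos = neighbours[0]
        sites[pos] = "x"
        path.append(pos)
    return path

# Two different 'landscapes' give two different behaviours,
# with no change to the walker itself.
print(walk(["S", "S", "S", "S", "S", "S"], 0))  # [0, 1, 2, 3, 4, 5] - walks the whole track
print(walk(["S", "S", "S", "x", "S", "S"], 0))  # [0, 1, 2] - the gap in the landscape stops it
```

Change the track and you change the behaviour without touching the walker – a loose, purely conceptual echo of the “prescriptive landscape” idea.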