The theme of my book Soft Machines is that the nanomachines of biology operate under quite different design principles from those we are familiar with at the macroscale. These design principles exploit the distinctive physics of the nanoworld, rather than trying to engineer around it. The combination of Brownian motion – the relentless shaking and jostling that’s ubiquitous in the nanoworld, at least at normal temperatures – and strong surface forces is exploited in the principle of self-assembly. Brownian motion and the floppiness of small-scale structures are exploited in the principle of molecular shape change, which is how our muscles work. We are well on our way to exploiting both these principles in synthetic nanotechnology. But there’s another design principle that’s extensively used in Nature, and that nanotechnologists have not yet exploited at all. This is the idea of chemical computing – processing information by using individual molecules as logic gates, and transmitting messages through space using the random motion of messenger molecules, driven to diffuse by Brownian motion. These are the mechanisms that allow bacteria to swim towards food and away from toxins, but they also underlie the intricate ways in which cells in higher organisms like mammals interact and differentiate.
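To get a feel for what diffusive messaging means, here is a minimal sketch of Brownian transport as an unbiased random walk. Everything in it is illustrative – the walker counts, step counts and unit steps are arbitrary choices, not a physical simulation – but it shows the defining signature of diffusion: the mean-square displacement grows linearly with time, so the typical distance covered grows only as the square root of the time elapsed.

```python
import random

# Toy model of a Brownian messenger: many independent 1-D random walkers,
# each taking unit steps left or right with equal probability. The
# mean-square displacement after n_steps kicks grows linearly with
# n_steps -- the hallmark of diffusion. All parameters are illustrative.
def mean_square_displacement(n_walkers=10_000, n_steps=100, seed=42):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))  # one Brownian kick
        total += x * x
    return total / n_walkers

# Typical distance ~ sqrt(time): quadrupling the steps only doubles
# the typical displacement. This is why diffusion is an effective way
# to deliver messages over nanometres but hopeless over metres.
for t in (25, 100, 400):
    print(t, mean_square_displacement(n_steps=t))
```

The scaling is the whole point: over the few tens of nanometres inside a cell, diffusion delivers a messenger molecule almost instantly and for free, which is why evolution uses it so freely at this scale.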
One argument that holders of a mechanical conception of radical nanotechnology sometimes use against trying to copy these control mechanisms is that they are simply too complicated to deal with. But there’s an important distinction to make here. These control systems and signalling networks aren’t just complicated – they’re complex. Recent theory of the statistical mechanics of this sort of multiply connected, evolving network is beginning to yield fascinating insights (see, for example, Albert-László Barabási’s website). It seems likely that these biological signalling and control networks have some generic features in common with other complex networks, such as the internet, and even, perhaps, free market economies. Rather than viewing these networks as the hopelessly complicated result of billions of years of aimless evolutionary accretion, we should perhaps think of them as optimally designed for robustness in the noisy and unpredictable nanoscale environment.
It seems to me that if we are going to have nanoscale systems of any kind of complexity, we are going to have to embrace these principles. Maintaining rigid, central control of a large-scale system can seem superficially attractive, but such control systems are often brittle and fail to adapt to unpredictability, change and noise. The ubiquity of noise in the nanoscale world offers a strong argument for using complex, evolved control systems. But we still lack some essential tools for doing this. In particular, biological signalling relies on allostery. This principle underlies the operation of the basic logic gates in chemical computing; the idea is that when a messenger molecule binds to a protein, it subtly changes the protein’s shape and so affects its ability to carry out a chemical operation. Currently, synthetic analogues for this crucial function are very thin on the ground (see this abstract for something that seems to be going in the right direction). It would be good to see more effort put into this difficult, but exciting, direction.
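The logical content of allostery can be made concrete with a deliberately simple sketch. Nothing here corresponds to a real protein – the class, the two-site layout and the AND behaviour are all hypothetical choices for illustration – but it captures the essential idea: binding events at effector sites switch a catalytic activity on or off, so the molecule computes a Boolean function of its chemical inputs.

```python
# Hypothetical sketch of an allosteric protein acting as an AND gate.
# The protein is catalytically active only when messenger molecules
# occupy both of its effector sites, mimicking the way ligand binding
# shifts a protein's conformation and switches its activity on or off.
# The names and the two-site design are illustrative, not drawn from
# any real protein.
class AllostericGate:
    def __init__(self):
        self.sites = {"A": False, "B": False}

    def bind(self, messenger):
        self.sites[messenger] = True    # binding nudges the conformation

    def release(self, messenger):
        self.sites[messenger] = False   # unbinding relaxes it back

    @property
    def active(self):
        # The "shape change": catalysis is switched on only when both
        # effector sites are occupied -- a molecular AND gate.
        return self.sites["A"] and self.sites["B"]

gate = AllostericGate()
gate.bind("A")
print(gate.active)   # False: one messenger alone is not enough
gate.bind("B")
print(gate.active)   # True: both bound, the gate fires
```

Other Boolean functions follow the same pattern – a site whose occupation switches activity off gives a NOT-like input, and networks of such gates, coupled by diffusing messengers, are exactly the chemical computers that bacteria run.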