Three things that Synthetic Biology should learn from Nanotechnology

I’ve spent the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This was a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons that new emerging technologies like synthetic biology might learn from the experience of nanotechnology. This is more or less what I said.

It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

1. Mind that metaphor
Metaphors in science are powerful and useful things, but they come with two dangers:
a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules and expression operating systems. But it is only a metaphor; biology isn’t really digital, and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that most people’s experience of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants; the media demand big and unqualified claims before they will pay attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement, can have the effect of giving credence to the most speculative possible outcomes.

There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up on unfulfilled promises, and in this environment people are less forgiving of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for producing a biofuel, a new method of pest control, for example.

3. It’s not about risk, it’s about trust

The regulation of new technologies is focused on controlling risks, and it’s important that we try to identify and control those risks as a technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that the conversation turns. But often it isn’t really risk that is fundamentally worrying people; it is trust. In the face of the inevitable uncertainties of new technologies, this makes complete sense. If you can’t be confident of identifying risks in advance, the question you naturally ask is whether the bodies and institutions that control these technologies can be trusted. It must be a priority, then, to think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly help, but we have to ask whether it is realistic for these principles alone to be maintained in an environment demanding commercial returns from large-scale industrial operations.

16 thoughts on “Three things that Synthetic Biology should learn from Nanotechnology”

  1. Excellent speech Richard, these are very helpful lessons.

    I have been pondering your first point about metaphors. I totally agree that they are important, but the trick is to invent effective language quickly, to counteract the geek language. Unless someone starts to articulate how synbio works in a more effective and socially useful way then, as you say, the metaphors stick and attitudes to a technology become fixed. It seems harsh to whinge that the public doesn’t understand a technology when all they have to go on are unhelpful distinctions, inappropriate metaphors and a dearth of information in a language they can understand.

    Your Economy of Promises has certainly set in with nano, where rather prosaic incremental consumer products are all we see at the moment. The difficulty for nano is that it is a relatively simple technology in some areas, and there is nothing stopping someone inventing ‘nano water’ and other dodgy apps ‘infecting’ the whole ‘ology’.

    Your point about risk and trust has been backed up by our recent literature review of dialogues on emerging tech, which looked at how the public wanted to be involved and what information they wanted to see. They are much more interested than we expected in the process of development when it comes to building their trust in a technology. We think this is eminently communicable by universities, government and business without divulging any sensitive IP.

    This lesson on communication goes back to GM and others. The thing is that no-one, particularly business, seems to have made even the slightest effort, so it is hardly surprising if there is suspicion in the face of what looks to many like ‘secrecy’.

    I fancy http://www.synbioandme.com to follow on from our http://www.nanoandme.com – I wonder if I will be able to persuade anyone to fund it!

  2. Thanks, Hilary. One thing to consider when it comes to scientists’ use of metaphor is that it’s often used as a rhetorical tool to prosecute differences of opinion and promote points of view within the scientific community. It’s not necessarily the original intention that these metaphors leak out into the outside world, as it were, but if they’re powerful ones they inevitably will.

  3. Hi Richard,

    Very interesting analysis that resonates with my own research (I’m doing a masters thesis on nanotechnology in the Australian social context) – in particular, your description of an ‘economy of promises’ and your reflection that “nanotechnology was the future once”. This is also true in Australia – nanotech is now much less prominent in discussions about science and future technologies than it was a few years ago.

    Is it your view that nanotech has been – or is being – eclipsed by other newly defined realms of emerging technology? That is, the future has now been re-defined? Or perhaps you see things differently to this. I’d be interested in any further such reflections you might have.

    Regards,
    Stephen McGrail
    Postgraduate research student
    University of Melbourne

  4. To anybody interested in how metaphor works in science and elsewhere, or anybody who just needs a ‘good read’, I would recommend “I is an Other: The secret life of metaphor and how it shapes the way we see the world” (2011) by James Geary. In the chapter on metaphor and science, he writes, for example: “In science, metaphor tells you what things are like, not what they are…” (p. 177) [but that is not always apparent to people who get hold of these metaphors]. And: “Metaphors, once forgotten or ignored, are easily mistaken for objective facts. If that happens in science, analogies congeal into dogmas, losing the elasticity that made them useful in the first place. Science is like a rice pudding: firm and fully formed on top but pliable and in constant motion underneath.” (p. 178)

  5. Stephen, the answer to your question depends on whether you think of nanotechnology as a socio-political project or as a collection of research fields or technological application areas. The research fields and technological applications are ploughing steadily on, but it is the urgency of the socio-political project that seems to be ebbing.

    Thanks for that recommendation, Brigitte, which sounds very interesting. To add an addendum to the quote about metaphors in science – sometimes metaphor tells you what you wish things to be like, rather than what they actually are like.

  6. Thanks Richard, that is an extremely useful distinction, which I will incorporate into my thesis. My own research has mostly focussed on changing expectations and the responses to the ‘promise’ of nanotech. I guess a central, interesting question is whether the spike in interest and subsequent ebbing is just an inevitable cycle (soon to be encountered by synbio) or whether something specific about nanotech has led its urgency – as a socio-political project – to ebb. Or a bit of both…

    Regards
    Stephen

  7. Hi

    My Sixpence…

    In a previous post, I suggested that better communication between scientists, engineers and the Scientific Literati, via better utilization of the Internet, could overcome these very powerful complexity problems. I am writing to add that thinking about these problems from a software point of view WILL PREVAIL.

    The first reason is that over the last 40 years, although Moore’s Law was crucial, radical new algorithms were crucial as well!!! (for example, the FFT).

    Secondly, the same problems of how to control complex systems under noisy constraints crop up EVERYWHERE.

    That means that EVERYBODY IS WORKING ON THE SAME PROBLEMS.

    Perhaps a Solution (In Engineering Terms) has been found in the Complexity/Machine Learning communities and it comes under the heading of Smoothed Analysis.

    Once these ideas spread, people will look at these problems in a radically new light, and 2010 – 2020 will be remembered in the future not as the age of Austerity but as the age of BioNanoTech!

    Keep the Faith

    Zelah

  8. “such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.”

    What is the basis for this sweeping statement? You’re essentially saying that even if molecular machines that DID work exactly like Drexler’s original vision are possible, if they need special conditions to work they’re worthless.

    You mention several problems that have nothing to do with the actual workings of these machines, just their efficiency.

    1. Spontaneous rearrangements eventually causing subunits of this type of machinery to fail (and need replacement, meaning the machine will have to be able to remove sections from itself and replace them with freshly manufactured parts)

    2. Frictional forces producing heat. The machine would be less efficient, and would need much more energy to create a given object than the perhaps optimistic projections in Nanosystems suggest. A machine would need to be actively cooled, with a huge external refrigeration unit and heat radiator relative to the size of the actual machinery generating products.

    3. Leaks in the molecular filters. Any process you use to filter out the molecules you don’t want, leaving only the ones you do want, cannot be 100% efficient. You could string together a series of steps, however, to get any arbitrary purity that is less than 100%. The more steps, of course, the more energy the machine would use. The point is, you don’t need 100% – eventually the machine will let through a molecule that causes it to fail, or misplace an atom, but this doesn’t mean the problem cannot be handled. Most likely the first generation machines would be an array of “print heads” on some kind of movable platform. When heads fail or jam, their internal control logic would recognize a fault has occurred and report it back to the control computer. The software in that computer would handle the problem by issuing new instructions to the still functioning subunits to do the tasks the failed subunit was assigned.

    4. A liquid cellular environment being hostile to the machinery. I agree entirely with that, and I don’t think these machines would be used for medical applications directly by putting them into a living body.

    But given these objections, here is why “software control of matter” would still be just as good an idea as all of those in favor of it say it is.

    Look, every one of your objections just slows the equipment down. It does NOT make it impractical, just much more costly in terms of energy needed to run it. You still have the critical advantages of:

    1. Engineering simplicity. At the level of individual atoms, all matter is simple. You would be able to create ANY product of arbitrary complexity that you have an atomic blueprint for and a way to deal with unstable intermediates, so long as you can supply the energy necessary and have the time. You no longer need 10 million manufacturing processes to create 10 million products: you need just 1. This is a priceless advantage.

    2. Exponential growth. Even if this machinery takes a month to create an object the thickness of a sheet of paper, if you can replicate the actual assembly devices exponentially, you can be working on a product in parallel all over the place. And you can redesign what you are building to be composed of a set of very thin parts.

    3. Energy is not a problem. You are forgetting that the hard, physical reason why energy IS a problem today is that humanity cannot produce the extremely complex machines needed to collect almost limitless energy without ridiculous amounts of human labor. More human labor = more cost. Here’s what I am talking about.
    There are 2 easy ways to gather more energy than we can foresee a use for:
    a. Nuclear fission reactors. Nuclear reactors are some of the most complex machines built, and they are built in low volumes. This means that the human labor needed to build a working plant, much less a plant with many additional complex safety features to minimize the risk of a nasty disaster, is stupendous.

    However, if you could produce, in a set of vacuum chambers at cryogenic temperatures, all of the parts for a nuclear reactor in parallel over a period of 1-10 years, with completely automated equipment, the ultimate cost would be a fraction of the current cost.

    Each “part” would be a fully ready to function subunit of the final reactor, and would not need to be inspected because any misplaced atoms would be so few and far between as to be irrelevant. Human workers would still need to attach the subunits to each other, but this would be a task requiring minimal skill. Eventually you would design assembly robots able to assemble the final product, and “print” those out in parallel as well.

    Build a few nuclear reactors, and use the power generated to run more power sucking nanotechnology fabrication systems building more nuclear reactors, and so on. Energy would not be a problem, ultimately.
    b. Solar panels in space. Again, ultimately the reason we cannot do this now is that any satellite and any orbital rocket system requires colossal amounts of human labor to build – on the order of tens of thousands of dollars per kilogram. However, if we could manufacture the parts automatically, it would cost no more than the raw materials and the energy.

    4. Controlled evolution. Ultimately, your pessimistic assumptions about performance are more than likely wrong. If you can build a single working assembler, you can use your first assembler to create many other possibilities that you would test. While the first generation systems would likely have many problems, if you can test billions of alternative designs in the real world, with millions of people working on the problem, ultimately someone will discover more optimal ways of making the system work.

    If there is A way to make an efficient molecular assembler that runs at extremely high speed, handles high temperatures and power densities, and so forth – then someone will discover it. Maybe it will use flexible internal parts, who knows. The reason to use rigid parts is to create a first generation system that actually works at all, and that we can use to produce the following generations. The same goes for the concept of a computer based on gears – it’s easy to model.

    Drexler of course mentioned nearly all of these ideas in his first books – maybe you should actually read them again. His central premise is that there’s A way to build a self replicating system, and that the advantages of self replication would be ultimately world changing. He does NOT ever say that the first systems would have anything close to the ultimate performance that might be possible someday. In addition, he believes the right way to approach the problem is to come up with possible designs, and to test them. You’re completely right that theoretical models of, say, friction, are wrong. But this doesn’t mean that if we test 20 possible designs for a molecular motor or gear we won’t find that one or two of the designs really work.

    The public – the governments of the U.S. and the U.K. – have trusted you and other scientists to attempt to build a working molecular assembler. This is what they actually want from you, no matter how hard the problem looks at first. Maybe you should spend the money allocated to that task on attempts to build such a machine instead of claiming it isn’t ever possible and putting the money into research that will NEVER result in such a device.

    Finally, the medical applications.
    Recent research has shown that it is possible to freeze living organs to cryogenic temperatures and revive them. An animal organ is a very complex set of chemical bonds in an arbitrary molecular configuration. In the long run, you would freeze human donor organs cryogenically under special conditions that you have determined experimentally will allow you to revive the organ by heating it later. (Since this has already been accomplished on a slightly smaller scale, it is possible.)

    You then use a modified version of an assembler that disassembles a complex object instead and, under cryogenic conditions, with a large amount of energy, and over a time scale that might take years, you create a molecular map of the entire organ. You can now create in parallel as many human organs as you need. Instead of “nanobots”, you repair human beings with limitless donor organs.

    The above is just an example of how to solve the problem. Again, so what if Drexler was slightly wrong about exactly how a molecular assembler might work or perform? Think like an engineer. GIVEN the limitations, how do I make a technology useful? GIVEN the limitations, how do I make a technology work at all? And spend the public’s money accordingly.

  9. Two more addenda: Richard, this already happened. The first molecular assemblers – biological molecules randomly constructed in earth’s primordial soup – were extremely slow and inefficient. They were also SIMPLE – random chance is unlikely to produce complex structures. In any case, legacy decisions eventually constrained the system.

    Even though scientists can EASILY (in a single research lab) add extra amino acids and shift to a 4-base codon system, nature is stuck using 3-base codons for its amino acids (a triplet code allows 4^3 = 64 codons; a quadruplet code would allow 4^4 = 256). This means that any upgrade to the codebase – going to 4-base codons – would mean that a cell could not read its own code, which includes millions of complex instructions the cell needs to compete with other cells.

    This means that nature is constrained: it cannot develop a novel system, because the first versions of a technology would make a given living cell inferior to existing competitors. This is why natural systems look the way they do, why they work only in water, why they cannot make any arbitrary molecular product, and so forth.

    This has critical implications for evolution – big evolutionary changes happen when a new space in the ecology is opened up. When life moves into a new environment, free of competitors, it can try new possibilities that at first reduce performance. This is why the bacteria in hot springs have such novel internal systems – no other life can move into that space. The same goes for the plants and animals at undersea geothermal vents.

    What am I saying? I’m saying that if we try to use biology to build our nanotechnology, WE will always be held back by the many complex legacy decisions that nature has made over 3 billion years. This is why so much stuff doesn’t work reliably in the field, why predictions are so bad, and so forth. 3 billion years of ‘cruft’ means that it would be like trying to upgrade a software system more complex than Windows by injecting code into whatever memory we can access. You might be able to make things happen, but you’re always going to be constrained by the bloat of the existing system.

    This means that just because nature cannot do better doesn’t mean we can’t EASILY do better. We, unlike nature, can build a complex system that only works at all if every component is present in a rigid orientation relative to the other components. Nature COULD NOT ever build objects at the nanoscale with such criteria, given what it has had to work with over the last 3 billion years.

    I’m saying that the main dream – self replication – isn’t really that hard. Random chance did it – there’s probably a way for us to build a simple system the same way.

    I’m saying that once we have self replication the first time, we can create an entire ecosystem of devices based on ONE ACCOMPLISHMENT. ONE protocell is responsible for all life on the planet. ONE CELL. This is the only physical way the ecosystem we see could share all the same codon codings…there had to be a single place on the planet where the first protocell using ribosomes arose. This first cell probably did use components borrowed from other, earlier life, but ultimately it pushed aside everything else.

    So why are you and all the other scientists (but a handful) not working on the problem? It’s not your fault – it’s the system.

    You are only rewarded for research if you and a handful of other scientists create something that works, right now. Nature, since it already is a self replicating system, is easy to hack to get it to do something new. (When I say “easy”, I mean with a few million dollars and tens of thousands of hours of lab time…a lot of work, but nothing compared to the GDP of a nation.)

    Moreover, in order to get working results with the real kind of nanotechnology – molecular manufacturing – you would need a collaboration: a team with thousands of scientists all working together, with guaranteed funding for a certain amount of time, and several years. No paper writing – just scientists actually coming up with possible designs for each crucial component of a basic assembler and building and testing them. Well, first, building and testing designs for the equipment needed to build the parts for a prototype molecular assembler – probably a multi-headed STM with robotic control of the probes, software, and variable tool tips. That right there is a multi-million dollar machine, and you’d need hundreds of them, one for each team of scientists.

    But rather than telling everyone it isn’t even worth doing – perhaps you should consider that just MAYBE it is possible, and that the rewards for making it work WOULD be worth it if it were, and actually come up with a plan to make it happen?

    I don’t have my hopes up, of course. I think that the only progress on this front will come from the semiconductor manufacturers, who will be at this scale sooner or later, and will probably construct equipment that could be modified to produce a prototype molecular assembler. (That is, in 20 years you could use semiconductor manufacturing equipment to build the parts for the first molecular assembler.)

  10. Gerald, I still think you’ve missed my main point. You’re quite right to look to biology as an existence proof for molecular nanotechnology, but the key point about cell biology is that, because it operates in a very different environment to the one we’re used to, in which physics looks very different, the design principles it uses are very unfamiliar. Conversely, the design principles of our own macroscopic engineering are inappropriate in this nanoscale environment. It’s certainly true that some aspects of cell biology represent the frozen accidents of evolution, but I don’t think this is true of the fundamental design paradigm it uses – what I call in my book “soft machines”. And equally one doesn’t have to be restricted to using materials of biological origin to use these design principles, so it’s quite possible in principle to think of a non-biological molecular nanotechnology based on soft machine approaches. This will, though, look very different from the fundamentally mechanical paradigm of “Nanosystems”.

    See this earlier piece – Right and wrong lessons from biology – for more discussion.

  11. Richard : the reason to use stiff, inflexible parts, even if friction and random molecular motion causes serious problems, is because it is easier to predict what they will do. The goal is to achieve self replication, so that human civilization will have enough of this tool to develop the technology further.

    So you’re probably exactly right – we’ll design very inefficient, inflexible, error prone devices that ARE based on design principles from macroscopic engineering. These devices probably WILL still work – but you are correct, the living examples don’t work this way so my “will” is wishful thinking in some people’s view.

    Once we have them working, THEN we make the designs more complex, designing parts to be flexible and to self-correct for incorrect operations, if that in fact improves performance. Many of the complex interactions in protein based enzymes allow the enzyme to respond differently when an error happens and to correct it. Enzymes are also shaped so that many different starting configurations are possible, yet the enzyme flexes in a way that most of them still lead to the desired product. And of course there’s a mess of unpredictable regulation.

    Do you honestly believe that it would be easier to build machines capable of producing any arbitrary product starting with biological systems?

    Biological systems:
    Pros:
    1. They work.
    2. They perform a very limited subset of possible chemistry very efficiently.
    3. It is easy to reprogram them.
    Cons:
    1. Legacy complexity means that it is probably next to impossible to reach the desired goal: arbitrary control of matter.
    2. They require dirty environments, which make arbitrary control of matter even less likely.

    Vacuum-phase, exotic-condition systems:
    Pros:
    1. Components will operate under tightly controlled conditions that are CONSISTENT. This is also why you make the parts as stiff and inflexible as possible: for consistent behavior with the fewest number of states possible. I agree that CONSISTENT != PREDICTABLE. If we design a bearing using software, it is not going to behave that way in a real vacuum chamber in the real world. But however the object we build behaves, it will behave the SAME way every time. Nature has no such consistency.
    Cons:
    1. Friction, vibration, unknown factors, and the one on which you base your decision to deny funding: we don’t know beyond doubt whether self replication this way is possible.

  12. Anyways, simplifying it down.

    GOAL: achieve self replication with a technology that can eventually result in the ability to make any arbitrary arrangement of atoms.

    You believe that such a goal is not proven to be possible (technically, no technology we don’t already have is proven possible) and thus is not even worth researching, because it would take a very large amount of resources to develop even if it is possible.

    But, ASSUMING the goal is worth spending the money on, and the money does exist: over a billion a year is officially being spent on the research, and the U.S. Congress and President have been led to believe that the money IS going to that goal. They have evidently been misled, but THEY are the customers. NOT the scientists who are commissioned to do the work.

    Then how would you do it?

    The only logical way is to develop a technology from the ground up: one where we develop a series of simple mechanisms that work at the molecular level. The mechanisms need to be consistent, completely independent of biology, and they need to work under the best possible controlled conditions for maximum consistency. That is, the mechanisms we build must work in a vacuum free of all contaminants, fed with pure feedstocks, and at low temperatures.

    Once we develop many types of mechanisms (transistors, motors, gears, bearings, and so on and so forth) someone has to work out a way to assemble the mechanisms we know about into a working machine that will be able to assemble another copy of itself.

    Mechanisms can be flexible: there’s nothing wrong with that. However, the more constrained they are, the better.

    And they should not depend on fluids or conditions that are not likely to be present in a machine we are trying to make work reliably.

    The problem is that what I am describing is harder than what you are doing, is it not? And you can’t get credit without publishing results that actually work, right?

  13. Gerald, why do you say that I’ve made a “decision to deny funding”? You seem to have a very high regard for my influence! To remind you, I’m not an American and I don’t work in the USA, so I have nothing to do with what happens with US science policy. Meanwhile plenty of people across the world are working on single molecule manipulation at low temperatures and at UHV, including a few in the UK, and I’m pleased they’ve got funding for that. People who successfully demonstrate working nanoscale mechanisms and devices (whether in UHV, more bio-inspired versions in ambient conditions, or indeed electronic mechanisms such as single molecule transistors) have no problem publishing in very high profile journals and getting credit for it. But they’d be the first to stress that they’re a very long way away from the goal of making self-replicating machines.

  14. You’ve openly admitted it before: you are on one of the funding committees in the U.S., your own work is in the fluid phase, and in general you keep repeating a view that is basically ‘politically correct’. Your view is basically that all of the supposed advantages of self-replication are probably massively exaggerated, that we need to move very slowly and carefully, and that it’s better to do whatever research the individual scientists requesting grants want to do (“going where the technology takes them”) rather than something that could potentially lead to the end goal. You have essentially said that the killer reason to even do this is not possible. The reason why this technology would work well in theory is that

    a. Self replication would allow us to have huge numbers of assembly devices, available to everyone in principle (though not in reality, probably, due to obvious security and intellectual property concerns).

    b. Complex objects are just chemical bonds at a high level of zoom, and it is possible to build a self replicating device that can perform arbitrary chemical synthesis steps, and to stabilize intermediates with some kind of control of electric charge. The final technology would be able to start with a molecular bonding map of the desired product. Software would convert the map into a series of steps at the nanoscale that would add and remove atoms and add and remove charge to stabilize intermediates. These steps would be executed by subunits of your machinery, allowing software control of matter.

    What makes me so angry about your ideas and decisions is that perhaps “a” and “b” are wrong. But they are the big reward…the REASON to spend huge sums of money on the research. A pair of socks that doesn’t stink, or a windshield that doesn’t fog up, is not the reason.

    Am I summarizing your views correctly? You do consider a and b to either be impossible, unlikely, or not worth it due to downsides like high energy consumption?

    That’s a fine view to have…but without working towards this final goal, why is nanotechnology research worth such a huge effort? How is research that DOESN’T lead to this final goal somehow even worth doing?

  15. And that’s exactly it. There’s a chance that a&b are not possible. Perhaps a large contingent of scientists consider a&b not worth considering because they can’t conceive of it working anytime in the plausible future. So there’s no way to justify the Manhattan Project level of funding that would be needed to make a&b actually work, if it is possible.

    Clean smelling socks, fog resistant windows and various other gimmicks may not be world changing, but they are worthwhile things to have, and possibly worth spending the funds to research. Scientific research as a whole does pay for itself, because the net result of centuries of research has expanded the world’s economies far more than the research ever cost.

    Your view is of the establishment. You’re happy with the way things are. You’ve got a nice job, tenure obviously, probably a stable family, and you’re middle-aged. You’ve got a nice position in the world as it is, far better than anything your ancestors ever had, and you’re happy to keep things as they are. Maybe no one will ever use the molecular motors you’re working to develop, but then again they might, and maybe you’ll discover something actually useful by accident along the way. You probably teach students as well, and many of them have gone on to do valuable, productive things.

    My view is that I want change. I see nothing in the laws of physics that prevents radical, tremendous changes. The laws of physics don’t prevent us from stopping all starvation and disease worldwide – we just need some way to create the necessities of life at minuscule cost. Self replicating machinery would give us such a thing. The laws of physics don’t prevent us from outlawing death itself – our very minds and personalities are just information, stored at the molecular level, and it’s possible to back up information of any type if you have the technology to read it.

    But if you went back in time to the 1880s, and told tenured, middle-aged faculty about the wonderful things they could have in a few years if they actually worked on it, they wouldn’t listen. Or the 1850s, or whatnot. There was progress made during those times, but it was incremental and gradual and took decades. Even if you told a professor of 1880 exactly what it took to achieve supersonic jet travel, and explained that the energy stored in lamp oil was more than sufficient (if burned at a prodigious rate), and gave them the equations for air resistance, and demonstrated that bullets are supersonic, you wouldn’t make much progress.

    In the end, while I may strongly disagree with your views, it’s no mystery why you hold them. But try to look past them for a moment…and ask yourself…what is actually possible, versus what it’s popular to say is possible?

  16. For a summary of what happened on my watch in the small world of UK nanotechnology, take a look here:
    http://www.softmachines.org/wordpress/?p=516
    Since you mention “software control of matter”, you might like to look at what projects were funded under a programme with that title that I directed:
    http://www.softmachines.org/wordpress/?p=275

    Clearly we disagree, but maybe you might consider the possibility that I hold the views I hold, not because it suits my station in life, but because I’ve thought about and researched these issues? (And I wouldn’t mind your amateur psychology if it were at all consistent with what I’ve written over the years, which it isn’t.)
