We sold out our energy future

Everyone should know that the industrial society we live in depends on access to plentiful, convenient, cheap energy – the last two hundred years of rapid economic growth have been underpinned by the large scale use of fossil fuels. And everyone should know that the effect of burning those fossil fuels has been to markedly increase the carbon dioxide content of the atmosphere, resulting in a changing climate, with potentially dangerous but still uncertain consequences. But a transition from fossil fuels to low carbon sources of energy isn’t going to take place quickly; existing low carbon energy sources are expensive and difficult to scale up. So rather than pushing on with the politically difficult, slow and expensive business of deploying current low carbon energy sources, why don’t we wait until technology brings us a new generation of cheaper and more scalable low carbon energy? Presumably, one might think, since we’ve known about these issues for some time, we’ve been spending the last twenty years energetically doing research into new energy technologies?

Alas, no. As my graph shows, the decade from 1980 saw a worldwide decline in the fraction of GDP that major industrial countries devoted to government funded energy research, development, and demonstration, with only Japan sustaining anything like its earlier intensity of energy research into the 1990s. It was only in the second half of the decade after 2000 that we began to see a recovery, though in the UK and the USA a rapid upturn following the 2007 financial crisis has fallen away again. The rapid post-2000 growth of energy RD&D in Korea is an exception to the general picture. There’s a good discussion of the situation in the USA in a paper by Kammen and Nemet – Reversing the incredible shrinking energy R&D budget. But the largest fall by far was in the UK, where, at the 2003 low point, the fraction of national resources devoted to energy RD&D had fallen to an astonishing 0.2% of its value at the 1981 high point.

Government spending on energy research, development and demonstration, as a fraction of GDP. Data: International Energy Agency.


Why isn’t the UK the centre of the organic electronics industry?

In February 1989, Jeremy Burroughes, at that time a postdoc in the research group of Richard Friend and Donal Bradley at Cambridge, noticed that a diode structure he’d made from the semiconducting polymer PPV glowed when a current was passed through it. This wasn’t the first time that interesting optoelectronic properties had been observed in an organic semiconductor, but it’s fair to say that it was the resulting Nature paper, which has now been cited more than 8000 times, that really launched the field of organic electronics. The company that they founded to exploit this discovery, Cambridge Display Technology, was floated on the NASDAQ in 2004 at a valuation of $230 million. Now organic electronics is becoming mainstream; a popular mobile phone, the Samsung Galaxy S, has an organic light emitting diode screen, and further mass market products are expected in the next few years. But these products will be made in factories in Japan, Korea and Taiwan; Cambridge Display Technology is now a wholly owned subsidiary of the Japanese chemical company Sumitomo. How is it that, despite an apparently insurmountable academic lead in the field and a successful history of university spin-outs, the UK is likely to end up at best a peripheral player in this new industry?

Do materials even have genomes?

I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big data driven projects like the human genome project.

There’s no question that many big problems could be addressed by new materials.

Responsible innovation – some lessons from nanotechnology

A few weeks ago I gave a lecture at the University of Nottingham to a mixed audience of nanoscientists and science and technology studies scholars with the title “Responsible innovation – some lessons from nanotechnology”. The lecture was recorded, and the audio can be downloaded, together with the slides, from the Nottingham STS website.

Some of the material I talked about is covered in my chapter in the recent book Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies. A preprint of the chapter can be downloaded here: “What has nanotechnology taught us about contemporary technoscience?”

Geek power?

Mark Henderson’s book “The Geek Manifesto” was part of my holiday reading, and there’s a lot to like in it – there’s all too much stupidity in public life, and anything that skewers a few of the more egregious recent examples of this in such a well-written and well-informed way must be welcomed. There is a fundamental lack of seriousness in our public discourse, a lack of respect for evidence, a lack of critical thinking. But to set against many excellent points of detail, the book is built around one big idea, and it’s that idea that I’m less keen on. This is the argument – implicit in the title – that we should try to construct some kind of identity politics based around those of us who self-identify as being interested in and informed about science – the “geeks”. I’m not sure that this is possible, but even if it were, I think it would be bad for science and bad for politics. This isn’t to say that public life wouldn’t be better if more people with a scientific outlook had a higher profile. One very unwelcome feature of public debate is the prevalence of wishful thinking. Comfortable beliefs that fit into people’s broader world-views do need critical examination, and this often needs the insights of science, particularly the discipline that comes from seeing whether the numbers add up. But science isn’t the only source of the insights needed for critical thinking, and scientists can have some surprising blind-spots, not just about the political, social and economic realities of life, but also about technical issues outside their own fields of interest.

But first, who are these geeks who Henderson thinks should organise?

The UK’s thirty year experiment in innovation policy

In 1981 the UK was one of the world’s most research and development intensive economies, with large scale R&D efforts being carried out in government and corporate laboratories in many sectors. Over the thirty years between then and now, this situation has dramatically changed. A graph of the R&D intensity of the national economy, measured as the fraction of GDP spent on research and development, shows a long decline through the 1980s and 1990s, with some levelling off from 2000 or so. During this period the R&D intensity of other advanced economies, like Japan, Germany, the USA and France, has increased, while in fast developing countries like South Korea and China the growth in R&D intensity has been dramatic. The changes in the UK were in part driven by deliberate government policy, and in part have been the side-effects of the particular model of capitalism that the UK has adopted. Thirty years on, we should be asking what the effects of this have been on our wider economy, and what we should do about it.

Gross expenditure on research and development as a % of GDP, 1981 to 2010, for various countries. Data from Eurostat.

The second graph breaks down where R&D takes place. The largest fractional fall has been in research in government establishments, which has dropped by more than 60%. The largest part of this fall took place in the early part of the period, under a series of Conservative governments. This reflects a general drive towards a smaller state, a run-down of defence research, and the privatisation of major, previously research intensive sectors such as energy. However, it is clear that privatisation didn’t lead to a transfer of the associated R&D to the business sector. It is in the business sector that the largest absolute drop in R&D intensity has taken place – from 1.48% of GDP to 1.08%. Cutting government R&D didn’t lead to increases in private sector R&D, contrary to the expectations of free marketeers who think the state “crowds out” private spending. Instead the business climate of the time, with a drive to unlock “shareholder value” in the short term, squeezed out longer term investments in R&D. Some seek to explain this drop in R&D intensity in terms of a change in the sectoral balance of the UK economy, away from manufacturing and towards financial services, and this is clearly part of the picture. However, I wonder whether this should be thought of not so much as an explanation but as a symptom. I’ve discussed in an earlier post the suggestion that “bad capitalism” – for example, speculation in financial and property markets, with the downside risk being shouldered by the tax-payer – squeezes out genuine innovation.

UK R&D as a % of GDP by sector of performance, 1981 to 2010. Data from Eurostat.

The Labour government that came to power in 1997 did worry about the declining R&D intensity of the UK economy, and, in its Science Investment Framework 2004-2014 (PDF), set about trying to reverse the trend. This long-term policy set a target of reaching an overall R&D intensity of 2.5% of GDP by 2014, with R&D intensity in the business sector rising to 1.7%. The mechanisms put in place to achieve this included a period of real-terms increases in R&D spending by government, some tax incentives for business R&D, and a new agency for nearer term research in collaboration with business, the Technology Strategy Board. In the event, the increases in government spending on R&D did lead to some increase in the UK’s overall research intensity, but the hoped-for increase in business R&D simply did not happen.

This isn’t predominantly a story about academic science, but it provides a context that’s important to appreciate for some current issues in science policy. Over the last thirty years, the research intensity of the UK’s university sector has increased, from 0.32% of GDP to 0.48% of GDP. This reflects, to some extent, real-terms increases in government science budgets, together with the growing success of universities in raising research funds from non UK-government sources. The resulting R&D intensity of the UK HE sector is at the high end of international comparisons (the corresponding figures for Germany, Japan, Korea and the USA are 0.45%, 0.4%, 0.37% and 0.36%). But where the UK is very much an outlier is in the proportion of the country’s research that takes place in universities. This proportion now stands at 26%, which is much higher than in international competitors (again, we can compare with Germany, Japan, Korea and the USA, where the proportions are 17%, 12%, 11% and 13%), and much higher now than it has been historically (in 1981 it was 14%). So one way of interpreting the pressure on universities to demonstrate the “impact” of their research, which is such a prominent part of the discourse in UK science policy at the moment, is as a symptom of the disproportionate importance of university research in the overall national R&D picture. But the high proportion of UK R&D carried out in universities is as much a measure of the weakness of the government and corporate applied and strategic research sectors as of the strength of its HE research enterprise. The worry, of course, has to be that, given the hollowed-out state of the business and government R&D sectors, where in the past the more applied research needed to convert ideas into new products and services was done, universities won’t be able to meet the expectations being placed on them.
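As a rough arithmetic check on these proportions (a back-of-envelope sketch in Python; the total GERD intensity of about 1.8% of GDP is my assumption, read off the first graph, not a figure quoted above):

```python
# Back-of-envelope check: what share of UK R&D is performed in universities?
he_intensity = 0.48      # UK higher education R&D, % of GDP (quoted above)
total_intensity = 1.8    # total UK R&D, % of GDP (my assumption, from the first graph)

he_share = he_intensity / total_intensity
print(f"HE share of national R&D: {he_share:.0%}")   # ~27%, consistent with the 26% quoted
```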

To return to the big picture, I’ve seen surprisingly little discussion of the effects on the UK economy of this dramatic and sustained decrease in research intensity. Aside from the obvious fact that we’re four years into an economic slump with no apparent prospect of rapid recovery, we know that the UK’s productivity growth has been unimpressive, and the lack of new, high tech companies that grow fast to a large scale is frequently commented on – where, people ask, is the UK’s Google? We also know that there are urgent unmet needs that only new innovation can fulfil – in healthcare and clean energy, for example. Surely now is the time to examine the outcomes of the UK’s thirty year experiment in innovation policy.

Finally, I think it’s worth looking at these statistics again, because they contradict the stories we tell about ourselves as a country. We think of our post-war history as characterised by brilliant invention let down by poor exploitation, whereas the truth is that the UK, in the thirty post-war years, had a substantial and successful applied research and development enterprise. We imagine now that we can make our way in the world as a “knowledge economy”, based on innovation and brain-power. I know that innovation isn’t always the same as research and development, but it seems odd that we should think that innovation can be the speciality of a nation which is substantially less intensive in research and development than its competitors. We should worry instead that we’re in danger of condemning ourselves to being a low innovation, low productivity, low growth economy.

When technologies can’t evolve

In what way, and on what basis, should we attempt to steer the development of technology? This is the fundamental question that underlies at least two discussions that I keep coming back to here – how to do industrial policy and how to democratise science. But some would simply deny the premise of these discussions, and argue that technology can’t be steered, and that the market is the only effective way of incorporating public preferences into decisions about technology development. This is a hugely influential point of view which goes with the grain of the currently hegemonic neo-liberal, free market dominated world-view. It originates in the arguments of Friedrich Hayek against the 1940s vogue for scientific planning, it incorporates Michael Polanyi’s vision of an “independent republic of science”, and it fits the view of technology as an autonomous agent which unfolds with a logic akin to that of Darwinian evolution – what one might call the “Wired” view of the world, eloquently expressed in Kevin Kelly’s recent book “What Technology Wants”. It’s a coherent, even seductive, package of beliefs; although I think it’s fatally flawed, it deserves serious examination.

Hayek’s argument against planning (his 1945 article The Use of Knowledge in Society makes the case very clearly) rests on two insights. Firstly, he insists that the relevant knowledge that would underpin the rational planning of an economy or a society isn’t limited to scientific knowledge, and must include the tacit, unorganised knowledge of people who aren’t experts in the conventional sense of the word. This kind of knowledge, then, can’t rest solely with experts, but must be dispersed throughout society. Secondly, he claims that the most effective – perhaps the only – way in which this distributed knowledge can be aggregated and used is through the mechanism of the market. If we apply this kind of thinking to the development of technology, we’re led to the idea that technological development would happen in the most effective way if we simply allow many creative entrepreneurs to try out different ways of combining different technologies, and to develop new ones on the basis of existing scientific knowledge and what developments of that knowledge they are able to make. When the resulting innovations are presented to the market, the ones that survive will, by definition, be the ones that best meet human needs. Stated this way, the connection with Darwinian evolution is obvious.

One objection to this viewpoint is essentially moral in character. The market certainly aggregates the preferences and knowledge of many people, but it necessarily gives more weight to the views of people with more money, and the distribution of money doesn’t necessarily coincide with the distribution of wisdom or virtue. Some free market enthusiasts simply assert the contrary, following Ayn Rand. There are, though, some much less risible moral arguments in favour of free markets which emphasise the positive virtues of pluralism, and even those opponents of libertarianism who point to the naivety of believing that this pluralism can be maintained in the face of highly concentrated economic and political power need to answer important questions about how pluralism can be maintained in any alternative system.

What should be less contentious than these moral arguments is an examination of the recent history of technological innovation. This shows that the technologies that made the modern world – in all their positive and negative aspects – are largely the result of the exercise of state power, rather than of the free enterprise of technological entrepreneurs. New technologies were largely driven by large scale interventions by the warfare states that dominated the twentieth century. The military-industrial complexes of these states began long before Eisenhower popularised the name, and existed not just in the USA, but in Wilhelmine and Nazi Germany, in the USSR, and in the UK (David Edgerton’s “Warfare State: Britain 1920-1970” gives a compelling reinterpretation of modern British history in these terms). At the beginning of the century, for example, the Haber-Bosch process for fixing nitrogen was rapidly industrialised by the German chemical company BASF. It’s difficult to think of a more world-changing innovation – more than half the world’s population wouldn’t now be here if it hadn’t been for the huge growth in agricultural productivity that artificial fertilisers made possible. However, the importance of this process for producing the raw materials for explosives ensured that the German state took much more than a spectator’s role. Vaclav Smil, in his book Enriching the Earth, quotes an estimate for the development cost of the Haber-Bosch process of US$100 million at 1919 prices (roughly US$1 billion in current money, equating to about $19 billion in terms of its share of the economy at the time), of which about half came from the government. Many more recent examples of state involvement in innovation are cited in Mariana Mazzucato’s pamphlet The Entrepreneurial State. Perhaps one of the most important stories is the role of state spending in creating the modern IT industry; computing, the semiconductor industry and the internet are all largely the outcome of US military spending.
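Those two restatements of the US$100 million answer different questions: a price deflator asks what the same sum of consumer purchasing power is worth today, while scaling by GDP asks how big an equivalent slice of today’s economy would be. A sketch of the arithmetic, with round-number deflators of my own choosing (not Smil’s figures) picked to reproduce the quoted values:

```python
# Two ways of restating a 1919 dollar figure in present-day terms.
cost_1919 = 100e6   # Haber-Bosch development cost, 1919 dollars (Smil's estimate)

# Assumption: consumer prices roughly ten times higher now than in 1919.
cpi_ratio = 10
print(f"price-deflated: ${cost_1919 * cpi_ratio / 1e9:.0f} billion")             # ~$1 billion

# Assumption: US GDP ~$80bn in 1919 against ~$15,000bn now (round numbers).
gdp_ratio = 15_000 / 80
print(f"same share of the economy: ${cost_1919 * gdp_ratio / 1e9:.0f} billion")  # ~$19 billion
```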

Of course, the historical fact that the transformative, general purpose technologies that were so important in driving economic growth in the twentieth century emerged as a result of state sponsorship doesn’t by itself invalidate the Hayekian thesis that innovation is best left to the free market. To understand the limitations of this picture, we need to return to Hayek’s basic arguments. Under what circumstances does the free market fail to aggregate information in an optimal way? People are not always rational economic actors – they know what they want and need now, but they aren’t always good at anticipating what they might want if things they can’t imagine become available, or what they might need if conditions change rapidly. There’s a natural cognitive bias to give more weight to the present, and less to an unknowable future. Just like natural selection, the optimisation process that the market carries out is necessarily local, not global.

So when does the Hayekian argument for leaving innovation to the market not apply? The free market works well for evolutionary innovation – local optimisation is good at solving present problems with the tools at hand now. But it fails to mobilise resources on a large scale for big problems whose solution will take more than a few years. So we’d expect market-driven innovation to fail to deliver whenever the timescales for development are too long, or the expense of development too great. Because capital markets are now short-term to the point of irrationality (as demonstrated by this study (PDF) from the Bank of England by Andrew Haldane), the private sector rejects long term investments in infrastructure and R&D, even if the net present value of those investments would be significantly positive. In the energy sector, for example, we saw widespread liberalisation of markets across the world in the 1990s. One predictable consequence of this has been a collapse of private sector R&D in the energy sector (illustrated for the case of the USA by Dan Kammen here – The Incredible Shrinking Energy R&D Budget (PDF)).
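Haldane’s point can be made concrete with a toy net-present-value calculation: discount the same stylised R&D project once at a reasonable cost of capital, and once with the kind of excess discounting his study documents. The cash flows and rates below are illustrative assumptions of mine, not figures from the study:

```python
# Toy NPV: a long-horizon R&D project under rational vs. excessive discounting.
def npv(cashflows, rate):
    """Net present value, where cashflows[t] is the cash flow arriving in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative project: spend 10 a year for 5 years, then earn 12 a year for 15 years.
project = [-10] * 5 + [12] * 15

print(f"at 8%:  NPV = {npv(project, 0.08):+.1f}")   # comfortably positive: invest
print(f"at 16%: NPV = {npv(project, 0.16):+.1f}")   # excess discounting: rejected
```

The project is identical in both lines; only the discount rate applied to its distant payoffs differs, which is exactly the mechanism by which short-termist capital markets screen out long-gestation R&D.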

The contrast is clear if we compare two different cases of innovation – the development of new apps for the iPhone, and the development of innovative new passenger aircraft, like the composite-based Boeing Dreamliner and Airbus A350. The world of app development is one in which tens or hundreds of thousands of people can and do try out all sorts of ideas, a few of which have turned out to fulfil an important and widely appreciated need and have made their developers rich. This is a world that’s well described by the Hayekian picture of experimentation and evolution – the low barriers to entry and the ease of widespread distribution of the products reward experimentation. Making a new airliner, in contrast, involves years of development and outlays of tens of billions of dollars in development cost before any products are sold. Unsurprisingly, the only players are two huge companies – essentially a world duopoly – each of whom is in receipt of substantial state aid of one form or another. The lesson is that technological innovation doesn’t just come in one form. Some innovation – with low barriers to entry, often building on existing technological platforms – can be done by individuals or small companies, and can be understood well in terms of the Hayekian picture. But innovation on a larger scale, the more radical innovation that leads to new general purpose technologies, needs either a large company with a protected income stream or outright state action. In the past, the companies able to carry out innovation on this scale would typically have been a state sponsored “national champion”, supported perhaps by guaranteed defence contracts, or the beneficiary of a monopoly or cartel, such as the postwar Bell Labs.

If the prevalence of this Hayekian thinking about technological innovation really does mean that we’re less able now to introduce major, world-changing innovations than we were 50 years ago, this would matter a great deal. One way of thinking about this is in evolutionary terms – if technological innovation can only proceed incrementally, we’re less able to adapt to sudden shocks, less able to anticipate the future, and at risk of being locked into technological trajectories that we can’t alter later in response to unexpected changes in our environment or unanticipated consequences. I’ve written earlier about the suggestion that, far from seeing universal accelerating change, we’re currently seeing innovation stagnation. The risk is that we’re seeing less in the way of really radical innovation now, at a time when pressing issues like climate change, peak cheap oil and demographic transitions make innovation more necessary than ever. We are seeing a great deal of very rapid innovation in the world of information, but the rapid pace of change in that one realm has obscured much less rapid progress in the material and biological realms. It’s in these realms that slow timescales and the large scale of the effort needed mean that the market seems unable to deliver the innovation we need.

It’s not going to be possible, nor would it be desirable, for us to return to the political economies of the mid-twentieth century warfare states that delivered the new technologies that underlie our current economies. Whatever other benefits the turn to free markets may have delivered, it seems to have been less effective at providing radical innovation, and with the need for those radical innovations becoming more urgent, some rethinking is now urgently required.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1-2 billion being discussed if they decide to take the company public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumables cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of their program to develop a whole family of different pores able to discriminate between different types of molecules.
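The readout principle – the stretch of bases sitting in the pore modulating the ionic current by a characteristic, recognisable amount – can be caricatured in a few lines of Python. This is a toy, not Oxford Nanopore’s algorithm: the three-base sensing window, the current levels and the noise are all invented for the sketch:

```python
import random

random.seed(1)
BASES = "ACGT"
# Invented lookup table: one nominal current level (pA) for each 3-base combination.
levels = {a + b + c: 50.0 + 2.0 * i
          for i, (a, b, c) in enumerate((a, b, c) for a in BASES
                                        for b in BASES for c in BASES)}

def measure(dna, noise=0.4):
    """Noisy current trace as successive 3-mers occupy the pore."""
    return [random.gauss(levels[dna[i:i + 3]], noise) for i in range(len(dna) - 2)]

def call_bases(trace):
    """Pick the 3-mer whose nominal level is closest to each reading, then
    stitch the overlapping 3-mers back together into a sequence."""
    kmers = [min(levels, key=lambda k: abs(levels[k] - reading)) for reading in trace]
    return kmers[0] + "".join(k[-1] for k in kmers[1:])

dna = "".join(random.choice(BASES) for _ in range(30))
print(dna)
print(call_bases(measure(dna)))   # recovers the sequence while noise << level spacing
```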

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology.) Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying which genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

Where the randomness comes from

For perhaps 200 years it was possible to believe that physics gave a picture of the world with no place for randomness. Newton’s laws prescribe a picture of nature that is completely deterministic – at any time, the future is completely specified by the present. For anyone attached to the idea that they have some control over their destiny, that the choices they make have any influence on what happens to them, this seems problematic. Yet the idea of strict physical determinism, the idea that free will is an illusion in a world in which the future is completely predestined by the laws of physics, remains strangely persistent, despite the fact that it isn’t (I believe) supported by our current scientific understanding.

The mechanistic picture of a deterministic universe received a blow with the advent of quantum mechanics, which seems to introduce an element of randomness to the picture – in the act of “measurement”, the state function of a quantum system discontinuously changes according to a law which is probabilistic rather than deterministic. And when we look at the nanoscale world, at least at the level of phenomenology, randomness is ever-present, summed up in the phenomenon of Brownian motion, and leading inescapably to the second law of thermodynamics. And, of course, if we are talking about human decisions (should we go outside in the rain, or have another cup of tea?) the physical events in the brain that initiate the process of us opening the door or putting the kettle on again are strongly subject to this randomness; those physical events, molecules diffusing across synapses, receptor molecules changing shape in response to interactions with signalling molecules, shock waves of potential running up membranes as voltage-gated pores in the membrane open and close, all take place in that warm, wet, nanoscale domain in which Brownian motion dominates and the dynamics is described by Langevin equations, complete with their built-in fluctuating forces. Is this randomness real, or just an appearance? Where does it come from?
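A Langevin equation is just Newton’s second law with a frictional drag and a fluctuating force added; in the overdamped limit relevant to the warm, wet nanoscale world the inertial term drops out entirely, and at each timestep the fluctuating force displaces a particle by √(2DΔt) times a unit Gaussian random number. A minimal sketch for a free Brownian particle in one dimension (the parameter values are arbitrary):

```python
import random

random.seed(0)
D, dt, steps = 1.0, 1e-3, 100_000   # diffusion coefficient, timestep, number of steps

# Overdamped Langevin dynamics: a random walk driven by the fluctuating force.
x = 0.0
for _ in range(steps):
    x += (2 * D * dt) ** 0.5 * random.gauss(0.0, 1.0)

# For Brownian motion <x^2> = 2*D*t, so after t = 100 we expect |x| of order 14.
print(f"displacement after t = {steps * dt:g}: {x:+.1f}")
```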

I suspect the answer to this question, although well-understood, is not necessarily widely appreciated. It is real randomness – not just the appearance of randomness that follows from the application of deterministic laws in circumstances too complex to model – and its ultimate origin is indeed in the indeterminism of quantum mechanics. To understand how the randomness of the quantum realm gets transmitted into the Brownian world, we need to remember first that the laws of classical, Newtonian physics are deterministic, but only just. If we imagine a set of particles interacting with each other through well-known forces, defined through potentials of the kind you might use in a molecular dynamics simulation, the way in which the system evolves in time is in principle completely determined, but in practice any small perturbation to the deterministic laws (such as a rounding error in a computer simulation) will have an effect which grows with time to widen the range of possible outcomes that the system will explore, a widening that macroscopically we’d interpret as an increase in the entropy of the system.
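That growth of a rounding-scale perturbation is easy to demonstrate: run the same deterministic dynamics twice, with initial conditions differing in the fifteenth decimal place, and watch the trajectories separate. The sketch below uses the logistic map as a stand-in for a chaotic many-particle system – chosen purely for brevity; the exponential amplification of tiny differences is the generic point:

```python
# Two runs of the same deterministic map, initial conditions differing by one
# part in 10^15 -- comparable to a floating-point rounding error.
def step(x, r=4.0):
    return r * x * (1.0 - x)   # logistic map in its chaotic regime

a, b = 0.3, 0.3 + 1e-15
for n in range(1, 61):
    a, b = step(a), step(b)
    if n % 10 == 0:
        print(f"step {n:2d}: difference = {abs(a - b):.2e}")
# The difference roughly doubles each step, so after ~50 steps the two
# 'identical' systems bear no relation to each other.
```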

To understand where, physically, this perturbation might come from, we have to ask where the forces between molecules originate as they interact and bounce off each other. One ubiquitous force in the nanoscale world is known to chemists as the van der Waals force. In elementary physics and chemistry, this is explained as a force that arises between two neutral objects when a randomly arising dipole in one object induces an opposite dipole in the other object, and the two dipoles then attract each other. Another, perhaps deeper, way of thinking about this force is due to the physicists Casimir and Lifshitz, who showed that it arises from the way objects modify the quantum fluctuations that are always present in the vacuum – the photons that come in and out of existence even in the emptiest of empty spaces. This way of thinking about the van der Waals force makes clear that, because the force arises from the quantum fluctuations of the vacuum, the force must itself be fluctuating – it has an intrinsic randomness that is sufficient to explain the randomness we observe in the nanoscale world.
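For reference, the standard textbook results behind this argument (quoted for orientation, not derived here): the van der Waals interaction between two neutral molecules at separation r falls off as the sixth power of the distance, while the Casimir pressure between two parallel, perfectly conducting plates a distance d apart is set only by ħ, c and the geometry:

```latex
U_{\mathrm{vdW}}(r) = -\frac{C_6}{r^6}
\qquad\qquad
\frac{F_{\mathrm{Casimir}}}{A} = -\frac{\pi^2 \hbar c}{240\, d^4}
```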

So, to return to the question of whether free will is compatible with physical determinism, we can now see that this is not an interesting question, because the rules that govern the operation of the brain are fundamentally not deterministic. The question of how free will might emerge from a non-deterministic, stochastic system isn’t a trivial one either, of course, but at least it starts from the right premise – we can say categorically that strict physical determinism, as applied to the operation of the brain, is false. The brain is not a deterministic system, but one in whose operation randomness is central and inescapable.

One might go on to ask why some people are so keen to hold on to the idea of strict physical determinism, more than a hundred years after the discoveries of quantum mechanics and statistical mechanics that make determinism untenable. This is too big a question for me to even attempt to answer here, but maybe it’s worth pointing out that there seems to be quite a lot of determinism around – in addition to physical determinism, genetic determinism and technological determinism seem to be attractive to many people at the moment. Of course, the rise of the Newtonian mechanistic world-view occurred at a time when a discussion about the relationship between free will and a theological kind of determinism was very current in Christian Europe, and I’m tempted to wonder whether the appeal of these modern determinisms might be part of the lingering legacy of Augustine of Hippo and Calvin to the modern age.

Slouching towards an industrial policy

The UK’s Science Minister, David Willetts, gave a speech last week on “Our High Tech Future”. The headlines about it were dominated by one somewhat odd policy announcement, which I’ll come to later, but what’s more interesting is the fact that he chose (apparently at quite short notice) to give the speech at all, only weeks after the publication of a strategy for “Innovation and Research for Growth” that was widely regarded as, at best, a retrospective attempt to give coherence to a series of rather random acts of policy. I’m tempted to interpret the speech as a signal that a not completely formed government policy is still evolving in some quite interesting directions. In short, after 32 years, the Conservatives are rediscovering the need for industrial policy.