What hope against dementia?

An essay review of Kathleen Taylor’s book “The Fragile Brain: the strange, hopeful science of dementia”, published by OUP.

I am 56 years old; at current levels of life expectancy, the average UK male of that age can expect to live to 82. This, to me, seems good news. What’s less good, though, is that if I do reach that age, there’s about a 10% chance that I will be suffering from dementia, if the current prevalence of that disease persists. If I were a woman, at my age I could expect to live to nearly 85; the three extra years come at a cost, though. At 85, the chance of a woman suffering from dementia is around 20%, according to the data in the Alzheimer’s Society’s Dementia UK report. Of course, for many people of my age, dementia isn’t a focus for their own future anxieties; it’s a pressing everyday reality as they look after parents or elderly relatives who are among the 850,000 people in the UK currently suffering from dementia. I give thanks that I have been spared this myself, but it doesn’t take much imagination to see how distressing this devastating and incurable condition must be, both for the sufferers and for their relatives and carers. Dementia is surely one of the most pressing issues of our time, so Kathleen Taylor’s impressive overview of the subject is timely and welcome.

There is currently no cure for the most common forms of dementia – such as Alzheimer’s disease – and in some ways the prospect of a cure seems further away now than it did a decade ago. The number of drugs demonstrated to cure or slow down Alzheimer’s disease remains at zero, despite billions of dollars having been spent on research and drug trials, and it’s arguable that we understand less about the fundamental causes of these diseases than we thought we did a decade ago. If the prevalence of dementia remains unchanged, by 2051 the number of dementia sufferers in the UK will have increased to 2 million.

This increase is the dark side of the otherwise positive story of improving longevity, because the prevalence of dementia increases roughly exponentially with age. To return to my own prospects as a 56 year old male living in the UK, one can make another estimate of my remaining lifespan, adding the assumption that the recent increases in longevity continue. On the high longevity estimates of the Office for National Statistics, an average 56 year old man could expect to live to 88 – but at that age, there would be a 15% chance of suffering from dementia. For women, the prediction is even better for longevity – and worse for dementia – with a life expectancy of 91, but a 20% chance of dementia (at any given age, the prevalence of dementia is significantly higher for women than for men, in addition to women’s systematically higher life expectancy). To look even further into the future, a girl turning 16 today can expect to live to more than 100 in this high longevity scenario – but that brings her chances of suffering dementia towards 50/50.
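The force of these figures comes from compounding. A toy model makes the point; the 2% starting value and six-year doubling time below are illustrative assumptions of my own, not figures from the Dementia UK report:

```python
def prevalence(age, p65=0.02, doubling_years=6.0):
    """Toy model: dementia prevalence p65 at age 65, roughly
    doubling every `doubling_years` thereafter. The parameters
    are illustrative, not fitted to survey data."""
    return p65 * 2 ** ((age - 65) / doubling_years)

for age in (65, 71, 77, 83, 89):
    print(f"age {age}: {prevalence(age):.1%}")
# age 65: 2.0% ... age 83: 16.0%, age 89: 32.0%
```

The compounding is the whole story: each extra six years of life expectancy roughly doubles the risk, which is why modest gains in longevity move the dementia projections so sharply.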

What hope is there for changing this situation, and finding a cure for these diseases? Dementias are neurodegenerative diseases; as they take hold, nerve cells become dysfunctional and then die off completely. They have different effects, depending on which part of the brain and nervous system is primarily affected. The most common is Alzheimer’s disease, which accounts for more than half of dementias in the over-65s; it begins by affecting memory, then progresses to a more general loss of cognitive ability. In Alzheimer’s, it is the parts of the brain cortex that deal with memory that atrophy, while in frontotemporal dementia it is the frontal lobe and/or the temporal cortex that are affected, resulting in personality changes and loss of language. In motor neurone diseases (of which the most common is ALS, amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease), it is the nerves in the brainstem and spinal cord controlling the voluntary movement of muscles that are affected, leading to paralysis, breathing difficulties, and loss of speech. The mechanisms underlying the different dementias and other neurodegenerative diseases differ in detail, but they have features in common, and the demarcations between them aren’t always well defined.

It’s not easy to get a grip on the science that underlies dementia – it encompasses genetics, cell and structural biology, immunology, epidemiology, and neuroscience in all its dimensions. Taylor’s book gives an outstanding and up-to-date overview of all these aspects. It’s clearly written, but it doesn’t shy away from the complexity of the subject, which makes it not always easy going. The book concentrates on Alzheimer’s disease, taking that story from the eponymous doctor who first identified the disease in 1901.

Dr Alois Alzheimer identified the brain pathology characteristic of Alzheimer’s disease – including the characteristic “amyloid plaques”. These consist of strongly associated, highly insoluble aggregates of protein molecules; subsequent work has identified both the protein involved and the structure it forms. The structure of amyloids – in which protein chains are bound together in sheets by strong hydrogen bonds – can be found in many different proteins (I discussed this a while ago on this blog, in Death, Life and Amyloids), and when these structures occur in biological systems they are usually associated with disease states. In Alzheimer’s, the particular protein involved is called Aβ; this is a fragment of a larger protein of unknown function called APP (for amyloid precursor protein). Genetic studies have shown that mutations involving the genes coding for APP, and for the enzymes that snip Aβ off the end of APP, lead to more production of Aβ and more amyloid formation, and are associated with increased susceptibility to Alzheimer’s disease. The story seems straightforward, then – more Aβ leads to more amyloid, and the resulting build-up of insoluble crud in the brain leads to Alzheimer’s disease. This is the “amyloid hypothesis”, in its simplest form.

But things are not quite so simple. Although the genetic evidence linking Aβ to Alzheimer’s is strong, there are doubts about the mechanism. It turns out that the link between the presence of the amyloid plaques themselves and the disease symptoms isn’t as strong as one might expect, so attention has turned to the possibility that the neurotoxic agents are in fact the precursors of the full amyloid structure: smaller units, or oligomers, in which a handful of Aβ molecules come together. Yet the mechanism by which these oligomers might damage nerve cells remains uncertain.

Nonetheless, the amyloid hypothesis has driven a huge amount of scientific effort, and it has motivated the development of a number of potential drugs, which aim to interfere in various ways with the processes by which Aβ is formed. These drugs have, so far without exception, failed to work. Between 2002 and 2012 there were 413 trials of drugs for Alzheimer’s; the failure rate was 99.6%. The single successful new drug – memantine – is a cognitive enhancer which can relieve some symptoms of Alzheimer’s, without modifying the cause of the disease. This represents a colossal investment of money – to be measured at least in the tens of billions of dollars – for no return so far.

In November last year, Eli Lilly announced that its anti-Alzheimer’s antibody, solanezumab, which was designed to bind to Aβ, had failed to show a significant effect in phase 3 trials. After the failure this February of another phase 3 trial, of Merck’s beta-secretase inhibitor verubecestat, designed to suppress the production of Aβ, the medicinal chemist and long-time commentator on the pharmaceutical industry Derek Lowe wrote: “Beta-secretase inhibitors have failed in the clinic. Gamma-secretase inhibitors have failed in the clinic. Anti-amyloid antibodies have failed in the clinic. Everything has failed in the clinic. You can make excuses and find reasons – wrong patients, wrong compound, wrong pharmacokinetics, wrong dose, but after a while, you wonder if perhaps there might not be something a bit off with our understanding of the disease.”

What is perhaps even more worrying is that the supply of drug candidates currently going through the earlier stages of the process – phase 1 and phase 2 trials – looks like it is starting to dry up. A 2016 review of the Alzheimer’s drug pipeline concludes that there are simply not enough drugs in phase 1 trials to give hope that new treatments will come through in sufficient numbers to survive the massive attrition rate we’ve seen in Alzheimer’s drug candidates (for a drug to reach the market by 2025, it would need to be in phase 1 trials now). One has to worry that we’re running out of ideas.

One way we can get a handle on the disease is to step back from the molecular mechanisms, and look again at the epidemiology. It’s clear that there are some well-defined risk factors for Alzheimer’s, which point towards some of the other things that might be going on, and suggest practical steps by which we can reduce the risks of dementia. One of these risk factors is type 2 diabetes, which, according to data quoted by Taylor, increases the risk of dementia by 47%. Another is the presence of heart and vascular disease. The exact mechanisms at work here are uncertain, but on general principles these risk factors are not surprising. The human brain is a colossally energy-intensive organ, and anything that compromises the delivery of glucose and oxygen to its cells will place them under stress.

One other risk factor that Taylor does not discuss much is air pollution. There is growing evidence (summarised, for example, in a recent article in Science magazine) that poor air quality – especially the sub-micron particles produced in the exhausts of diesel engines – is implicated in Alzheimer’s disease. It’s been known for a while that environmental nanoparticles such as the ultra-fine particulates formed in combustion can lead to oxidative stress, inflammation and thus cardiovascular disease (I wrote about this here more than ten years ago – Ken Donaldson on nanoparticle toxicology). The relationship between pollution and cardiovascular disease would by itself indicate an indirect link to dementia, but there is in addition the possibility of a more direct link, if, as seems possible, some of these ultra-fine particles can enter the brain directly.

There’s a fairly clear prescription, then, for individuals who wish to lower their risk of suffering from dementia in later life. They should eat well, keep their bodies and minds well exercised, and as much as possible breathe clean air. Since these are all beneficial for health in many other ways, it’s advice that’s worth taking, even if the links with dementia turn out to be less robust than they seem now.

But I think we should be cautious about putting the emphasis entirely on individuals taking responsibility for these actions to improve their own lifestyles. Public health measures and sensible regulation have a huge role to play, and are likely to be very cost-effective ways of reducing what would otherwise be a very expensive burden of disease. It’s not easy to eat well, especially if you’re poor; the food industry needs to take more responsibility for the products it sells. And urban pollution can be controlled by the kind of regulation that leads to innovation – I’m increasingly convinced that the driving force for accelerating the uptake of electric vehicles is going to be pressure from cities like London and Paris, Los Angeles and Beijing, as the health and economic costs of poor air quality become harder and harder to ignore.

Public health interventions and lifestyle improvements do hold out the hope of lowering the projected numbers of dementia sufferers from that figure of 2 million by 2051. But, for those who are diagnosed with dementia, we have to hope for the discovery of a breakthrough in treatment, a drug that does successfully slow or stop the progression of the disease. What needs to be done to bring that breakthrough closer?

Firstly, we should stop overstating the progress we’re making now, and stop hyping “breakthroughs” that really are nothing of the sort. The UK’s newspapers seem to be particularly guilty of doing this. Take, for example, this report from the Daily Telegraph, headlined “Breakthrough as scientists create first drug to halt Alzheimer’s disease”. Contrast that with the reporting in the New York Times of the very same result – “Alzheimer’s Drug LMTX Falters in Final Stage of Trials”. Newspapers shouldn’t be in the business of peddling false hope.

Another type of misguided optimism comes from Silicon Valley’s conviction that all that is required to conquer death is a robust engineering “can-do” attitude. “Aubrey de Grey likes to compare the body to a car: a mechanic can fix an engine without necessarily understanding the physics of combustion”, a recent article on Silicon Valley’s quest to live for ever comments about the founder of the Valley’s SENS Foundation (the acronym is for Strategies for Engineered Negligible Senescence). Removing intercellular junk – amyloids – is point 6 in the SENS Foundation’s 7 point plan for eternal life.

But the lesson of several hundred failed drug trials is that we do need to understand the science of dementia more before we can be confident of treating it. “More research is needed” is about the lamest and most predictable thing a scientist can ever say, but in this case it is all too true. Where should our priorities lie?

It seems to me that hubristic mega-projects to simulate the human brain aren’t going to help at all here – they consider the brain at too high a level of abstraction to help disentangle the complex combination of molecular events that is disabling and killing nerve cells. We need to take into account the full complexity of the biological environments that nerve cells live in, surrounded and supported by glial cells like astrocytes, whose importance may have been underrated in the past. The new genomic approaches have already yielded powerful insights, and techniques for imaging the brain in living patients – magnetic resonance imaging and positron emission tomography – are improving all the time. We should certainly sustain the hope that new science will unlock new treatments for these terrible diseases, but we need to do the hard and expensive work to develop that science.

In my own university, the Sheffield Institute for Translational Neuroscience focuses on motor neurone disease/ALS and other neurodegenerative diseases, under the leadership of an outstanding clinician scientist, Professor Dame Pam Shaw. The University, together with Sheffield’s hospital, is currently raising money for a combined MRI/PET scanner to support this and other medical research work. I’m taking part in one fundraising event in a couple of months with many other university staff – attempting to walk 50 miles in less than 24 hours. You can support me in this through this JustGiving page.

Some books I read this year

Nick Lane – The Vital Question: energy, evolution and the origins of complex life

This is as good as everyone says it is – well written and compelling. I particularly appreciated the focus on energy flows as the driver for life, and the way the book gives the remarkable chemiosmotic hypothesis the prominence it deserves. The hypothesis Lane presents for the way life might have originated on earth is concrete and (to me) plausible, and, what’s more important, it suggests some experimental tests.

Todd Feinberg and Jon Mallatt – The Ancient Origins of Consciousness: how the brain created experience

How many organisms can be said to be conscious, and when did consciousness emerge? Feinberg and Mallatt’s answers are bold: all vertebrates are conscious, and in all probability so are cephalopods and some arthropods. In their view, consciousness evolved in the Cambrian explosion, associated with an arms race between predators and prey, and driven by the need to integrate different forms of long-distance sensory perception to produce a model of an organism’s environment. Even if you don’t accept the conclusion, you’ll learn a great deal about the evolution of nervous systems and the way sense perceptions are organised in many different kinds of organisms.

David MacKay – Information theory, inference, and learning algorithms

This is a textbook, so not exactly easy reading, but it’s a particularly rich and individual one.

Physical limits and diminishing returns of innovation

Are ideas getting harder to find? This question is asked in a preprint of that title by the economists Bloom, Jones, Van Reenen and Webb, who attempt to quantify decreasing research productivity, showing for a number of fields that it currently takes more researchers to achieve the same rate of progress. The paper is discussed in blogposts by Diane Coyle, who notes sceptically that the same thing was being said in 1983, and by Alex Tabarrok, who is more depressed.

Given the slowdown in productivity growth in the developed nations, which has steadily fallen from about 2.9% a year in 1970 to about 1.2% a year now, the notion is certainly plausible. But the attempt in the paper to quantify the decline is, I think, so crude as to be pretty much worthless – except inasmuch as it demonstrates how much growth economists need to understand the nature of technological innovation at a higher level of detail and particularity than is reflected in their current model-building efforts.

The first example is the familiar one of Moore’s law in semiconductors, where over many decades we have seen exponential growth in the number of transistors on an integrated circuit. The authors argue that to achieve this, the total number of researchers has increased by a factor of 25 or so since 1970 (this estimate is obtained by dividing the R&D expenditure of the major semiconductor companies by an average researcher wage). This is very broadly consistent with a corollary of Moore’s law (sometimes called Rock’s law), which states that the capital cost of new generations of semiconductor fabs also grows exponentially, with a four-year doubling time; this cost is now in excess of $10 billion. A large part of this is actually the capitalised cost of the R&D that goes into developing the new tools and plant for each generation of ICs.
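Rock’s law is the same compounding arithmetic applied to capital cost. A quick sketch; the $1bn-in-2003 anchor is a round illustrative assumption, chosen only so that a four-year doubling time lands in the right ballpark today:

```python
def fab_cost(year, base_cost=1e9, base_year=2003, doubling_years=4.0):
    """Rock's law sketch: fab capital cost doubles every `doubling_years`.
    The $1bn anchor in 2003 is an illustrative assumption."""
    return base_cost * 2 ** ((year - base_year) / doubling_years)

print(f"2017 fab cost: ${fab_cost(2017) / 1e9:.0f}bn")  # → $11bn
```

Fourteen years of four-year doublings multiplies the cost by about eleven; another decade on the same curve would push it well past $50bn, which is one way of seeing why the economics must eventually bite.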

This increasing expense simply reflects the increasing difficulty of creating intricate, accurate and reproducible structures on ever-decreasing length scales. The problem isn’t that ideas are harder to find; it’s that as these length scales approach the atomic, many more problems arise, which need more effort to solve. It’s the fundamental difficulty of the physics which leads to diminishing returns, and at some point a combination of the physical barriers and the economics will first slow and then stop further progress in miniaturising electronics using this technology.

For the second example, it isn’t so much physical barriers as biological ones that lead to diminishing returns, but the effect is the same. The green revolution – a period of big increases in the yields of key crops like wheat and maize – was driven by creating varieties able to use large amounts of artificial fertiliser and focus much more of their growing energies into the useful parts of the plant. Modern wheat, for example, has very short stems – but there’s a limit to how short you can make them, and that limit has probably been reached now. So R&D efforts are likely to be focused in other areas than pure yield increases – in disease resistance and tolerance of poorer growing conditions (the latter likely to be more important as climate changes, of course).

For their third example, the economists focus on medical progress. I’ve written before about the difficulties of the pharmaceutical industry, which has its own exponential law of progress. Unfortunately this goes the wrong way, with the cost of developing new drugs increasing exponentially with time. The authors focus on cancer, and try to quantify declining returns by correlating research effort, as measured by papers published, with improvements in the five year cancer survival rate.

Again, I think the basic notion of diminishing returns is plausible, but this attempt to quantify it makes no sense at all. One obvious problem is that there are very long and variable lag times between when research is done, through the time it takes to test drugs and get them approved, to when they are in wide clinical use. To give one example, the ovarian cancer drug Lynparza was approved in December 2014, so it is conceivable that its effects might start to show up in 5 year survival rates some time after 2020. But the research it was based on was published in 2005. So the hope that there is any kind of simple “production function” that links an “input” of researchers’ time with an “output” of improved health (or faster computers, or increased productivity) is a non-starter*.

The heart of the paper is the argument that an increasing number of researchers is producing fewer “ideas”. But what can they mean by “ideas”? As we all know, there are good ideas and bad ideas, profound ideas and trivial ideas, ideas that really do change the world, and ideas that make no difference to anyone. The “representative idea” assumed by the economists really isn’t helpful here, and rather than clarifying their concept in the first place, they redefine it to fit their equation, stating, with some circularity, that “ideas are proportional improvements in productivity”.

Most importantly, the value of an idea depends on the wider technological context in which it is developed. People claim that Leonardo da Vinci invented the helicopter, but even if he’d drawn an accurate blueprint of a Chinook, it would have had no value without all the supporting scientific understanding and technological innovations that were needed to make building a helicopter a practical proposition.

Clearly, at any given time there will be many ideas. Most of these will be unfruitful, but every now and again a combination of ideas will come together with a pre-existing technical infrastructure and a market demand to make a viable technology. For example, integrated circuits emerged in the 1960s, when developments in materials science and manufacturing technology (especially photolithography and the planar process) made it possible to realise monolithic electronic circuits. Driven by customers with deep pockets and demanding requirements – the US defense industry – many refinements and innovations led to the first microprocessor in 1971.

Given a working technology and a strong market demand to create better versions of that technology, we can expect a period of incremental improvement, often very rapid. A constant rate of fractional improvement leads, of course, to exponential growth in quality, and that’s what we’ve seen over many decades for integrated circuits, giving us Moore’s law. The regularity of this improvement shouldn’t make us think it is automatic, though – it represents many brilliant innovations. Here, though, these innovations are coordinated and orchestrated so that in combination the overall rate of innovation is maintained. In a sense, the rate of innovation is set by the market, and the resources devoted to innovation increased to maintain that rate.

But exponential growth can never be sustained in a physical (or biological) system – some limit must always be reached. From about 1750 to 1850, the efficiency of steam engines increased exponentially, but despite many further technical improvements, this rate of progress slowed down in the second half of the 19th century – the second law of thermodynamics, through Carnot’s law, puts a fundamental upper limit on efficiency and as that limit is approached, diminishing returns set in. Likewise, the atomic scale of matter puts fundamental limits on how far the CMOS technology of our current integrated circuits can be pushed to smaller and smaller dimensions, and as those limits are approached, we expect to see the same sort of diminishing returns.
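The steam engine case can be made concrete with Carnot’s formula, which caps the efficiency of any heat engine at 1 − Tc/Th (temperatures in kelvin). The boiler temperatures below are round illustrative values against a fixed ~300 K sink:

```python
def carnot_limit(t_cold_k, t_hot_k):
    """Maximum possible efficiency of a heat engine (Carnot's theorem)."""
    return 1.0 - t_cold_k / t_hot_k

# Diminishing returns: each further 100 K boost in boiler temperature
# buys a smaller efficiency gain than the one before.
for t_hot in (400, 500, 600, 700):
    print(f"Th = {t_hot} K: limit {carnot_limit(300, t_hot):.1%}")
# 25.0%, 40.0%, 50.0%, 57.1%
```

The successive gains shrink from 15 points to 10 to 7, and no amount of engineering ingenuity can push past the limit itself; the same logic, with different physics, applies to shrinking transistors towards atomic dimensions.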

Economic growth didn’t come to an end in 1850 when the exponential rise in steam engine efficiencies started to level out, though. Entirely new technologies were developed – electricity, the chemical industry, the internal combustion engine powered motor car – which went through the same cycle of incremental improvement and eventual saturation.

The question we should be asking now is not whether the technologies that have driven economic growth in recent years have reached the point of diminishing returns – if they have, that is entirely natural and to be expected. It is whether enough entirely new technologies are now entering infancy, from which they can take off with the sustained incremental growth that’s driven the economy in previous technology waves. Perhaps solar energy is in that state now; quantum computing perhaps hasn’t got there yet, as it isn’t clear how the basic idea can be implemented and whether there is a market to drive it.

What we do know is that growth is slowing, and has been doing so for some years. To this extent, this paper highlights a real problem. But a correct diagnosis of the ailment and design of useful policy prescriptions is going to demand a much more realistic understanding of how innovation works.

* if one insists on trying to build a model, the “production function” would need to be, not a simple function, but a functional, integrating functions representing different types of research and development effort over long periods of time.
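In code, such a functional amounts to a distributed-lag convolution: the measured improvement in any year integrates research effort from many years back, weighted by how long results take to land. The kernel below, centred roughly ten years out, is a made-up illustration loosely motivated by the Lynparza example:

```python
def improvements(research_by_year, lag_kernel):
    """Distributed-lag 'production functional': output in year t is the
    sum of past research inputs weighted by the lag kernel, where
    lag_kernel[k] is the weight on research done k years earlier."""
    n = len(research_by_year)
    out = [0.0] * n
    for t in range(n):
        for k, weight in enumerate(lag_kernel):
            if t - k >= 0:
                out[t] += weight * research_by_year[t - k]
    return out

# A one-off burst of research in year 0 shows up as improvements
# spread over years 8-12, not immediately.
kernel = [0.0] * 8 + [0.1, 0.2, 0.4, 0.2, 0.1]
print(improvements([1.0] + [0.0] * 14, kernel))
```

Correlating same-year inputs and outputs, as a simple production function implicitly does, would find nothing here, even though every unit of research eventually pays off in full.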

What has science policy ever done for Barnsley?

Cambridge’s Centre for Science and Policy, where I am currently a visiting fellow, held a roundtable discussion yesterday on the challenges for science policy posed by today’s politics post-Brexit, post-Trump, introduced by Harvard’s Sheila Jasanoff and myself. This is an expanded and revised version of my opening remarks.

I’m currently commuting between Sheffield and Cambridge, so the contrast between the two cities is particularly obvious to me at the moment. Cambridgeshire is one of the few regions of the UK that is richer than the average, with a GVA per head of £27,203 (the skewness of the UK’s regional income distribution, arising from London’s extraordinary dominance, leads to the statistical oddness that most of the country is poorer than the average). Sheffield, on the other hand, is one of the less prosperous provincial cities, with a GVA per head of £19,958. But Sheffield doesn’t do so badly compared with some of the smaller towns and cities in its hinterland – Barnsley, Rotherham and Doncaster, whose GVA per head, at £15,707, isn’t much more than half of Cambridge’s prosperity.

This disparity in wealth is reflected in the politics. In the EU Referendum, Cambridge voted overwhelmingly – 74% – for Remain, while Barnsley, Rotherham and Doncaster voted almost as overwhelmingly – 68 or 69% – to Leave. The same story could be told of many other places in the country – Dudley, in the West Midlands, Teesside, in the Northeast, Blackburn, in the Northwest. This is not just a northern phenomenon, as shown by the example of Medway, in the Southeast. These are all places with poorly performing local economies, which have failed to recover from 1980s deindustrialisation. They have poor levels of educational attainment, low participation in higher education, poor social mobility, low investment, low rates of business start-ups and growth – and they all voted overwhelmingly to leave the EU.

Somehow, all those earnest and passionate statements by eminent scientists and academics about the importance for science of remaining in the EU cut no ice in Barnsley. And why should they? We heard about the importance of EU funding for science, of the need to attract the best international scientists, of how proud we should be of the excellence of UK science. If Leave voters in Barnsley thought about science at all, they might be forgiven for thinking that science was to be regarded as an ornament to a prosperous society, when that prosperity was something from which they themselves were excluded.

Of course, there is another argument for science, which stresses its role in promoting economic growth. That is exemplified, of course, here in Cambridge, where it is easy to make the case that the city’s current obvious prosperity is strongly connected with its vibrant science-based economy. This is underpinned by substantial public sector research spending, which is then more than matched by a high level of private sector innovation and R&D, both from large firms and fast growing start-ups supported by a vibrant venture capital sector.

The figures for regional R&D bear this out. East Anglia has a total R&D expenditure of €1,388 per capita – it’s a highly R&D intensive economy. This is underpinned by the €472 per capita that’s spent in universities, government and non-profit laboratories, but is dominated by the €914 per capita spent in the private sector, directly creating wealth and economic growth. This is what a science-based knowledge economy looks like.

South Yorkshire looks very different. The total level of R&D is less than a fifth of the figure for East Anglia, at €244 per capita; and this is dominated by higher education, which carries out R&D worth €156 per capita. Business R&D is less than 10% of the figure for East Anglia, at €80 per capita. This is an economy in which R&D plays very little role outside the university sector.

An interesting third contrast is Inner London, which is almost as R&D intensive overall as East Anglia, with a total R&D expenditure of €1,130 per capita. But here the figure is dominated not by the private sector, which does €323 per capita R&D, but by higher education and government, at €815 per capita. A visitor to London from Barnsley, getting off the train at St Pancras and marvelling at the architecture of the new Crick Institute, might well wonder whether this was indeed science as an ornament to a prosperous society.

To be fair, governments have begun to recognise these issues of regional disparities. I’d date the beginning of this line of thinking back to the immediate period after the financial crisis, when Peter Mandelson returned from Brussels to take charge of the new super-ministry of Business, Innovation and Skills. Newly enthused about the importance of industrial strategy, summarised in the 2009 document “New Industry, New Jobs”, he launched the notion that the economy needed to be “rebalanced”, both sectorally and regionally.

We’ve heard a lot about “rebalancing” since. At the aggregate level there has not been much success but, to be fair, the remarkable resurgence of the automobile industry perhaps does owe something to the measures introduced by Mandelson’s BIS and Innovate UK, and continued by the Coalition, to support innovation, skills and supply chain development in this sector.

One area in which there was a definite discontinuity in policy on the arrival of the Coalition government in 2010 was the abrupt abolition of the Regional Development Agencies. They were replaced by “Local Enterprise Partnerships”, rather loosely structured confederations of local government representatives and private sector actors (including universities), with private sector chairs. One good point about LEPs was that they tended to be centred on City Regions, which make more sense as economic entities than the larger regions of the RDAs, though this did introduce some political complexity. Their bad points were that they had very few resources at their disposal, they had little analytical capacity, and their lack of political legitimacy made it difficult for them to set any real priorities.

Towards the end of the Coalition government, the idea of “place” made an unexpected and more explicit appearance in the science policy arena. A new science strategy appeared in December 2014 – “Our Plan for Growth: Science and Innovation” – which listed “place” as one of five underpinning principles (the others being “Excellence, Agility, Collaboration, and Openness”).

What was meant by “place” here was, like much else in this strategy, conceptually muddled. On the one hand, it seemed to be celebrating the clustering effect, by which so much science was concentrated in places like Cambridge and London. On the other hand, it seemed to be calling for science investment to be more explicitly linked with regional economic development.

It is this second sense that has subsequently been developed by the new, all-Conservative government. The Science Minister, Jo Johnson, announced in a speech in Sheffield the notion of “One Nation Science” – the idea that science should be the route to redressing the big differences in productivity between the regions of the UK.

The key instrument for this “place agenda” was to be the “Science and Innovation Audits” – assessments of the areas of strength in science and innovation in the regions, and suggestions for where opportunities might exist to use and build on these to drive economic growth.

I have been closely involved in the preparation of the Science and Innovation Audit for Sheffield City Region and Lancashire, which was recently published by the government. I don’t want to go into detail about the Science and Innovation Audit process or its outcomes here – instead I want to pose the general question about what science policy can do for “left behind” regions like Barnsley or Blackburn.

It seems obvious to me that “trophy science” – science as an ornament for a prosperous society – will be no help. And while the model of Cambridge – a dynamic, science based economy, with private sector innovation, venture capital, and generous public funding for research attracting global talent – would be wonderful to emulate, that’s not going to happen. It arose in Cambridge from the convergence of many factors over many years, and there are not many places in the world where one can realistically expect this to happen again.

Instead, the focus needs to be much more on the translational research facilities that will attract inward investment from companies operating at the technology frontier, on mechanisms to diffuse the use of new technology quickly into existing businesses, on technical skills at all levels, not just the highest. The government must have a role, not just in supporting those research facilities and skills initiatives, but also in driving the demand for innovation, as the customer for the new technologies that will be needed to meet its strategic goals (for a concrete proposal of how this might work, see Stian Westlake’s blogpost “If not a DARPA, then what? The Advanced Systems Agency” ).

The question “What have you lot ever done for Barnsley?” is one that I was directly asked, by Sir Steve Houghton, leader of Barnsley Council, just over a year ago, at the signing ceremony for the Sheffield City Region Devo Deal. I thought it was a good question, and I went to see him later with a considered answer. We have, in the Advanced Manufacturing Research Centre, a great translational engineering research facility that demonstrably attracts investment to the region and boosts the productivity of local firms. We have more than 400 apprentices in our training centre, most sponsored by local firms, not only getting a first-class training in practical engineering (some delivered in collaboration with Barnsley College), but also with the prospect of a tailored path to higher education and beyond. We do schools outreach and public engagement, and we work with Barnsley Hospital to develop new medical technologies that directly benefit his constituents. I’m sure he still thinks we can do more, but he shouldn’t think we don’t care.

The referendum was an object lesson in how little the strongly held views of scientists (and other members of the elite) influenced the voters in many parts of the country. For them, the interventions in the referendum campaign by leading scientists had about as much traction as the journal Nature’s endorsement of Hillary Clinton did across the Atlantic. I don’t think science policy has done anything like enough to answer the question, what have you lot done for Barnsley … or Merthyr Tydfil, or Dudley, or Medway, or any of the many other parts of the country that don’t share the prosperity of Cambridge, or Oxford, or London. That needs to change now.

The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, which matched colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as the title for a project which forms part of Sheffield University’s Festival of the Mind; for it, six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort, from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he retains one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
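Maxwell’s three-receptor picture is easy to sketch numerically. The snippet below uses three hypothetical Gaussian sensitivity curves – the peak wavelengths and widths are illustrative choices, not the measured human cone fundamentals – to show how a single green wavelength and a red-plus-blue mixture excite the three receptor types in quite different patterns. The magenta pattern (outer two receptors excited, middle one quiet) cannot be produced by any single wavelength, which is why magenta is missing from the rainbow.

```python
import numpy as np

# Three HYPOTHETICAL Gaussian receptor sensitivities; peaks and widths
# are illustrative, not the real human cone fundamentals.
wavelengths = np.linspace(380.0, 700.0, 321)  # visible range, nm

def receptor(peak_nm, width_nm=40.0):
    """Gaussian sensitivity curve for one receptor type."""
    return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width_nm ** 2))

# Stack the three curves: loosely "blue", "green", "red" receptors.
S = np.vstack([receptor(440.0), receptor(540.0), receptor(570.0)])

def response(spectrum):
    """The trio of receptor excitations produced by a light spectrum."""
    return S @ spectrum

# A narrow-band green light...
green = np.exp(-((wavelengths - 540.0) ** 2) / (2 * 5.0 ** 2))
# ...versus a mixture of narrow-band red and blue light.
magenta = (np.exp(-((wavelengths - 640.0) ** 2) / (2 * 5.0 ** 2))
           + np.exp(-((wavelengths - 440.0) ** 2) / (2 * 5.0 ** 2)))

print(response(green))    # middle receptor most strongly excited
print(response(magenta))  # outer two excited, middle one least
```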

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures that in beetle scales and bird feathers produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects that are produced by some structures that combine order and randomness in rather a striking way; our hope is to be able to understand the mechanism by which these structures form and then reproduce them in synthetic systems.

What we’ve come to realise, as we speculate about the origins of these biological mechanisms, is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour – and their perception is likely to be quite alien to ours. Birds, for example, have not three different types of colour receptors, as humans do, but four. This means not just that birds can detect light outside the human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our x-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

How big should the UK manufacturing sector be?

Last Friday I made a visit to HM Treasury, for a round table with the Productivity and Growth Team. My presentation (PDF of the slides here: The UK’s productivity problem – the role of innovation and R&D) covered, very quickly, the ground of my two SPERI papers, The UK’s innovation deficit and how to repair it, and Innovation, research and the UK’s productivity crisis.

The plot that prompted the most comment was this one, from a recent post, showing the contributions of different sectors to the UK’s productivity growth over the medium term. It’s tempting, on a superficial glance at this plot, to interpret it as saying the UK’s productivity problem is a simple consequence of its manufacturing and ICT sectors having been allowed to shrink too far. I think this conclusion is actually broadly correct; I suspect that the UK economy has suffered from a case of “Dutch disease”, in which more productive sectors producing tradable goods have been squeezed out by the resource boom of North Sea oil and a financial services bubble. But I recognise that this conclusion does not follow quite as straightforwardly as one might at first think from this plot alone.


Multifactor productivity growth in selected UK sectors and subsectors since 1972. Data: EU KLEMS database, rebased to 1972=1.

The plot shows multi-factor productivity (aka total factor productivity) for various sectors and subsectors in the UK. Increases in total factor productivity are, in effect, that part of the increase in output that’s not accounted for by extra inputs of labour and capital; this is taken by economists to represent a measure of innovation, in some very general sense.
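The accounting behind a total factor productivity number can be sketched in a few lines. This is the standard Solow-residual calculation; all the figures below are invented for illustration, not taken from the EU KLEMS database.

```python
# Solow residual: TFP growth is the part of output growth not accounted
# for by growth in labour and capital inputs, weighted by factor shares.
# All numbers are invented for illustration.
alpha = 0.35           # capital's share of income (a commonly assumed value)
output_growth = 0.025  # 2.5% annual growth in real value added
capital_growth = 0.030
labour_growth = 0.010

tfp_growth = output_growth - alpha * capital_growth - (1 - alpha) * labour_growth
print(f"TFP growth: {tfp_growth:.2%}")
```

Here output grows by 2.5% a year, but weighted input growth accounts for 1.7 points of that, leaving a residual of 0.8% a year – the “innovation” that the plot tracks, compounded over four decades.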

The central message is clear. In the medium run, over a 40 year period, the manufacturing sector has seen a consistent increase in total factor productivity, while in the service sectors total factor productivity increases have been at best small, and in some cases negative. The case of financial services, which form such a dominant part of the UK economy, is particularly interesting. Although the immediate years leading up to the financial crisis (2001-2008) showed a strong improvement in total factor productivity, which has since fallen back somewhat, over the whole period, since 1972, there has been no net growth in total factor productivity in financial services at all.

We can’t, however, simply conclude from these numbers that manufacturing has been the only driver of overall total factor productivity growth in the UK economy. Firstly, these broad sector classifications conceal a distribution of differently performing sub-sectors. Over this period the two leading sub-sectors are chemicals and telecommunications (the latter a sub-sector of information and communication).

Secondly, there have been significant shifts in the composition of the economy over this period, with the manufacturing sector shrinking in favour of services. My plot only shows rates of productivity growth, and not absolute levels; the overall productivity of the economy could improve if there is a shift from manufacturing to higher value services, even if productivity in those sectors subsequently grows less fast. Thus a shift from manufacturing to financial services could lead to an initial rise in overall productivity followed eventually by slower growth.
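This composition effect is easy to see in a toy two-sector model: shift employment from a fast-improving manufacturing sector into a service sector with a higher productivity *level* but slower growth, and aggregate productivity rises at first, then its growth rate sags. All the numbers below are invented for illustration.

```python
# Toy two-sector composition effect: reallocation towards a higher-level,
# slower-growing sector first lifts aggregate productivity, then slows
# its growth. All numbers are invented for illustration.
prod_mfg, prod_fin = 100.0, 150.0  # output per hour; finance starts higher
g_mfg, g_fin = 0.03, 0.005         # but manufacturing productivity grows faster
share_mfg0 = 0.5                   # manufacturing's initial employment share

aggregate = []
for t in range(20):
    share = max(0.2, share_mfg0 - 0.02 * t)  # employment drifts into finance
    aggregate.append(share * prod_mfg + (1 - share) * prod_fin)
    prod_mfg *= 1 + g_mfg
    prod_fin *= 1 + g_fin

early_growth = aggregate[1] / aggregate[0] - 1
late_growth = aggregate[-1] / aggregate[-2] - 1
print(f"early: {early_growth:.2%}, late: {late_growth:.2%}")
```

In this sketch, aggregate productivity growth is markedly faster in the early years (when the level shift dominates) than at the end (when the slow-growing sector carries most of the weight) – the pattern suggested above for a shift from manufacturing to financial services.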

Moreover, within each sector and subsector there’s a wide dispersion of productivity performances, not just at sub-sector level, but at the level of individual firms. One interpretation of the rise in manufacturing productivity in the early 1980s is that this reflects the disappearance of many lower-performing firms during that period’s rapid de-industrialisation. On the other hand, a recent OECD report (The Future of Productivity, PDF) highlights what seems to be a global phenomenon since the financial crisis, in which a growing gap has opened up between the highest performing firms, in which productivity has continued to grow, and a long tail of less well-performing firms whose productivity has stagnated.

I don’t think there’s any reason to believe that the UK manufacturing sector, though small, is particularly innovative or high performing as a whole. Some relatively old data from Hughes and Mina (PDF) shows that the overall R&D intensity of the UK’s manufacturing sector – expressed as the ratio of manufacturing R&D to manufacturing gross value added – was lower than that of competitor nations, and moving in the wrong direction.

This isn’t to say, of course, that there aren’t outstandingly innovative UK manufacturing operations. There clearly are; the issue is whether there are enough of them relative to the overall scale of the UK economy, and whether their innovations and practices are diffusing fast enough to the long tail of manufacturing operations that are further from the technological frontier.

Steel and the dematerialisation (or not) of the world economy

The UK was the country in which mass production of steel began, so the current difficulties of the UK’s steel industry are highly politically charged. For many, it is unthinkable that a country with pretensions to be an economic power could lose its capacity to mass produce steel. To others, though, the steel industry is the epitome of the old heavy industry that has been superseded by the new, weightless economy of services, now supercharged by new digital technologies; we should not mourn its inevitable passing. So, is steel irrelevant, in our new, dematerialised economy? Here are two graphs which, on the face of it, seem to tell contradictory stories about the importance, or otherwise, of steel in modern economies.

The “steel intensity” of the economy of the USA – the amount of steel required to produce unit real GDP output (expressed as thousands of 2009 US dollars).

The first graph shows, for the example of the USA, the steel intensity of the economy, defined as the amount of steel required to produce unit GDP output.

An international perspective on the productivity slowdown

Robert Gordon’s book “The Rise and Fall of American Growth” comprehensively describes the fall in productivity growth in the USA from its mid-twentieth century highs, as I discussed in my last post. Given the book’s exclusive focus on the USA, it’s interesting to set this in a more international context by looking at the data for other developed countries.

My first graph shows labour productivity – defined as GDP per hour worked – for the G7 group of developed nations since 1970. This data, from the OECD, has been converted into constant US dollars at purchasing power parity; one should be aware that these currency conversions are not completely straightforward. Nonetheless, the picture is very clear. On this semi-logarithmic plot, a constant annual growth rate would produce a straight line. Instead, what we see is a systematic slow-down in the growth rate as we go from 1970 to the present day. I have fitted the data to a logistic function, which is a good representation of growth that starts out exponentially and then saturates. In 1970, labour productivity in the G7 nations was growing at around 2.9% annually, but by the present day this had dropped to an annual growth rate of 1.2%.


Labour productivity across the G7 group of nations – GDP per hour worked, currencies converted at purchasing power parity and expressed as constant 2010 US$. The fit (solid line) is a logistic function, corresponding to an annual growth rate of 2.9% in 1970, dropping to 1.2% in 2014. OECD data.
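The fitting procedure behind those two growth rates can be sketched as follows. The data here are synthetic, generated to mimic the saturating shape described above – they are not the published OECD series, so the fitted parameters are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic level curve: exponential at first, saturating towards K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# SYNTHETIC productivity series mimicking the shape of the G7 data,
# not the actual OECD numbers.
years = np.arange(1970, 2015)
true_level = logistic(years, K=70.0, r=0.045, t0=1985.0)
rng = np.random.default_rng(0)
data = true_level * (1 + rng.normal(0.0, 0.005, size=years.size))

popt, _ = curve_fit(logistic, years, data, p0=(80.0, 0.05, 1990.0))
K, r, t0 = popt

# For a logistic, the annual growth rate d(ln L)/dt = r / (1 + exp(r (t - t0)))
# falls smoothly over time as the curve saturates.
for t in (1970, 2014):
    g = r / (1.0 + np.exp(r * (t - t0)))
    print(f"{t}: {g:.2%} annual growth")
```

The declining growth rate is built into the functional form: early on the exponential term is small and growth is close to r, while late in the period the denominator grows and growth tails away – which is what the fit to the real G7 data expresses as a fall from 2.9% to 1.2%.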

The second graph shows the evolution of labour productivity in a few developed countries, expressed as a fraction of this G7 average.


Labour productivity relative to the G7 average. OECD data

Both at the beginning of the period, in 1970, and at the present day, the USA is the world’s productivity leader, the nation at the technology frontier. But the intervening period saw a long relative decline through the 1970s and ’80s, and a less dramatic recovery. The mirror image of this performance is shown by France and Germany, whose labour productivity performances have marched in step. France and Germany’s relative improvement in productivity performance took them ahead of the USA on this measure in the early 1990’s, but they have slipped back slightly in the last decade.

The UK, however, has been a persistent productivity laggard. Its low point was reached in 1975, when its productivity fell to 17% below the G7 average. After a bumpy performance in the 1980s, there was a slow improvement in the ’90s and ’00s, but much of this ground was lost in the financial crisis of 2008, leaving UK productivity around 13% below the G7 average, and 24% below the world’s productivity leader, the USA.

It is Italy, however, that has had the most dramatic evolution, beginning the period showing the same improvement as France and Germany, but then enduring a long decline, to end up with a productivity performance as poor as the UK’s.

Nobody knows anything (oil price edition)

Perhaps no single number is more important to the world economy than the price of oil. Modern economies depend on energy, and oil remains our largest energy source, supplying 31% of the world’s energy needs (another 21% comes from gas, whose price now moves quite closely with oil). And yet, huge movements in this number seemingly take experts by complete surprise.

The price of oil in constant 2008 dollars, compared with the US Energy Information Administration’s predictions from 2000 and 2010. Data from the EIA.

My graph shows how the price of oil, corrected for inflation, has changed in the last 45 years. This is an updated version of the plot I blogged about five years ago; I included the set of predictions that the US Energy Information Administration had made in 2000. Just a few years later, these predictions were made nugatory by a large, unanticipated rise in oil prices. The predictions the EIA made ten years later, in 2010, had learnt one lesson – they included a much bigger spread between the high and low contingencies, amounting to more than a factor of three by the end of the decade. Now, only halfway into the period of the prediction, we see that the way oil prices turned out has so far managed both to exceed the high prediction and to undershoot the low one.

These gyrations mean that views that were conventional wisdom just a couple of years ago have to be rethought.

England’s early energy transition to fossil fuels: driven by process heat, not steam engines

Was the industrial revolution an energy revolution, in which the energy constraints of a traditional economy based on the power of the sun were broken by the discovery and exploitation of fossil fuel? Or was it an ideological revolution, in which the power of free thinking and free markets unlocked human ingenuity to power a growth in prosperity without limits? Those symbols of the industrial revolution – the steam engine, the coke-fuelled blast furnace – suggest the former, but the trend now amongst some economic historians is to downplay the role of coal and steam. What I think is correct is that the industrial revolution had already gathered much momentum before the steam engine made a significant impact. But coal was central to driving that early momentum; its use was already growing rapidly, but the dominant use of that coal was as a source of heat energy in a whole variety of industrial processes, not as a source of mechanical power. The foundations of the industrial revolution were laid in the diversity and productivity of those industries propelled by coal-fuelled process heat: the steam engine was the last thing that coal did for the industrial revolution, not the first.

What’s apparent, and perhaps surprising, from a plot of the relative contributions of coal and firewood to England’s energy economy, is how early in history the transition from biomass to fossil fuels took place. Using estimates quoted by Wrigley (a compelling advocate of the energy revolution position), we see that coal use in England grew roughly exponentially (with an annual growth rate of around 1.7%) between 1560 and 1800. The crossover between firewood and coal happened in the early seventeenth century, a date which is by world standards very early – for the world as a whole, Smil estimates this crossover only happened in the late 19th century.


Estimated consumption of coal and biomass fuels in England and Wales; data from Wrigley – Energy and the English Industrial Revolution.
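The arithmetic of that crossover is worth a back-of-envelope check: coal use growing at roughly 1.7% a year against a roughly constant firewood supply. The 1560 levels below are invented round numbers chosen only to reproduce the qualitative picture, not Wrigley’s estimates.

```python
import math

# Back-of-envelope crossover model: coal growing at ~1.7%/year against a
# roughly constant firewood supply. Starting levels are ILLUSTRATIVE round
# numbers, not Wrigley's estimates.
coal_1560 = 20.0   # arbitrary energy units in 1560
firewood = 60.0    # assumed roughly constant
growth = 0.017     # ~1.7% annual growth in coal use

# Coal overtakes firewood when coal_1560 * (1 + growth)**n = firewood:
n = math.log(firewood / coal_1560) / math.log(1 + growth)
print(f"crossover after ~{n:.0f} years, i.e. around {1560 + round(n)}")
```

At 1.7% a year, coal use doubles roughly every 41 years; starting a factor of three below firewood, it catches up in about 65 years – putting the crossover in the early seventeenth century, consistent with the plot.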

So why did coal use become so important so early in England?