Optimism – and realism – about solar energy

10 days ago I was fortunate enough to attend the Winton Symposium in Cambridge (where I’m currently spending some time as a visiting researcher in the Optoelectronics Group at the Cavendish Laboratory). The subject of the symposium was Harvesting the Energy of the Sun, and they had a stellar cast of international speakers addressing different aspects of the subject. This post sums up some of what I learnt from the day about the future potential for solar energy, together with some of my own reflections.

The growth of solar power – and the fall in its cost – over the last decade has been spectacular. The world currently produces about 10 billion standard 5 W silicon solar cells a year, at a cost of about €1.29 each; the unsubsidised cost of solar power in the sunnier parts of the world is heading down towards 5 cents a kWh, and at current capacity and demand levels, we should see 1 TW of solar power capacity in the world by 2030, compared to current estimates that installed capacity will reach about 300 GW at the end of this year (with 70 GW of that added in 2016).

But that’s not enough. The Paris Agreement – ratified so far by major emitters such as the USA, China, India, France and Germany (with the UK promising to ratify by the end of the year – but President-Elect Trump threatening to take the USA out) – commits countries to taking action to keep the average global temperature rise from pre-industrial times to below 2°C. Already the average temperature has risen by one degree or so, and currently the rate of increase is about 0.17°C a decade. The point stressed by Sir David King was that it isn’t enough just to look at the consequences of the central prediction, worrying enough though they might be – one needs to insure against the very real risks of more extreme outcomes. What concerns governments in India and China, for example, is the risk of the successive failure of three rice harvests.

To achieve the Paris targets, the installed solar capacity we’re going to need by 2030 is estimated to be in the range of 8-10 TW nominal; this would require a 22-25% annual growth rate in manufacturing capacity.
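As a sanity check on these figures, here is a quick compound-growth calculation. This is only a sketch: the 8-10 TW endpoints and the ~300 GW starting point are taken from the text, the 14-year horizon (end of 2016 to 2030) is my assumption, and since the rate computed here is for installed capacity rather than manufacturing capacity, it comes out a little above the quoted 22-25%.

```python
# Back-of-the-envelope check: what constant annual growth rate takes
# installed solar capacity from ~300 GW (end of 2016) to 8-10 TW by 2030?

def required_growth_rate(start_gw, target_gw, years):
    """Constant annual growth rate taking start_gw to target_gw in `years` years."""
    return (target_gw / start_gw) ** (1 / years) - 1

low = required_growth_rate(300, 8000, 14)
high = required_growth_rate(300, 10000, 14)
print(f"required annual growth: {low:.1%} to {high:.1%}")
```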

Is nuclear power obsolete?

After a summer hiccough, the new UK government has finally signed the deal with the French nuclear company EDF and its Chinese financial backers to build a new nuclear power station at Hinkley Point. My belief that this is a monumentally bad deal for the UK has not changed since I wrote about it three years ago, here: The UK’s nuclear new build: too expensive, too late.

The way the deal has been structured simultaneously maximises the cost to UK citizens while minimising the benefits that will accrue to UK industry. It’s the fallacy of the private finance initiative exposed by reductio ad absurdum; the government has signed up to a 35-year guarantee of excessively high prices for UK consumers, driven by the political desire to keep borrowing off the government’s balance sheet and maintain the fiction that nuclear power can be efficiently delivered by the private sector.

But there’s another argument against the Hinkley deal that I want to look at more critically – this is the idea that nuclear power is now obsolete, because with new technologies like wind, solar, electric cars and so on, we are, or soon will be, able to supply the 3.2 GW of low-carbon power that Hinkley promises at lower marginal cost. I think this marginal cost argument is profoundly wrong – given the need to make substantial progress decarbonising our energy system over the next thirty years, what’s important isn’t the marginal cost of the next GW of low-carbon power, it’s the total cost (and indeed feasibility) of replacing the 160 GW or so that represents our current fossil fuel based consumption (not to mention replacing the 9.5 GW existing nuclear capacity, fast approaching the end of its working lifetime).

To get a sense of the scale of the task, in 2015 the UK used about 2400 TWh of primary energy inputs. 83% of that was in the form of fossil fuels – roughly 800 TWh each of oil and gas, and a bit less than 300 TWh of coal. The 3.2 GW output of Hinkley would contribute 30 TWh pa at full capacity, while the combined output of all wind (onshore and offshore) and solar generation in 2015 was 48 TWh. So if we increased our solar and wind capacity by a bit more than half, we could replace Hinkley’s contribution; this is probably doable, and given the stupidly expensive nature of the Hinkley deal, we might well be able to do it more cheaply.

But that’s not all we need to do, not by a long way. If we are serious about decarbonising our energy supply (and we should be: for my reasons, please read this earlier post Climate change: what do we know for sure, and what is less certain?) we need to find, not 30 TWh a year, but more like 1500 TWh, of low carbon energy. It’s not one Hinkley Point we need, but 50 of them.
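The arithmetic behind these figures is easy to check. A sketch, assuming the plant runs continuously at full nameplate capacity: 8760 hours a year gives closer to 28 TWh than the round 30 TWh quoted, and hence about 54 Hinkley-sized plants rather than 50, but the order of magnitude is the point.

```python
# Annual energy output from a generating capacity, and how many
# Hinkley-sized plants would be needed for ~1500 TWh of low-carbon energy.
HOURS_PER_YEAR = 8760

def annual_output_twh(capacity_gw, capacity_factor=1.0):
    """Annual electricity output in TWh for a capacity in GW."""
    return capacity_gw * HOURS_PER_YEAR * capacity_factor / 1000

hinkley = annual_output_twh(3.2)
print(f"Hinkley at full capacity: {hinkley:.0f} TWh/year")   # ~28 TWh
print(f"Hinkleys needed for 1500 TWh: {1500 / hinkley:.0f}")
```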

What can’t be stressed too often, in thinking about the UK’s energy supply, is that most of the energy we use (82% in 2015) is not in the form of electricity, but directly burnt oil and gas.

The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, which matched colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as a title for a project which forms part of Sheffield University’s Festival of the Mind; for it, six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort, from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he retains one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
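Maxwell’s insight can be illustrated with a toy calculation. Everything here is invented for illustration – the sensitivity curves are made-up Gaussians, not measured cone responses – but it shows the essential point: the eye reduces a full spectrum to just three receptor responses, and a mix of narrow-band blue and red light stimulates the short- and long-wavelength receptors without much exciting the middle one, giving the “magenta” sensation absent from any single rainbow colour.

```python
import math

def gaussian(x, mu, sigma):
    """A made-up bell-shaped sensitivity curve (not real cone data)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

WAVELENGTHS = range(400, 701, 10)            # nm, the visible range
CONE_PEAKS = {"S": 440, "M": 540, "L": 570}  # rough peak sensitivities, nm

def receptor_response(spectrum):
    """Project a spectrum onto the three receptor sensitivity curves."""
    return tuple(
        sum(spectrum(w) * gaussian(w, peak, 40) for w in WAVELENGTHS)
        for peak in CONE_PEAKS.values()
    )

# Narrow-band blue (450 nm) plus narrow-band red (650 nm):
magenta = receptor_response(lambda w: gaussian(w, 450, 15) + gaussian(w, 650, 15))
print(magenta)  # (S, M, L) responses: S and L dominate, M is weak
```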

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures that in beetle scales and bird feathers produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects that are produced by some structures that combine order and randomness in rather a striking way; our hope is to be able to understand the mechanism by which these structures form and then reproduce them in synthetic systems.

What we’ve come to realise is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour – a perception likely to be quite alien to ours. Birds, for example, have not three different types of colour receptors, as humans do, but four. This means not just that birds can detect light outside the human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our x-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

Your mind will not be uploaded – the shorter version

The idea that it’s going to be possible, in the foreseeable future, to “upload” a human mind to a computer is, I believe, quite wrong. The difficulties are both practical and conceptual, as I explained at length and in technical detail in my earlier post Your mind will not be uploaded.

I’ve now summarised the argument against mind uploading in much shorter and more readable form in a piece for The Conversation – a syndication site for academic writers. I’m pleased to see that the piece – Could we upload a brain to a computer – and should we even try? – has had more than 100,000 readers.

It’s led to another career milestone, one that I’m a little more ambivalent about – my first by-line on the Daily Mail website: Would you upload YOUR brain to a computer? Experts reveal what it would take to live forever digitally. There was also a translation into Spanish in the newspaper El Pais: ¿Podríamos cargar nuestro cerebro en un ordenador?, and into German in the online magazine Netzpiloten: Könnten wir ein Gehirn hochladen – und sollten wir es überhaupt versuchen?

How big should the UK manufacturing sector be?

Last Friday I made a visit to HM Treasury, for a round table with the Productivity and Growth Team. My presentation (PDF of the slides here: The UK’s productivity problem – the role of innovation and R&D) covered, very quickly, the ground of my two SPERI papers, The UK’s innovation deficit and how to repair it, and Innovation, research and the UK’s productivity crisis.

The plot that prompted the most thought-provoking comments was this one, from a recent post, showing the contributions of different sectors to the UK’s productivity growth over the medium term. It’s tempting, on a superficial glance at this plot, to interpret it as saying the UK’s productivity problem is a simple consequence of its manufacturing and ICT sectors having been allowed to shrink too far. I think this conclusion is actually broadly correct; I suspect that the UK economy has suffered from a case of “Dutch disease” in which more productive sectors producing tradable goods have been squeezed out by the resource boom of North Sea oil and a financial services bubble. But I recognise that this conclusion does not follow quite as straightforwardly as one might at first think from this plot alone.


Multifactor productivity growth in selected UK sectors and subsectors since 1972. Data: EU KLEMS database, rebased to 1972=1.

The plot shows multi-factor productivity (aka total factor productivity) for various sectors and subsectors in the UK. Increases in total factor productivity are, in effect, that part of the increase in output that’s not accounted for by extra inputs of labour and capital; this is taken by economists to represent a measure of innovation, in some very general sense.
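The residual calculation economists use here can be sketched as follows. This assumes, purely for illustration, a Cobb-Douglas production function with a labour share of 0.7; the growth numbers are invented, not UK data.

```python
# Total factor productivity (the Solow residual): output growth minus
# the share-weighted growth of labour and capital inputs.

def tfp_growth(output_growth, labour_growth, capital_growth, labour_share=0.7):
    """The part of output growth not accounted for by extra inputs."""
    return (output_growth
            - labour_share * labour_growth
            - (1 - labour_share) * capital_growth)

# e.g. 2.5% output growth from 1% more labour and 3% more capital:
residual = tfp_growth(0.025, 0.01, 0.03)
print(f"TFP growth: {residual:.1%}")  # the part attributed to 'innovation'
```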

The central message is clear. In the medium run, over a 40 year period, the manufacturing sector has seen a consistent increase in total factor productivity, while in the service sectors total factor productivity increases have been at best small, and in some cases negative. The case of financial services, which form such a dominant part of the UK economy, is particularly interesting. The years leading up to the financial crisis (2001-2008) showed a strong improvement in total factor productivity, which has since fallen back somewhat; but over the whole period since 1972 there has been no net growth in total factor productivity in financial services at all.

We can’t, however, simply conclude from these numbers that manufacturing has been the only driver of overall total factor productivity growth in the UK economy. Firstly, these broad sector classifications conceal a distribution of differently performing sub-sectors. Over this period the two leading sub-sectors are chemicals and telecommunications (the latter a sub-sector of information and communication).

Secondly, there have been significant shifts in the composition of the economy over this period, with the manufacturing sector shrinking in favour of services. My plot only shows rates of productivity growth, and not absolute levels; the overall productivity of the economy could improve if there is a shift from manufacturing to higher value services, even if productivity in those sectors subsequently grows less fast. Thus a shift from manufacturing to financial services could lead to an initial rise in overall productivity followed eventually by slower growth.
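This composition effect is easy to illustrate with invented numbers: shifting labour into a sector with a higher productivity level but slower productivity growth raises the economy-wide level at first, then drags on its subsequent growth. All the figures below are made up for illustration.

```python
def economy_productivity(share_a, level_a, level_b):
    """Average productivity with a fraction share_a of labour in sector A."""
    return share_a * level_a + (1 - share_a) * level_b

# Sector A (manufacturing): productivity level 100, growing at 2% a year.
# Sector B (finance): level 150, with no productivity growth.
before    = economy_productivity(0.5, 100, 150)             # 125.0
shifted   = economy_productivity(0.2, 100, 150)             # 140.0 – the shift raises the level
decade_on = economy_productivity(0.2, 100 * 1.02 ** 10, 150)
no_shift  = economy_productivity(0.5, 100 * 1.02 ** 10, 150)
print(f"growth over a decade, with shift: {decade_on / shifted - 1:.1%}")
print(f"growth over a decade, no shift:   {no_shift / before - 1:.1%}")
```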

Moreover, there’s a wide dispersion of productivity performances, not just at sub-sector level, but at the level of individual firms. One interpretation of the rise in manufacturing productivity in the early 1980s is that this reflects the disappearance of many lower-performing firms during that period’s rapid de-industrialisation. On the other hand, a recent OECD report (The Future of Productivity, PDF) highlights what seems to be a global phenomenon since the financial crisis, in which a growing gap has opened up between the highest-performing firms, in which productivity has continued to grow, and a long tail of less well performing firms whose productivity has stagnated.

I don’t think there’s any reason to believe that the UK manufacturing sector, though small, is particularly innovative or high performing as a whole. Some relatively old data from Hughes and Mina (PDF) shows that the overall R&D intensity of the UK’s manufacturing sector – expressed as ratio of manufacturing R&D to manufacturing gross value added – was lower than competitor nations and moving in the wrong direction.

This isn’t to say, of course, that there aren’t outstandingly innovative UK manufacturing operations. There clearly are; the issue is whether there are enough of them relative to the overall scale of the UK economy and whether their innovations and practices are diffusing fast enough to the long tail of manufacturing operations that are further from the technological frontier.

The Utopia of the Machines

What would a society and economy look like if it consisted not of flesh and blood humans, but of emulations of human minds – some occupying robots of all speeds, shapes and sizes, others entirely disembodied, running in simulations of virtual reality in city-size cloud computing facilities? This is the premise of a sustained exercise in futurology by the economist Robin Hanson, in his recently published book “The Age of Em”.

This vision is underpinned by Hanson’s confidence that economic growth is destined to accelerate, driven by technological progress in computer power and nanotechnology, together with his transhumanist conviction that technology will bring about irreversible and far-reaching changes in the human condition.

But his vision, radical though it may seem, is tempered by conservatism in two respects. Unlike many transhumanists and singularitarians, he is deeply sceptical about the possibilities of creating artificial general intelligence. This is interesting, given that Hanson’s technical expertise, before becoming an academic economist, was in the field of AI. Secondly, he is remarkably confident about the applicability of his current understanding of social science to the dramatically changed circumstances of his vision of the future, which implies a degree of constancy of human nature even in the face of dramatic changes in its material circumstances.

While Hanson may be sceptical about the possibility of hand-coded artificial general intelligence, he is not sceptical enough about the idea of mind uploading. I’ve described at length why I think, with some confidence, that it will not be possible any time soon to simulate the operation of a human brain with enough fidelity to constitute a meaningful emulation of the mind (in my e-book Against Transhumanism, v1.0, PDF 650 kB – the most relevant chapter of which appeared on this blog as “Your mind will not be uploaded”). Rather than summarising a long argument I’ve made elsewhere, here I’ll just pick out a few key points.

The first is to stress that the basic unit of computation of the brain is not the neuron, or even the synapse, it is the molecule. This means that Ray Kurzweil style back-of-the-envelope comparisons of the numbers of neurons in brains with the future numbers of transistors in microprocessors, as extrapolated from Moore’s Law, are wrong by multiple orders of magnitude.
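To get a feel for the size of that error, here is a back-of-the-envelope comparison. All the figures are rough order-of-magnitude estimates used only to show scale, and the molecules-per-synapse number in particular is just an assumption for illustration, not precise neuroscience.

```python
import math

NEURONS  = 8.6e10                 # rough estimate: neurons in a human brain
SYNAPSES = 1e15                   # rough estimate: synapses (~10^4 per neuron)
MOLECULES_PER_SYNAPSE = 1e5       # assumed: molecules involved in synaptic state

neuron_level   = NEURONS
molecule_level = SYNAPSES * MOLECULES_PER_SYNAPSE

# How many orders of magnitude a neuron-counting estimate misses
# if the molecule is the real unit of computation:
gap = math.log10(molecule_level / neuron_level)
print(f"extra orders of magnitude: ~{gap:.0f}")
```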

The second concerns the question of the correct level of coarse-graining at which it is sufficient to simulate the brain’s operation. To faithfully simulate the operation of a microprocessor, one doesn’t need to worry about what its individual atoms and electrons are doing, because there is a clean separation of the underlying solid state physics from the operation of the higher level components of the circuits, the transistors. It is this separation of levels that allows us to model the operations of the circuit at a level of digital abstraction, in terms of ones and zeros and the operation of logic gates. This doesn’t happen by accident; it is a product of how we design integrated circuits. The brain, however, is not the product of design, it is the product of evolution, and for this reason we can’t expect there to be such a digital abstraction layer.

A final point that is worth stressing arises from Hanson’s description of his “ems” – mind emulations – as fully formed individual consciousnesses capable of learning and changing. This means that the process of “uploading” a consciousness from a flesh and blood brain to a digital simulation needs to involve more than creating a snapshot of the brain in molecular detail at the moment of “uploading”, difficult enough though that is to envisage. In the operations of the brain there are no firm distinctions between hardware and software – the processes of learning and development involve physical changes at both the molecular and physiological levels. So constructing our emulation would not just need a map of the connectivity of neurons and synapses and details of their molecular configurations at the moment of “upload”; it would need to incorporate a molecularly accurate model of brain development and plasticity, a task on an even greater scale.

The other strong claim of Hanson’s book concerns the predictive power of current social science. His argument is that our understanding of human nature and the operations of human societies – based largely on economics and evolutionary psychology – is now sufficiently robust that, even given the radical changes implied by human minds becoming unshackled from their fleshly bodies, meaningful predictions can be made about the character of the resulting post-human societies. I don’t find this enormously convincing.

One issue is that Hanson is often simply unable to make firm predictions; this is commendably even-handed, but somewhat undermines his broader argument. For example, he asks whether “ems” will be more or less religious than fleshly humans. It depends, it would seem, on how much importance em society attaches to innovation. “So if the innovation effect is important enough, ems will be less religious; otherwise, they’ll be more religious.” I imagine he’s not able to rule out the possibility that their degree of religiosity remains about the same, either.

One argument that Hanson makes considerable play of is a dichotomy in value systems associated with forager communities and farmer communities. He argues that modern societies have moved away from the communitarian values of farming societies back towards the more individualistic values that he believes characterised forager societies. On this basis, having argued that, for many ems, farmer-like values will once again be more favoured, he predicts that these ems will tend to prioritise self-sacrifice, patriotism and hard work.

This general line of argument has a long pedigree, essentially following the Marxist principle that it is a society’s mode of production which determines the superstructure of its institutions and values, with a more recent gloss from evolutionary psychology. The specific farmer/forager dichotomy will seem problematic to many on empirical grounds, though. How do we know what forager values actually were? Very few forager societies survived in any form into historical times; the handful that did may have been influenced by surrounding farmer communities, and what we know about their values is mediated by the biases of the anthropologists and ethnographers who recorded them. Most of what we know about foragers and hunter-gatherers necessarily comes from archaeology, which unavoidably deals in the material remains of vanished cultures. The archaeological study of prehistoric mentalities is a fascinating and active field, but a methodologically difficult one. The early tendency was to argue on the basis of analogies with historical forager communities, now recognised to be problematic for the reasons we’ve just seen, while the nature of what remains to be studied naturally and inevitably biases archaeologists towards materialist explanations.

Even if one accepts a correlation between a society’s mode of production and the character of its predominant social institutions and values, it’s not at all clear in which direction causality runs. There’s a fashionable (and to me pretty convincing) line of argument from economists like Daron Acemoglu that the quality of a society’s institutions is a prime determinant of their economic success. Meanwhile a dominant strain of thinking about the origins of the historical transition to an industrial economy puts ideals and values ahead of materialist explanations such as the availability of fossil fuels. In the latter argument I’m personally much more in the materialist camp, but I find it difficult to reject the idea that the economic base of a society and its values and institutions must co-develop, rather than one simply being determined by the other.

If the empirical underpinnings of the forager/farmer polarity are dubious, its applicability to Hanson’s hypothetical future seems even more difficult to justify. The question that has to arise here is why one should believe that the opposition is strictly binary. There’ve been many different ways in which economies have been organised in the past – the slave economies of antiquity, feudal systems, nomadic pastoralism, capitalist industrial societies, state socialist economies, and so on – and it’s easy to argue that each has been accompanied by its own particular package of institutions and values. Given the massive scale of change Hanson is anticipating in his post-human economy, it’s difficult to see why we shouldn’t expect the emergence of an entirely new package of values, which to us would probably seem very alien, rather than a reversion to a set of values supposed to be appropriate to some previous historical state.

So how should one read “The Age of Em” – what genre of writing should it be ascribed to? In my opinion it doesn’t succeed as a straight work of non-fiction; the technical underpinnings of its premise are not credible, and the social science bases of its speculations, interesting though they are, are not, to my mind, robust enough to sustain the weight of argument erected on them. On the other hand, it is clearly not by itself science fiction. It’s certainly an impressive exercise in world-building, which, with the addition of plot and character, would have the potential to make a spectacular series of novels.

But it occurs to me that the book might best be thought of as a Utopia, in the sense of Thomas More’s original. Stylistically, one can see the relationship, in the travelogue-like tone of the writing, dispassionate but not at all unsympathetic to the inhabitants of the strange world he’s describing. And there’s an ambiguity about what a reader might take to be the purpose of the exercise. What is described is a world which to some readers, perhaps, might seem admirable and enviable. It’s a world in which the vicissitudes and distractions of the flesh are absent, and as described by Hanson it’s a competitive world, meritocratic on the basis of pure intellect and character. Since the basic social unit consists of multiple emulations of a successful individual, readers who identify themselves with one of the “uploads” can imagine themselves surrounded by people just like them.

Or perhaps we should read it, as some have read More’s Utopia, as a satire on current society. What, we might ask, would a description of an economy completely decoupled from the needs and desires of flesh-and-blood human beings tell us about our world today?

Even more debate on transhumanism

Following on from my short e-book “Against Transhumanism: the delusion of technological transcendence” (available free for download: Against Transhumanism, v1.0, PDF 650 kB), I have a long interview on the Singularity Weblog available as a podcast or video – “Richard Jones on Against Transhumanism”.

To quote my interviewer, Nikola Danaylov, “During our 75 min discussion with Prof. Richard Jones we cover a variety of interesting topics such as: his general work in nanotechnology, his book and blog on the topic; whether technological progress is accelerating or not; transhumanism, Ray Kurzweil and technological determinism; physics, Platonism and Frank J. Tipler‘s claim that “the singularity is inevitable”; the strange ideological routes of transhumanism; Eric Drexler’s vision of nanotechnology as reducing the material world to software; the over-representation of physicists on both sides of the transhumanism and AI debate; mind uploading and the importance of molecules as the most fundamental units of biological processing; Aubrey de Grey‘s quest for indefinite life extension; the importance of ethics and politics…”

For an earlier round-up of other reactions to the e-book, see here.

How cheaper steel makes nights out more expensive (and why that’s a good thing)

If you were a well-to-do Londoner in the mid-to-late 18th century, 1 shilling and sixpence would buy you a decent seat for a night out at the opera. Alternatively, if you were a London craftsman – a cutler or a tool-maker – the same money would allow you to buy in a kilogram of the finest Sheffield steel, made by Benjamin Huntsman’s revolutionary new crucible process. A reasonable estimate of inflation since 1770 or so would put the current value of one and six at about ten pounds. I don’t get to go out in London very much, and in any case opera is far from my favourite entertainment, but I strongly suspect that £10 today would barely buy you a gin and tonic in the Covent Garden bar, let alone a seat in that historic opera house. A hundred pounds might be more like it as a minimum for a night at the London opera now – and for that money you could buy not one, but a hundred kilograms of high quality tool-steel (though more likely from China than Sheffield).
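That inflation estimate can be checked with a quick compound-interest calculation. A sketch: the 1770 start date and the £10 present value are taken from the text (one and six is 1.5 shillings, i.e. £0.075), and the end year of 2016 is my assumption.

```python
# What average annual inflation rate turns £0.075 in 1770 into ~£10 today?

def implied_inflation(old_price, new_price, years):
    """Average annual inflation rate linking two price levels."""
    return (new_price / old_price) ** (1 / years) - 1

rate = implied_inflation(0.075, 10.0, 2016 - 1770)
print(f"implied average inflation: {rate:.2%} per year")
```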

This illustrates a phenomenon first identified by the economist William Baumol: in an economy in which one sector (typically some branch of manufacturing) sees rapid productivity gains while another (typically a service sector – such as entertainment in this example) does not, the product of the low-productivity sector will see an increase in its real price.
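Baumol’s effect can be sketched with a toy calculation. The 2% productivity gap and the 250-year span below are illustrative assumptions, chosen to roughly match the steel-and-opera example above; the idea is that if wages track productivity in the fast-improving sector, the relative price of the stagnant sector’s product compounds upwards.

```python
# Relative price of the stagnant sector's output, starting from parity,
# when the other sector's productivity grows faster year on year.

def relative_price(years, fast_growth=0.02, slow_growth=0.0):
    """Price of the stagnant sector's product relative to the productive one's."""
    return ((1 + fast_growth) / (1 + slow_growth)) ** years

# After ~250 years with a 2% productivity gap (steel vs opera):
print(f"relative price of a night at the opera: x{relative_price(250):.0f}")
```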

Innovation, research and the UK’s productivity crisis

My article on the UK’s productivity slowdown has now been published as a Sheffield Political Economy Research Institute Paper, and is available for download here. Here is its introduction/summary:

The UK is in the midst of an unprecedented peacetime slowdown in productivity growth, which comes on top of the nation’s long-standing productivity weakness compared to the USA, France and Germany. If this trend continues, UK living standards will continue to stagnate and the government’s ambition to eliminate the deficit will fail. Productivity growth is connected with innovation, in its broadest sense, so it is natural to explore the connection between the UK’s poor productivity performance and the low R&D intensity of its economy. More careful analyses of productivity look at the performance of individual sectors and allow some more detailed explanations of the productivity slowdown to be tested. The decline of North Sea oil and gas and the end of the financial services bubble have a special role in the UK’s poor recent performance; these do not explain the whole problem, but they will present a headwind that the economy will have to overcome in the coming years. In response, the UK government will need to take a more active role in procuring and driving technological innovation, particularly in areas where such innovation is needed to meet the strategic goals of the state. We need a new political economy of technological innovation.

SPERI-Paper-28-Innovation-research-and-the-UK-productivity-crisis cover

UK productivity – still no sign of recovery

The UK’s Office for National Statistics today released the latest figures for labour productivity, to the end of 2015. These show that the apparent recovery in productivity that seemed to be getting going halfway through last year was yet another false dawn; productivity has flat-lined since the financial crisis, with the Q4 2015 value actually below the peak achieved in 2007. This performance puts us on track for the worst decade in a century. Poor productivity growth translates directly into stagnating living standards and lower tax revenues for the government, meaning that, despite austerity, all their efforts to eliminate the fiscal deficit will be in vain.

As this is perhaps the most serious economic problem currently facing the UK, it’s good to see the issue becoming more widely discussed. It’s an issue I’ve been thinking about for some time; my post on the political implications of the productivity slowdown, as revealed by this March’s budget and its aftermath, is here: The political fallout of the UK’s productivity problem. Last summer, I wrote a series of blogposts exploring the origins of this productivity slowdown. I’ve written a draft paper based on a substantially revised and updated version of those posts:

Innovation, research, and the UK’s productivity crisis (1.4 MB PDF).

quarterly productivity Q4 2015

Labour productivity: output per hour. ONS Labour Productivity Dataset, 7 April 2016.