What has science policy ever done for Barnsley?

Cambridge’s Centre for Science and Policy, where I am currently a visiting fellow, held a roundtable discussion yesterday on the challenges for science policy posed by today’s politics post-Brexit, post-Trump, introduced by Harvard’s Sheila Jasanoff and me. This is an expanded and revised version of my opening remarks.

I’m currently commuting between Sheffield and Cambridge, so the contrast between the two cities is particularly obvious to me at the moment. Cambridgeshire is one of the few regions of the UK that is richer than the average, with a GVA per head of £27,203 (the skewness of the UK’s regional income distribution, arising from London’s extraordinary dominance, leads to the statistical oddity that most of the country is poorer than the average). Sheffield, on the other hand, is one of the less prosperous provincial cities, with a GVA per head of £19,958. But Sheffield doesn’t do so badly compared with some of the smaller towns and cities in its hinterland – Barnsley, Rotherham and Doncaster – whose GVA per head, at £15,707, isn’t much more than half of Cambridgeshire’s.

This disparity in wealth is reflected in the politics. In the EU Referendum, Cambridge voted overwhelmingly – 74% – for Remain, while Barnsley, Rotherham and Doncaster voted almost as overwhelmingly – 68 or 69% – to Leave. The same story could be told of many other places in the country – Dudley, in the West Midlands; Teesside, in the Northeast; Blackburn, in the Northwest. This is not just a northern phenomenon, as shown by the example of Medway, in the Southeast. These are all places with poorly performing local economies, which have failed to recover from the deindustrialisation of the 1980s. They have poor levels of educational attainment, low participation in higher education, poor social mobility, low investment, low rates of business start-ups and growth – and they all voted overwhelmingly to leave the EU.

Somehow, all those earnest and passionate statements by eminent scientists and academics about the importance for science of remaining in the EU cut no ice in Barnsley. And why should they? We heard about the importance of EU funding for science, of the need to attract the best international scientists, of how proud we should be of the excellence of UK science. If Leave voters in Barnsley thought about science at all, they might be forgiven for thinking that science was to be regarded as an ornament to a prosperous society, when that prosperity was something from which they themselves were excluded.

Of course, there is another argument for science, which stresses its role in promoting economic growth. That is exemplified, of course, here in Cambridge, where it is easy to make the case that the city’s current obvious prosperity is strongly connected with its vibrant science-based economy. This is underpinned by substantial public sector research spending, which is then more than matched by a high level of private sector innovation and R&D, both from large firms and fast growing start-ups supported by a vibrant venture capital sector.

The figures for regional R&D bear this out. East Anglia has a total R&D expenditure of €1,388 per capita – it’s a highly R&D intensive economy. This is underpinned by the €472 per capita that’s spent in universities, government and non-profit laboratories, but is dominated by the €914 per capita spent in the private sector, directly creating wealth and economic growth. This is what a science-based knowledge economy looks like.

South Yorkshire looks very different. The total level of R&D is less than a fifth of the figure for East Anglia, at €244 per capita; and this is dominated by higher education, which carries out R&D worth €156 per capita. Business R&D is less than 10% of the figure for East Anglia, at €80 per capita. This is an economy in which R&D plays very little role outside the university sector.

An interesting third contrast is Inner London, which is almost as R&D intensive overall as East Anglia, with a total R&D expenditure of €1,130 per capita. But here the figure is dominated not by the private sector, which does €323 per capita R&D, but by higher education and government, at €815 per capita. A visitor to London from Barnsley, getting off the train at St Pancras and marvelling at the architecture of the new Crick Institute, might well wonder whether this was indeed science as an ornament to a prosperous society.
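To make the contrast concrete, here is a minimal sketch (plain Python, using only the per-capita figures quoted above; rounding means the components don’t sum exactly to the totals) of the business share of total R&D in each of the three regions:

```python
# Business share of total R&D, from the per-capita figures quoted above
# (euros per head; components don't sum exactly to totals due to rounding).
regions = {
    "East Anglia":     {"total": 1388, "business": 914},
    "South Yorkshire": {"total": 244,  "business": 80},
    "Inner London":    {"total": 1130, "business": 323},
}

for name, r in regions.items():
    share = r["business"] / r["total"]
    print(f"{name:15s} business R&D €{r['business']:>3}/head ({share:.0%} of total)")
```

On these figures, two-thirds of East Anglia’s R&D is performed by business, against a third in South Yorkshire and under 30% in Inner London.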

To be fair, governments have begun to recognise these issues of regional disparities. I’d date the beginning of this line of thinking back to the immediate period after the financial crisis, when Peter Mandelson returned from Brussels to take charge of the new super-ministry of Business, Innovation and Skills. Newly enthused about the importance of industrial strategy, summarised in the 2009 document “New Industry, New Jobs”, he launched the notion that the economy needed to be “rebalanced”, both sectorally and regionally.

We’ve heard a lot about “rebalancing” since. At the aggregate level there has not been much success, but, to be fair, the remarkable resurgence of the automobile industry perhaps does owe something to the measures introduced by Mandelson’s BIS and InnovateUK, and continued by the Coalition, to support innovation, skills and supply chain development in this sector.

One area in which there was a definite discontinuity in policy on the arrival of the Coalition government in 2010 was the abrupt abolition of the Regional Development Agencies. They were replaced by “Local Enterprise Partnerships”, rather loosely structured confederations of local government representatives and private sector actors (including universities), with private sector chairs. One good point about LEPs was that they tended to be centred on City Regions, which make more sense as economic entities than the larger regions of the RDAs, though this did introduce some political complexity. Their bad points were that they had very few resources at their disposal, they had little analytical capacity, and their lack of political legitimacy made it difficult for them to set any real priorities.

Towards the end of the Coalition government, the idea of “place” made an unexpected and more explicit appearance in the science policy arena. A new science strategy appeared in December 2014 – “Our Plan for Growth: Science and Innovation” – which listed “place” as one of five underpinning principles (the others being “Excellence, Agility, Collaboration, and Openness”).

What was meant by “place” here was, like much else in this strategy, conceptually muddled. On the one hand, it seemed to be celebrating the clustering effect, by which so much science was concentrated in places like Cambridge and London. On the other hand, it seemed to be calling for science investment to be more explicitly linked with regional economic development.

It is this second sense that has subsequently been developed by the new, all-Conservative government. The Science Minister, Jo Johnson, announced in a speech in Sheffield the notion of “One Nation Science” – the idea that science should be the route to redressing the big differences in productivity between the regions of the UK.

The key instrument for this “place agenda” was to be the “Science and Innovation Audits” – assessments of the areas of strength in science and innovation in the regions, and suggestions for where opportunities might exist to use and build on these to drive economic growth.

I have been closely involved in the preparation of the Science and Innovation Audit for Sheffield City Region and Lancashire, which was recently published by the government. I don’t want to go into detail about the Science and Innovation Audit process or its outcomes here – instead I want to pose the general question about what science policy can do for “left behind” regions like Barnsley or Blackburn.

It seems obvious to me that “trophy science” – science as an ornament for a prosperous society – will be no help. And while the model of Cambridge – a dynamic, science-based economy, with private sector innovation, venture capital, and generous public funding for research attracting global talent – would be wonderful to emulate, that’s not going to happen. It arose in Cambridge from the convergence of many factors over many years, and there are not many places in the world where one can realistically expect this to happen again.

Instead, the focus needs to be much more on the translational research facilities that will attract inward investment from companies operating at the technology frontier, on mechanisms to diffuse the use of new technology quickly into existing businesses, and on technical skills at all levels, not just the highest. The government must have a role, not just in supporting those research facilities and skills initiatives, but also in driving the demand for innovation, as the customer for the new technologies that will be needed to meet its strategic goals (for a concrete proposal of how this might work, see Stian Westlake’s blogpost “If not a DARPA, then what? The Advanced Systems Agency”).

The question “What have you lot ever done for Barnsley?” is one that I was directly asked, by Sir Steve Houghton, leader of Barnsley Council, just over a year ago, at the signing ceremony for the Sheffield City Region Devo Deal. I thought it was a good question, and I went to see him later with a considered answer. We have, in the Advanced Manufacturing Research Centre, a great translational engineering research facility that demonstrably attracts investment to the region and boosts the productivity of local firms. We have more than 400 apprentices in our training centre, most sponsored by local firms, not only getting a first-class training in practical engineering (some delivered in collaboration with Barnsley College), but also with the prospect of a tailored path to higher education and beyond. We do schools outreach and public engagement, and we work with Barnsley Hospital to develop new medical technologies that directly benefit his constituents. I’m sure he still thinks we can do more, but he shouldn’t think we don’t care any more.

The referendum was an object lesson in how little the strongly held views of scientists (and other members of the elite) influenced the voters in many parts of the country. For them, the interventions in the referendum campaign by leading scientists had about as much traction as the journal Nature’s endorsement of Hillary Clinton did across the Atlantic. I don’t think science policy has done anything like enough to answer the question, what have you lot done for Barnsley … or Merthyr Tydfil, or Dudley, or Medway, or any of the many other parts of the country that don’t share the prosperity of Cambridge, or Oxford, or London. That needs to change now.

Optimism – and realism – about solar energy

10 days ago I was fortunate enough to attend the Winton Symposium in Cambridge (where I’m currently spending some time as a visiting researcher in the Optoelectronics Group at the Cavendish Laboratory). The subject of the symposium was Harvesting the Energy of the Sun, and it had a stellar cast of international speakers addressing different aspects of the subject. This post sums up some of what I learnt from the day about the future potential for solar energy, together with some of my own reflections.

The growth of solar power – and the fall in its cost – over the last decade has been spectacular. Currently the world is producing about 10 billion standard 5 W silicon solar cells a year, at a current cost of €1.29 each; the unsubsidised cost of solar power in the sunnier parts of the world is heading down towards 5 cents a kWh, and at current capacity and demand levels, we should see 1 TW of solar power capacity in the world by 2030, compared to current estimates that installed capacity will reach about 300 GW at the end of this year (with 70 GW of that added in 2016).
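Taking those quoted figures at face value, a rough back-of-the-envelope check (my own arithmetic, not official statistics) relates cell production to capacity and to cost per watt:

```python
# Rough arithmetic on the production figures quoted above.
cells_per_year = 10e9     # standard silicon cells produced annually
watts_per_cell = 5        # W per standard cell
cost_per_cell = 1.29      # euros

annual_production_gw = cells_per_year * watts_per_cell / 1e9
cost_per_watt = cost_per_cell / watts_per_cell

print(f"implied cell production: {annual_production_gw:.0f} GW per year")
print(f"implied cell cost: €{cost_per_watt:.2f} per watt")
```

That’s about 50 GW of cells a year at roughly €0.26 per watt – the same ballpark as the ~70 GW of capacity installed in 2016.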

But that’s not enough. The Paris Agreement – ratified so far by major emitters such as the USA, China, India, France and Germany (with the UK promising to ratify by the end of the year – but President-Elect Trump threatening to take the USA out) – commits countries to taking action to keep the average global temperature rise from pre-industrial times to below 2°C. Already the average temperature has risen by one degree or so, and currently the rate of increase is about 0.17°C a decade. The point stressed by Sir David King was that it isn’t enough just to look at the consequences of the central prediction, worrying enough though they might be – one needs to insure against the very real risks of more extreme outcomes. What concerns governments in India and China, for example, is the risk of the successive failure of three rice harvests.

To achieve the Paris targets, the installed solar capacity we’re going to need by 2030 is estimated as being in the range 8-10 TW nominal; this would require a 22-25% annual growth rate in manufacturing capacity.
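A rough compounding check of that figure (my own toy model: manufacturing output grows steadily from about 70 GW/year in 2016, on top of roughly 300 GW already installed):

```python
# Cumulative installed solar capacity by 2030 if manufacturing output
# grows at a steady annual rate r -- a rough model of the figures above.
def installed_by_2030(r, base_gw=300, output_gw=70, years=14):
    total = base_gw
    for _ in range(years):
        output_gw *= 1 + r          # manufacturing capacity compounds
        total += output_gw          # each year's output gets installed
    return total

for r in (0.22, 0.25):
    print(f"{r:.0%} annual growth -> {installed_by_2030(r) / 1000:.1f} TW by 2030")
```

On this crude model, 22-25% growth yields roughly 6-8 TW; the exact answer is sensitive to the base-year assumptions, but the scale of the required expansion is clear.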

Is nuclear power obsolete?

After a summer hiccough, the new UK government has finally signed the deal with the French nuclear company EDF and its Chinese financial backers to build a new nuclear power station at Hinkley Point. My belief that this is a monumentally bad deal for the UK has not changed since I wrote about it three years ago, here: The UK’s nuclear new build: too expensive, too late.

The way the deal has been structured simultaneously maximises the cost to UK citizens while minimising the benefits that will accrue to UK industry. It’s the fallacy of the private finance initiative exposed by reductio ad absurdum; the government has signed up to a 35-year guarantee of excessively high prices for UK consumers, driven by the political desire to keep borrowing off the government’s balance sheet and maintain the fiction that nuclear power can be efficiently delivered by the private sector.

But there’s another argument against the Hinkley deal that I want to look at more critically – the idea that nuclear power is now obsolete, because with new technologies like wind, solar and electric cars, we are, or soon will be, able to supply the 3.2 GW of low-carbon power that Hinkley promises at lower marginal cost. I think this marginal cost argument is profoundly wrong – given the need to make substantial progress in decarbonising our energy system over the next thirty years, what’s important isn’t the marginal cost of the next GW of low-carbon power, it’s the total cost (and indeed feasibility) of replacing the 160 GW or so that represents our current fossil fuel based consumption (not to mention replacing the 9.5 GW of existing nuclear capacity, fast approaching the end of its working lifetime).

To get a sense of the scale of the task, in 2015 the UK used about 2400 TWh of primary energy inputs. 83% of that was in the form of fossil fuels – roughly 800 TWh each of oil and gas, and a bit less than 300 TWh of coal. The 3.2 GW output of Hinkley would contribute 30 TWh pa at full capacity, while the combined output of all wind (onshore and offshore) and solar generation in 2015 was 48 TWh. So if we increased our solar and wind capacity by a bit more than half, we could replace Hinkley’s contribution; this is indeed probably doable, and given the stupidly expensive nature of the Hinkley deal, we might well be able to do it more cheaply.

But that’s not all we need to do, not by a long way. If we are serious about decarbonising our energy supply (and we should be: for my reasons, please read this earlier post Climate change: what do we know for sure, and what is less certain?) we need to find, not 30 TWh a year, but more like 1500 TWh a year, of low-carbon energy. It’s not one Hinkley Point we need, but 50 of them.
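The arithmetic behind these claims is easy to check (a sketch using the figures in the text, assuming continuous full-power operation for the “full capacity” figure):

```python
# Checking the energy arithmetic quoted above.
hinkley_gw = 3.2
hours_per_year = 8760

hinkley_twh = hinkley_gw * hours_per_year / 1000  # ~28 TWh/yr, rounded to 30 in the text
wind_solar_2015_twh = 48
low_carbon_needed_twh = 1500

print(f"Hinkley at full capacity: ~{hinkley_twh:.0f} TWh/yr")
print(f"extra wind+solar needed to match it: {hinkley_twh / wind_solar_2015_twh:.0%}")
print(f"'Hinkleys' needed for ~1500 TWh/yr: {low_carbon_needed_twh / 30:.0f}")
```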

What can’t be stressed too often, in thinking about the UK’s energy supply, is that most of the energy we use (82% in 2015) is not in the form of electricity, but of directly burnt oil and gas.

The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, which matched colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as the title for a project which forms part of Sheffield University’s Festival of the Mind; for it, six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort, from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he retains one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
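As a toy illustration of Maxwell’s idea, here is a sketch with three idealised Gaussian receptor sensitivities (the peak wavelengths and widths are my own illustrative choices, not measured human cone data), showing why magenta can’t appear in the rainbow:

```python
import numpy as np

# Toy trichromacy: three idealised Gaussian receptor sensitivities.
wl = np.linspace(380, 700, 321)  # visible wavelengths, nm

def receptor(peak, width=40):
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

S, M, L = receptor(440), receptor(540), receptor(570)

def response(spectrum):
    """Relative (L, M, S) responses: the spectrum weighted by each sensitivity."""
    return np.array([(spectrum * c).sum() for c in (L, M, S)])

def line(peak, width=5):
    """A narrow spectral line -- a single 'rainbow' colour."""
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

print("orange, 600 nm      (L, M, S):", response(line(600)).round(1))
print("cyan, 500 nm        (L, M, S):", response(line(500)).round(1))
print("red + blue mixture  (L, M, S):", response(line(630) + line(450)).round(1))
```

The red-plus-blue mixture excites the long- and short-wavelength receptors while leaving the middle one comparatively quiet – a response pattern no single wavelength can produce, because any wavelength lying between the red and blue peaks necessarily excites the middle receptor too. That pattern is what we perceive as magenta.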

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures in beetle scales and bird feathers that produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects produced by some structures that combine order and randomness in rather a striking way; our hope is to understand the mechanism by which these structures form, and then to reproduce them in synthetic systems.

What we’ve come to realise, as we speculate about the origin of these biological mechanisms, is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour – perception that is likely to be quite alien to our own. Birds, for example, have not three different types of colour receptors, as humans do, but four. This means not just that birds can detect light outside the human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our X-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

Your mind will not be uploaded – the shorter version

The idea that it’s going to be possible, in the foreseeable future, to “upload” a human mind to a computer is, I believe, quite wrong. The difficulties are both practical and conceptual, as I explained at length and in technical detail in my earlier post Your mind will not be uploaded.

I’ve now summarised the argument against mind uploading in much shorter and more readable form in a piece for The Conversation – a syndication site for academic writers. I’m pleased to see that the piece – Could we upload a brain to a computer – and should we even try? – has had more than 100,000 readers.

It’s led to another career milestone, one that I’m a little more ambivalent about – my first by-line on the Daily Mail website: Would you upload YOUR brain to a computer? Experts reveal what it would take to live forever digitally. There was also a translation into Spanish in the newspaper El Pais: ¿Podríamos cargar nuestro cerebro en un ordenador?, and into German in the online magazine Netzpiloten: Könnten wir ein Gehirn hochladen – und sollten wir es überhaupt versuchen?

How big should the UK manufacturing sector be?

Last Friday I made a visit to HM Treasury, for a round table with the Productivity and Growth Team. My presentation (PDF of the slides here: The UK’s productivity problem – the role of innovation and R&D) covered, very quickly, the ground of my two SPERI papers, The UK’s innovation deficit and how to repair it, and Innovation, research and the UK’s productivity crisis.

The plot that provoked the most comment was this one, from a recent post, showing the contributions of different sectors to the UK’s productivity growth over the medium term. It’s tempting, on a superficial glance, to interpret it as saying that the UK’s productivity problem is a simple consequence of its manufacturing and ICT sectors having been allowed to shrink too far. I think this conclusion is actually broadly correct; I suspect that the UK economy has suffered from a case of “Dutch disease”, in which more productive sectors producing tradable goods have been squeezed out by the resource boom of North Sea oil and a financial services bubble. But I recognise that this conclusion does not follow quite as straightforwardly from this plot alone as one might at first think.


Multifactor productivity growth in selected UK sectors and subsectors since 1972. Data: EU KLEMS database, rebased to 1972=1.

The plot shows multi-factor productivity (aka total factor productivity) for various sectors and subsectors in the UK. Increases in total factor productivity are, in effect, that part of the increase in output that’s not accounted for by extra inputs of labour and capital; this is taken by economists to represent a measure of innovation, in some very general sense.
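In standard growth-accounting terms (the usual Solow-residual formulation, stated generically rather than as the specific EU KLEMS methodology; α is the capital share of income, Y output, K capital and L labour input):

$$ \Delta \ln A \;=\; \Delta \ln Y \;-\; \alpha\,\Delta \ln K \;-\; (1-\alpha)\,\Delta \ln L $$

The residual ΔlnA – the output growth left over once the contributions of capital and labour are subtracted – is what the plot tracks for each sector.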

The central message is clear. In the medium run, over a 40-year period, the manufacturing sector has seen a consistent increase in total factor productivity, while in the service sectors total factor productivity increases have been at best small, and in some cases negative. The case of financial services, which form such a dominant part of the UK economy, is particularly interesting. The years leading up to the financial crisis (2001-2008) showed a strong improvement in total factor productivity, which has since fallen back somewhat; but over the whole period since 1972 there has been no net growth in total factor productivity in financial services at all.

We can’t, however, simply conclude from these numbers that manufacturing has been the only driver of overall total factor productivity growth in the UK economy. Firstly, these broad sector classifications conceal a distribution of differently performing sub-sectors. Over this period the two leading sub-sectors are chemicals and telecommunications (the latter a sub-sector of information and communication).

Secondly, there have been significant shifts in the composition of the economy over this period, with the manufacturing sector shrinking in favour of services. My plot only shows rates of productivity growth, not absolute levels; the overall productivity of the economy could improve if there is a shift from manufacturing to higher-value services, even if productivity in those sectors subsequently grows less fast. Thus a shift from manufacturing to financial services could lead to an initial rise in overall productivity, followed eventually by slower growth.
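A toy two-sector example (invented numbers, purely illustrative) makes the point:

```python
# Toy illustration (invented numbers): shifting activity towards a
# higher-level but stagnant sector raises the aggregate productivity
# *level* at first, but slows its subsequent *growth*.
mfg_level, mfg_growth = 100.0, 0.03   # manufacturing: lower level, 3%/yr
fin_level, fin_growth = 150.0, 0.00   # finance: higher level, no growth

for label, mfg_share in (("manufacturing-heavy", 0.8), ("finance-heavy", 0.2)):
    m, f = mfg_level, fin_level
    start = mfg_share * m + (1 - mfg_share) * f
    for _ in range(20):               # twenty years of sectoral growth
        m *= 1 + mfg_growth
        f *= 1 + fin_growth
    end = mfg_share * m + (1 - mfg_share) * f
    print(f"{label:20s} level {start:.0f} -> {end:.0f} "
          f"({(end / start) ** (1 / 20) - 1:.1%}/yr)")
```

The finance-heavy economy starts at a higher productivity level, but thereafter grows at a fraction of the rate of the manufacturing-heavy one.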

Moreover, within each sector and subsector there’s a wide dispersion of productivity performance, not just at sub-sector level, but at the level of individual firms. One interpretation of the rise in manufacturing productivity in the early 1980s is that it reflects the disappearance of many lower performing firms during that period’s rapid de-industrialisation. On the other hand, a recent OECD report (The Future of Productivity, PDF) highlights what seems to be a global phenomenon since the financial crisis, in which a growing gap has opened up between the highest performing firms, where productivity has continued to grow, and a long tail of less well performing firms whose productivity has stagnated.

I don’t think there’s any reason to believe that the UK manufacturing sector, though small, is particularly innovative or high performing as a whole. Some relatively old data from Hughes and Mina (PDF) shows that the overall R&D intensity of the UK’s manufacturing sector – expressed as the ratio of manufacturing R&D to manufacturing gross value added – was lower than that of competitor nations, and moving in the wrong direction.

This isn’t to say, of course, that there aren’t outstandingly innovative UK manufacturing operations. There clearly are; the issue is whether there are enough of them relative to the overall scale of the UK economy, and whether their innovations and practices are diffusing fast enough to the long tail of manufacturing operations that are further from the technological frontier.

The Utopia of the Machines

What would a society and economy look like if it were composed, not of flesh-and-blood humans, but of emulations of human minds – some occupying robots of all speeds, shapes and sizes, others completely disembodied, running in virtual reality simulations in city-size cloud computing facilities? This is the premise of a sustained exercise in futurology by the economist Robin Hanson, in his recently published book “The Age of Em”.

This vision is underpinned by Hanson’s confidence that economic growth is destined to accelerate, driven by technological progress in computer power and nanotechnology, together with his transhumanist conviction that technology will bring about irreversible and far-reaching changes in the human condition.

But his vision, radical though it may seem, is tempered by conservatism in two respects. Unlike many transhumanists and singularitarians, he is deeply sceptical about the possibilities of creating artificial general intelligence. This is interesting, given that Hanson’s technical expertise, before becoming an academic economist, was in the field of AI. Secondly, he is remarkably confident about the applicability of his current understanding of social science to the dramatically changed circumstances of his vision of the future, which implies a degree of constancy of human nature even in the face of dramatic changes in its material circumstances.

While Hanson may be sceptical about the possibility of hand-coded artificial general intelligence, he is not sceptical enough about the idea of mind uploading. I’ve described at length why I think, with some confidence, that it will not be possible any time soon to simulate the operation of a human brain with enough fidelity to constitute a meaningful emulation of the mind (in my e-book Against Transhumanism, v1.0, PDF 650 kB – the most relevant chapter of which appeared on this blog as “Your mind will not be uploaded”). Rather than summarising a long argument I’ve made elsewhere, here I’ll just pick out a few key points.

The first is to stress that the basic unit of computation in the brain is not the neuron, or even the synapse, but the molecule. This means that Ray Kurzweil-style back-of-the-envelope comparisons between the numbers of neurons in brains and the future numbers of transistors in microprocessors, as extrapolated from Moore’s Law, are wrong by multiple orders of magnitude.
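To put rough numbers on this (every count here is an order-of-magnitude assumption on my part, not a measurement):

```python
# Order-of-magnitude sketch (assumed counts, not data) of why counting
# neurons understates the brain's state space if the molecule, not the
# neuron, is taken as the basic computational unit.
neurons = 1e11                # commonly quoted order of magnitude
synapses_per_neuron = 1e4
molecules_per_synapse = 1e5   # assumed: receptors, kinases, scaffold proteins...

synapses = neurons * synapses_per_neuron
molecular_state_variables = synapses * molecules_per_synapse

print(f"neurons:                   ~{neurons:.0e}")
print(f"synapses:                  ~{synapses:.0e}")
print(f"molecular state variables: ~{molecular_state_variables:.0e}")
```

On assumptions like these, a molecular-level description is some nine orders of magnitude bigger than a neuron count – which is the gap those back-of-the-envelope comparisons ignore.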

The second concerns the question of the correct level of coarse-graining at which it is sufficient to simulate the brain’s operation. To faithfully simulate the operation of a microprocessor, one doesn’t need to worry about what its individual atoms and electrons are doing, because there is a clean separation of the underlying solid state physics from the operation of the higher level components of the circuits, the transistors. It is this separation of levels that allows us to model the operations of the circuit at a level of digital abstraction, in terms of ones and zeros and the operation of logic gates. This doesn’t happen by accident; it is a product of how we design integrated circuits. The brain, however, is not the product of design, it is the product of evolution, and for this reason we can’t expect there to be such a digital abstraction layer.

A final point worth stressing arises from Hanson’s description of his “ems” – mind emulations – as fully formed individual consciousnesses capable of learning and changing. This means that the process of “uploading” a consciousness from a flesh-and-blood brain to a digital simulation needs to involve more than creating a snapshot of the brain in molecular detail at the moment of “uploading”, difficult enough though that is to envisage. This is because, in the operations of the brain, there are no firm distinctions between hardware and software – the processes of learning and development involve physical changes at both the molecular and physiological levels. So constructing our emulation would not just need a map of the connectivity of neurons and synapses and details of their molecular configurations at the moment of “upload”; it would need to incorporate a molecularly accurate model of brain development and plasticity, a task on an even greater scale.

The other strong claim of Hanson’s book concerns the predictive power of current social science. His argument is that our understanding of human nature and the operations of human societies – based largely on economics and evolutionary psychology – is now sufficiently robust that, even given the radical changes implied by human minds becoming unshackled from their fleshly bodies, meaningful predictions can be made about the character of the resulting post-human societies. I don’t find this enormously convincing.

One issue is that Hanson is often simply unable to make firm predictions; this is commendably even-handed, but somewhat undermines his broader argument. For example, he asks whether “ems” will be more or less religious than fleshly humans. It depends, it would seem, on how much importance em society attaches to innovation. “So if the innovation effect is important enough, ems will be less religious; otherwise, they’ll be more religious.” I imagine he’s not able to rule out the possibility that their degree of religiosity remains about the same, either.

One argument that Hanson makes considerable play of is a dichotomy in value systems associated with forager communities and farmer communities. He argues that modern societies have moved away from the communitarian values of farming societies, back towards the more individualistic values that he believes characterised forager societies. On this basis, having argued that for many ems farmer-like values will once again be more favoured, he predicts that these ems will tend to prioritise self-sacrifice, patriotism and hard work.

This general line of argument has a long pedigree, essentially following the Marxist principle that it is a society’s mode of production which determines the superstructure of its institutions and values, with a more recent gloss from evolutionary psychology. The specific farmer/forager dichotomy will seem problematic to many on empirical grounds, though. How do we know what forager values actually were? Very few forager societies survived in any form into historical times; the handful that did may have been influenced by surrounding farmer communities, and what we know about their values is mediated by the biases of the anthropologists and ethnographers who recorded them. Most of what we know about foragers and hunter-gatherers necessarily comes from archaeology, which unavoidably deals in the material remains of vanished cultures. The archaeological study of prehistoric mentalities is a fascinating and active field, but a methodologically difficult one. The early tendency was to argue on the basis of analogies with historical forager communities, now recognised to be problematic for the reasons we’ve just seen, while the nature of what remains to be studied naturally and inevitably biases archaeologists towards materialist explanations.

Even if one accepts a correlation between a society’s mode of production and the character of its predominant social institutions and values, it’s not at all clear in which direction causality runs. There’s a fashionable (and to me pretty convincing) line of argument from economists like Daron Acemoglu that the quality of a society’s institutions is a prime determinant of its economic success. Meanwhile, a dominant strain of thinking about the origins of the historical transition to an industrial economy puts ideals and values ahead of materialist explanations such as the availability of fossil fuels. In the latter argument I’m personally much more in the materialist camp, but I find it difficult to reject the idea that the economic base of a society and its values and institutions must co-develop, rather than one simply being determined by the other.

If the empirical underpinnings of the forager/farmer polarity are dubious, its applicability to Hanson’s hypothetical future seems even more difficult to justify. The question that has to arise here is why one should believe that the opposition is strictly binary. There have been many different ways in which economies have been organised in the past – the slave economies of antiquity, feudal systems, nomadic pastoralism, capitalist industrial societies, state socialist economies, and so on – and it’s easy to argue that each has been accompanied by its own particular package of institutions and values. Given the massive scale of change Hanson is anticipating in his post-human economy, it’s difficult to see why we shouldn’t expect the emergence of an entirely new package of values, which to us would probably seem very alien, rather than a reversion to a set of values supposed to be appropriate to some previous historical state.

So how should one read “The Age of Em” – what genre of writing should it be ascribed to? In my opinion it doesn’t succeed as a straight work of non-fiction; the technical underpinnings of its premise are not credible, and the social science bases of its speculations, interesting though they are, are not, to my mind, robust enough to sustain the weight of argument erected on them. On the other hand, it is clearly not by itself science fiction. It’s certainly an impressive exercise in world-building, which, with the addition of plot and character, would have the potential to make a spectacular series of novels.

But it occurs to me that the book might best be thought of as a Utopia, in the sense of Thomas More’s original. Stylistically, one can see the relationship, in the travelogue-like tone of the writing, dispassionate but not at all unsympathetic to the inhabitants of the strange world he’s describing. And there’s an ambiguity about what a reader might take to be the purpose of the exercise. What is described is a world which to some readers, perhaps, might seem admirable and enviable. It’s a world in which the vicissitudes and distractions of the flesh are absent, and as described by Hanson it’s a competitive world, meritocratic on the basis of pure intellect and character. Since the basic social unit consists of multiple emulations of a successful individual, readers who identify themselves with one of the “uploads” can imagine themselves surrounded by people just like them.

Or perhaps we should read it, as some have read More’s Utopia, as a satire on current society. What, we might ask, would a description of an economy completely decoupled from the needs and desires of flesh-and-blood human beings tell us about our world today?

Even more debate on transhumanism

Following on from my short e-book “Against Transhumanism: the delusion of technological transcendence” (available free for download: Against Transhumanism, v1.0, PDF 650 kB), I have a long interview on the Singularity Weblog available as a podcast or video – “Richard Jones on Against Transhumanism”.

To quote my interviewer, Nikola Danaylov, “During our 75 min discussion with Prof. Richard Jones we cover a variety of interesting topics such as: his general work in nanotechnology, his book and blog on the topic; whether technological progress is accelerating or not; transhumanism, Ray Kurzweil and technological determinism; physics, Platonism and Frank J. Tipler‘s claim that “the singularity is inevitable”; the strange ideological roots of transhumanism; Eric Drexler’s vision of nanotechnology as reducing the material world to software; the over-representation of physicists on both sides of the transhumanism and AI debate; mind uploading and the importance of molecules as the most fundamental units of biological processing; Aubrey de Grey‘s quest for indefinite life extension; the importance of ethics and politics…”

For an earlier round-up of other reactions to the e-book, see here.

How cheaper steel makes nights out more expensive (and why that’s a good thing)

If you were well-to-do in mid-to-late-18th century London, 1 shilling and sixpence would buy you a decent seat for a night out at the opera. Alternatively, if you were a London craftsman – a cutler or a tool-maker – the same money would allow you to buy in a kilogram of the finest Sheffield steel, made by Benjamin Huntsman’s revolutionary new crucible process. A reasonable estimate of inflation since 1770 or so would put the current value of one and six at about ten pounds. I don’t get to go out in London very much, and in any case opera is far from my favourite entertainment, but I strongly suspect that £10 today would barely buy you a gin and tonic in the Covent Garden bar, let alone a seat in that historic opera house. A hundred pounds might be more like it as a minimum for a night at the London opera now – and for that money you could buy not one, but a hundred kilograms of high quality tool-steel (though more likely from China than Sheffield).
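In today’s money, the relative price shift in that story is about a factor of a hundred (a sketch using the post’s own estimates):

```python
# Relative-price arithmetic from the figures quoted above: 1s6d in 1770
# is taken, per the estimate in the text, as about £10 in today's money.
then_real = 10.0         # £ today: one opera seat OR 1 kg of crucible steel
opera_now = 100.0        # £ for a night at the London opera now (estimate)
steel_now = 100.0 / 100  # £ per kg: £100 buys 100 kg of tool-steel now

print(f"opera: {opera_now / then_real:.0f}x dearer in real terms")
print(f"steel: {then_real / steel_now:.0f}x cheaper in real terms")
print(f"opera priced in steel: {(opera_now / then_real) * (then_real / steel_now):.0f}x")
```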

This illustrates a phenomenon first identified by the economist William Baumol: when one sector of an economy (typically some branch of manufacturing) sees rapid productivity gains while another (typically a service sector, such as entertainment in this example) does not, the product of the low-productivity sector will see an increase in its real price.

Innovation, research and the UK’s productivity crisis

My article on the UK’s productivity slowdown has now been published as a Sheffield Political Economy Research Institute Paper, and is available for download here. Here is its introduction/summary:

The UK is in the midst of an unprecedented peacetime slowdown in productivity growth, which comes on top of the nation’s long-standing productivity weakness compared to the USA, France and Germany. If this trend continues, UK living standards will continue to stagnate and the government’s ambition to eliminate the deficit will fail. Productivity growth is connected with innovation, in its broadest sense, so it is natural to explore the connection between the UK’s poor productivity performance and the low R&D intensity of its economy. More careful analyses of productivity look at the performance of individual sectors and allow some more detailed explanations of the productivity slowdown to be tested. The decline of North Sea oil and gas and the end of the financial services bubble have a special role in the UK’s poor recent performance; these do not explain all the problem, but they will provide a headwind that the economy will have to overcome over the coming years. In response, the UK government will need to take a more active role in procuring and driving technological innovation, particularly in areas where such innovation is needed to meet the strategic goals of the state. We need a new political economy of technological innovation.
