How Sheffield became Steel City: what local history can teach us about innovation

As someone interested in the history of innovation, I take great pleasure in seeing the many tangible reminders of the industrial revolution that are to be found where I live and work, in North Derbyshire and Sheffield. I get the impression that academics are sometimes a little snooty about local history, seeing it as the domain of amateurs and enthusiasts. If so, this would be a pity, because a deeper understanding of the histories of particular places could be helpful in providing some tests of, and illustrations for, the grand theories that are the currency of academics. I’ve recently read the late David Hey’s excellent “History of Sheffield”, and this prompted these reflections on what we can learn about the history of innovation from the example of this city, which became so famous for its steel industries. What can we learn from the rise (and fall) of steel in Sheffield?

Specialisation

“Ther was no man, for peril, dorste hym touche.
A Sheffeld thwitel baar he in his hose.”

The Reeve’s Tale, The Canterbury Tales, Chaucer.

When the Londoner Geoffrey Chaucer wrote these words, in the late 14th century, the reputation of Sheffield as a place that knives came from (Thwitel = whittle: a knife) was already established. As early as 1379, 25% of the population of Sheffield were listed as metal-workers. This was a degree of focus that was early, and well developed, but not completely exceptional – the development of medieval urban economies in response to widening patterns of trade was already leading towns to specialise on the basis of the particular advantages that location or natural resources gave them[1]. Towns like Halifax and Salisbury (and many others) were developing clusters in textiles, while other towns found narrower niches, like Burton-on-Trent’s twin trades of religious statuary and beer. Burton’s seemingly odd combination arose from the local deposits of gypsum [2]; what was behind Sheffield’s choice of blades?

I don’t think the answer to this question is at all obvious. Continue reading “How Sheffield became Steel City: what local history can teach us about innovation”

Batteries and electric vehicles – disruption may come sooner than you think

How fast can electric cars take over from fossil-fuelled vehicles? This partly depends on how quickly the world’s capacity for manufacturing batteries – especially the lithium-ion batteries that are currently the favoured technology for all-electric vehicles – can expand. The current world capacity for manufacturing the kind of batteries that power electric cars is 34 GWh, and, as has been widely publicised, Elon Musk plans to double this number, with Tesla’s giant battery factory currently under construction in Nevada. This joint venture with Japan’s Panasonic will bring another 35 GWh capacity on stream in the next few years. But, as a fascinating recent article in the FT makes clear (Electric cars: China’s battle for the battery market), Tesla isn’t the only player in this game. On the FT’s figures, by 2020, it’s expected that there will be a total of 174 GWh battery manufacturing capacity in the world – more than a fivefold increase. Of this, no less than 109 GWh will be in China.

What effect will this massive increase have on the markets? The demand for batteries – largely from electric vehicles – was just 11 GWh in 2015. Market penetration of electric vehicles is increasing, but it seems unlikely that demand will keep up with this huge increase in supply (one estimate projects demand in 2020 at 54 GWh). It seems inevitable that prices will fall in response to this coming glut – and batteries will end up being sold at less than the economically sustainable cost. The situation is reminiscent of what happened with silicon solar cells a few years ago – the same massive increase in manufacturing capacity, driven by China, resulting in big price falls – and the bankruptcy of many manufacturers.
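As a quick sanity check, the supply and demand figures quoted above can be put side by side. All the numbers are as quoted from the FT discussion; this is a rough sketch, not a forecast:

```python
# Battery manufacturing capacity vs demand, figures as quoted above (GWh/yr)
current_capacity = 34    # world Li-ion manufacturing capacity today
tesla_addition = 35      # planned Tesla/Panasonic Gigafactory contribution
capacity_2020 = 174      # projected world capacity by 2020
demand_2015 = 11         # actual demand in 2015
demand_2020 = 54         # one projection of demand in 2020

# How much capacity grows, and how much of it the projected demand would use
print(f"Capacity growth to 2020: {capacity_2020 / current_capacity:.1f}x")
print(f"Projected 2020 utilisation: {demand_2020 / capacity_2020:.0%}")
```

On these figures only about a third of the 2020 capacity would be needed to meet projected demand, which is the arithmetic behind the expectation of a glut.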

This recent report (PDF) from the US’s National Renewable Energy Laboratory helpfully breaks down some of the input costs of manufacturing batteries. Costs are lower in China than the USA, but labour costs form a relatively small part of this. The two dominating costs, by far, are the materials and the cost of capital. China has the advantage in materials costs by being closer to the centre of the materials supply chains, which are based largely in Korea, Japan and China – this is where a substantial amount of the value is generated.

If the market price falls below the minimum sustainable price – as I think it must – most of the slack will be taken up by the cost of capital. Effectively, some of the huge capital costs going into these new plants will, one way or another, be written off – Tesla’s shareholders will lose even more money, and China’s opaque financial system will end up absorbing the losses. There will undoubtedly be manufacturing efficiencies to be found, and technical improvements in the materials, often arising from precise control of their nanostructure, will lead to improvements in the cost-effectiveness of the batteries. This will, in turn, accelerate the uptake of electric vehicles – possibly encouraged by strong policy steers in China especially.

Even at relatively low penetration of electric vehicles compared with the internal combustion engine, in plausible scenarios (see for example this analysis from Imperial College’s Grantham Centre) they may displace enough oil to have a material impact on total demand, and thus keep a lid on oil prices, perhaps even leading to a peak in oil demand as early as 2020. This will upend many of the assumptions currently being made by the oil companies.

But the dramatic fall in the cost of lithium-ion batteries that this manufacturing overcapacity will produce will have other effects on the direction of technology development. It will create a strong force locking in the technology of lithium-ion batteries – other types of battery will struggle to establish themselves in competition with this incumbent technology (as we have seen with alternatives to silicon photovoltaics), and technological improvements are most likely to be found in the kinds of material tweaks that can easily fit into the massive materials supply chains that are developing.

To be parochial, the UK government has just trailed funding for a national research centre for battery technology. Given the UK’s relatively small presence in this area, and its distance from the key supply chains for materials for batteries, it is going to need to be very careful to identify those places where the UK is going to be in a position to extract value. Mass manufacture of lithium-ion batteries is probably not going to be one of those places.

Finally, why hasn’t John Goodenough (who has perhaps made the biggest contribution to the science of lithium-ion batteries in their current form) won the Nobel Prize for Chemistry yet?

Optimism – and realism – about solar energy

10 days ago I was fortunate enough to attend the Winton Symposium in Cambridge (where I’m currently spending some time as a visiting researcher in the Optoelectronics Group at the Cavendish Laboratory). The subject of the symposium was Harvesting the Energy of the Sun, and they had a stellar cast of international speakers addressing different aspects of the subject. This post sums up some of what I learnt from the day about the future potential for solar energy, together with some of my own reflections.

The growth of solar power – and the fall in its cost – over the last decade has been spectacular. Currently the world is producing about 10 billion standard 5 W silicon solar cells a year, at a current cost of €1.29 each; the unsubsidised cost of solar power in the sunnier parts of the world is heading down towards 5 cents a kWh, and at current capacity and demand levels, we should see 1 TW of solar power capacity in the world by 2030, compared to current estimates that installed capacity will reach about 300 GW at the end of this year (with 70 GW of that added in 2016).
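Taking the production figures above at face value, it’s easy to turn them into an annual nameplate output and an implied cost per peak watt (this is just my own arithmetic on the quoted numbers):

```python
# Annual solar cell production and implied cost per watt, figures as quoted
cells_per_year = 10e9      # standard silicon cells produced per year
watts_per_cell = 5         # nameplate rating of a standard cell
cost_per_cell_eur = 1.29   # current cost per cell

nameplate_gw = cells_per_year * watts_per_cell / 1e9   # total annual output
eur_per_watt = cost_per_cell_eur / watts_per_cell      # cost per peak watt
print(f"~{nameplate_gw:.0f} GW of cells per year at €{eur_per_watt:.2f}/W")
```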

But that’s not enough. The Paris Agreement – ratified so far by major emitters such as the USA, China, India, France and Germany (with the UK promising to ratify by the end of the year – but President-Elect Trump threatening to take the USA out) – commits countries to taking action to keep the average global temperature rise from pre-industrial times below 2°C. Already the average temperature has risen by one degree or so, and currently the rate of increase is about 0.17° a decade. The point stressed by Sir David King was that it isn’t enough just to look at the consequences of the central prediction, worrying though they might be – one needs to insure against the very real risks of more extreme outcomes. What concerns governments in India and China, for example, is the risk of the successive failure of three rice harvests.

To achieve the Paris targets, the installed solar capacity we’re going to need by 2030 is estimated as being in the range 8-10 TW nominal; this would require a 22-25% annual growth rate in manufacturing capacity. Continue reading “Optimism – and realism – about solar energy”
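The compounding behind that growth-rate estimate can be sketched in a few lines, starting from the roughly 70 GW/yr of production and 300 GW of installed capacity quoted above. This is my own rough illustration, not the symposium’s model, and it ignores panel retirements:

```python
# Cumulative installed solar capacity if annual manufacturing output grows
# at a constant rate from ~70 GW/yr (2016 figure), on top of ~300 GW
# already installed.  A sketch under stated assumptions, not a forecast.
def cumulative_installed(start_gw_per_yr, growth, years, base_gw=300):
    output = start_gw_per_yr
    total = base_gw
    for _ in range(years):
        output *= 1 + growth     # manufacturing capacity compounds each year
        total += output          # each year's output adds to installed base
    return total

for g in (0.22, 0.25):
    tw = cumulative_installed(70, g, 14) / 1000   # 2016 to 2030 is 14 years
    print(f"{g:.0%} growth -> ~{tw:.1f} TW by 2030")
```

Sustained growth at the top of the 22-25% range gets close to 8 TW by 2030 on these assumptions, which is the right ballpark for the Paris-compatible target.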

Is nuclear power obsolete?

After a summer hiccough, the new UK government has finally signed the deal with the French nuclear company EDF and its Chinese financial backers to build a new nuclear power station at Hinkley Point. My belief that this is a monumentally bad deal for the UK has not changed since I wrote about it three years ago, here: The UK’s nuclear new build: too expensive, too late.

The way the deal has been structured simultaneously maximises the cost to UK citizens while minimising the benefits that will accrue to UK industry. It’s the fallacy of the private finance initiative exposed by reductio ad absurdum; the government has signed up to a 35 year guarantee of excessively high prices for UK consumers, driven by the political desire to keep borrowing off the government’s balance sheet and maintain the fiction that nuclear power can be efficiently delivered by the private sector.

But there’s another argument against the Hinkley deal that I want to look at more critically – this is the idea that nuclear power is now obsolete, because with new technologies like wind, solar, electric cars and so on, we will, or soon will, be able to supply the 3.2 GW of low-carbon power that Hinkley promises at lower marginal cost. I think this marginal cost argument is profoundly wrong – given the need to make substantial progress decarbonising our energy system over the next thirty years, what’s important isn’t the marginal cost of the next GW of low-carbon power, it’s the total cost (and indeed feasibility) of replacing the 160 GW or so that represents our current fossil fuel based consumption (not to mention replacing the 9.5 GW existing nuclear capacity, fast approaching the end of its working lifetime).

To get a sense of the scale of the task, in 2015 the UK used about 2400 TWh of primary energy inputs. 83% of that was in the form of fossil fuels – roughly 800 TWh each of oil and gas, and a bit less than 300 TWh of coal. The 3.2 GW output of Hinkley would contribute 30 TWh pa at full capacity, while the combined output of all wind (onshore and offshore) and solar generation in 2015 was 48 TWh. So if we increased our solar and wind capacity by a bit more than half, we could replace Hinkley’s contribution; this is probably doable, and given the stupidly expensive nature of the Hinkley deal, we might well be able to do it more cheaply.

But that’s not all we need to do, not by a long way. If we are serious about decarbonising our energy supply (and we should be: for my reasons, please read this earlier post Climate change: what do we know for sure, and what is less certain?) we need to find, not 30 TWh a year, but more like 1500 TWh, of low carbon energy. It’s not one Hinkley Point we need, but 50 of them.
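The arithmetic above is worth making explicit (rough, full-capacity figures, as in the post):

```python
# Hinkley Point C output, and how many Hinkley-equivalents decarbonisation needs
hinkley_gw = 3.2
hours_per_year = 8766                    # average year, including leap years
hinkley_twh = hinkley_gw * hours_per_year / 1000   # GW * hours -> TWh
print(f"Hinkley at full output: ~{hinkley_twh:.0f} TWh/yr")

low_carbon_needed_twh = 1500             # the post's estimate of what's needed
print(f"Hinkley-equivalents needed: ~{low_carbon_needed_twh / 30:.0f}")
```

Running the numbers gives about 28 TWh a year, which the post rounds to 30, and 1500/30 gives the fifty Hinkleys.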

What can’t be stressed too often, in thinking about the UK’s energy supply, is that most of the energy we use (82% in 2015) is not in the form of electricity, but directly burnt oil and gas. Continue reading “Is nuclear power obsolete?”

Nobody knows anything (oil price edition)

Perhaps no single number is more important to the world economy than the price of oil. Modern economies depend on energy, and oil remains our largest energy source, supplying 31% of the world’s energy needs (another 21% comes from gas, whose price now moves quite closely with oil). And yet, huge movements in this number seemingly take experts by complete surprise.

Oil price predictions 2015
The price of oil in constant 2008 dollars, compared with the US Energy Information Administration predictions from 2000 and 2010. Data from the EIA.

My graph shows how the price of oil, corrected for inflation, has changed in the last 45 years. This is an updated version of the plot I blogged about five years ago; I included the set of predictions that the US Energy Information Administration had made in 2000. Just a few years later, these predictions were made nugatory by a large, unanticipated rise in oil prices. The predictions the EIA made ten years later, in 2010, had learnt one lesson – they included a much bigger spread between the high and low contingencies, amounting to more than a factor of three by the end of the decade. Now, only halfway into the period of the prediction, we see that the way oil prices turned out has so far managed both to exceed the high prediction and to undershoot the low one.

These gyrations mean that views that were conventional wisdom just a couple of years ago have to be rethought. Continue reading “Nobody knows anything (oil price edition)”

England’s early energy transition to fossil fuels: driven by process heat, not steam engines

Was the industrial revolution an energy revolution, in which the energy constraints of a traditional economy based on the power of the sun were broken by the discovery and exploitation of fossil fuel? Or was it an ideological revolution, in which the power of free thinking and free markets unlocked human ingenuity to power a growth in prosperity without limits? Those symbols of the industrial revolution – the steam engine, the coke-fuelled blast furnace – suggest the former, but the trend now amongst some economic historians is to downplay the role of coal and steam. What I think is correct is that the industrial revolution had already gathered much momentum before the steam engine made a significant impact. But coal was central to driving that early momentum; its use was already growing rapidly, but the dominant use of that coal was as a source of heat energy in a whole variety of industrial processes, not as a source of mechanical power. The foundations of the industrial revolution were laid in the diversity and productivity of those industries propelled by coal-fuelled process heat: the steam engine was the last thing that coal did for the industrial revolution, not the first.

What’s apparent, and perhaps surprising, from a plot of the relative contributions of coal and firewood to England’s energy economy, is how early in history the transition from biomass to fossil fuels took place. Using estimates quoted by Wrigley (a compelling advocate of the energy revolution position), we see that coal use in England grew roughly exponentially (with an annual growth rate of around 1.7%) between 1560 and 1800. The crossover between firewood and coal happened in the early seventeenth century, a date which is by world standards very early – for the world as a whole, Smil estimates this crossover only happened in the late 19th century.


Estimated consumption of coal and biomass fuels in England and Wales; data from Wrigley – Energy and the English Industrial Revolution.
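A constant 1.7% annual growth rate, sustained over the whole period from 1560 to 1800, implies a striking overall expansion. This sketch assumes constant exponential growth, as the Wrigley estimates suggest:

```python
import math

# What steady 1.7%/yr growth implies for English coal use, 1560-1800,
# assuming constant exponential growth throughout the period
rate = 0.017
doubling_time = math.log(2) / math.log(1 + rate)   # years to double
factor = (1 + rate) ** (1800 - 1560)               # growth over 240 years
print(f"Doubling time: ~{doubling_time:.0f} years")
print(f"Growth over 240 years: ~{factor:.0f}-fold")
```

A doubling roughly every 41 years, compounding to a nearly sixty-fold increase over the period, is what makes the early date of the firewood-to-coal crossover so remarkable.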

So why did coal use become so important so early in England? Continue reading “England’s early energy transition to fossil fuels: driven by process heat, not steam engines”

Lecture on responsible innovation and the irresponsibility of not innovating

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation within a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost but in the meantime, here is a PDF of my slides.

What’s the best way of harvesting the energy of the sun?

This is another post inspired by my current first year physics course, The Physics of Sustainable Energy (PHY123). Calculations are all rough, order of magnitude estimates – if you don’t believe them, try doing them for yourself.

We could get all the energy we need from the sun, in principle. Even from our cloudy UK skies an average of 100 W arrives at the surface per square metre. Each person in the UK uses energy at an average rate of 3.4 kW, so if we each could harvest the sun from a mere 34 square metres with 100% efficiency, that would do the job. For all 63 million of us, that’s just a bit more than 2,000 square kilometres out of the UK’s total area of 242,900 km2 – less than 1%. What would it take to turn that “in principle” into “in practice”? Here are the problems we have to overcome, in some combination: we need higher efficiencies (to reduce the land area needed), lower costs, the ability to deploy at scale and the ability to store the energy for when the sun isn’t shining.
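In the spirit of the course, here is the land-area estimate step by step, using the figures above:

```python
# Land area needed to meet UK per-capita energy demand from solar,
# assuming 100% conversion efficiency (figures as in the post)
insolation = 100          # W per square metre, UK average at the surface
per_person_power = 3400   # W, average UK energy use per person
population = 63e6

area_per_person = per_person_power / insolation    # square metres each
total_km2 = area_per_person * population / 1e6     # convert m^2 to km^2
uk_km2 = 242_900
print(f"{total_km2:.0f} km^2, i.e. {total_km2 / uk_km2:.1%} of the UK")
```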

There are at least four different technological approaches we could use. The most traditional is to use the ability of plants to convert the sun’s energy into fuel molecules; this is cheap, deployable at scale, and provides the energy in easily storable form, but it’s not very efficient and so needs a lot of land. The most technologically sophisticated are solar cells. These achieve high efficiencies (though still not generally more than about 20-25%), but they cost too much, they are only available at scales that are still orders of magnitude too small, and produce energy in the hard-to-store form of electricity. Other methods include concentrating the sun’s rays to the extent that they can be used to heat up a working fluid directly, a technology already in use in sunny places like California and Spain, while in the future, the prospect of copying nature by using sunshine to synthesise fuel molecules directly – solar fuels – is attractive. How do these technologies compare and what are their future prospects?

We can get a useful baseline by thinking about the most traditional of these technologies – growing firewood. Continue reading “What’s the best way of harvesting the energy of the sun?”

What should we do about climate change? Two opposing views, and they’re both wrong

In the last 250 years, humanity has become completely dependent on fossil fuel energy. This dependence on fossil fuels has materially changed our climate; these changes will continue and intensify in the future. While uncertainty remains about the future extent and consequences of climate change, there is no uncertainty about the causal link between burning fossil fuel, increasing carbon dioxide concentrations in the atmosphere, and a warming world. This summarises my previous two long posts, about the history of our fossil fuel dependence, and the underlying physics of climate change. What should we do about it? From two ends of the political spectrum, there are two views, and I think they are both wrong.

For the environmental movement, the only thing that stops us moving to a sustainable energy economy right away is a lack of political will. Opposing the “environmentalists” are free-market loving “realists” who (sometimes) accept the reality of human-induced climate change, but balk at the costs of current renewable energy. For them, the correct course of action is to do nothing now (except, perhaps, for some shift from coal to gas), but wait for better technology to come along before making significant moves to address climate change.

The “environmentalists” are right about the urgency of the problem, but they underestimate the degree to which society currently depends on cheap energy, and they overestimate the capacity of current renewable energy technologies to provide cheap enough energy at scale. The “realists”, on the other hand, are right about the degree of our dependence on cheap energy, and on the shortcomings of current renewable technologies. But they underplay the risks of climate change, and their neglect of the small but significant chance of much worse outcomes than the consensus forecasts takes wishful thinking to the point of recklessness.

But the biggest failure of the “realists” is that they don’t appreciate how slowly innovation in energy technology is currently proceeding. This arises from two errors. Firstly, there’s a tendency to believe that technology is a single thing that is accelerating at a uniform rate, so that from the very visible rapid rate of innovation in information and communication technologies we can conclude that new energy technologies will be developed similarly quickly. But this is a mistake: innovation in the realm of materials, of the kind that’s needed for new energy technologies, is much more difficult, slower and takes more resources than innovation in the realm of information. While we have accelerating innovation in some domains, in others we have innovation stagnation. Related to this is the second error, which is to imagine that progress in technology happens autonomously: given a need, a technology will automatically emerge to meet that need. But developing new large-scale material technologies needs resources and a collective will, and recently the will to deploy those resources at the necessary scale has been lacking. There’s been a worldwide collapse in energy R&D over the last thirty years; to develop the new technologies we need we will need not only to reverse this collapse but also to make up the lost ground.

So I agree with the “environmentalists” on the urgency of the problem, and with the “realists” about the need for new technology. But the “realists” need to get realistic about what it will take to develop that new technology.

Climate change: what do we know for sure, and what is less certain?

In another post inspired by my current first year physics course, The Physics of Sustainable Energy (PHY123), I suggest how a physicist might think about climate change.

The question of climate change is going up the political agenda again; in the UK recent floods have once again raised the question of whether recent extreme weather can be directly attributed to human-created climate change, or whether such events are likely to be more frequent in the future as a result of continuing human induced global warming. One UK Energy Minister – Michael Fallon – described the climate change argument as “theology” in this interview. Of course, theology is exactly what it’s not. It’s science, based on theory, observation and modelling; some of the issues are very well understood, and some remain more uncertain. There’s an enormous amount of material in the 1536 pages of the IPCC’s 5th assessment report (available here). But how should we navigate these very complex arguments in a way which makes clear what we know for sure, and what remains uncertain? Here’s my suggestion for a route-map.

My last post talked about how, after 1750 or so, we became dependent on fossil fuels. Since that time we have collectively burned about 375 gigatonnes of carbon – what effect has burning all that carbon had on the environment? The straightforward answer to that is that there is now a lot more carbon dioxide in the atmosphere than there was in pre-industrial times. For the thousand years before the industrial revolution, the carbon dioxide content of the atmosphere was roughly constant at around 280 parts per million. Since the 19th century it has been significantly increasing; it’s currently just a couple of ppm short of 400, and is still increasing by about 2 ppm per year.
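These figures can be cross-checked in a few lines. The conversion factor of about 2.13 GtC per ppm of atmospheric CO2 is a standard one that I’m adding here; it isn’t quoted in the post:

```python
# Consistency check on the CO2 figures above.
# Assumes the standard conversion: 1 ppm atmospheric CO2 ~ 2.13 GtC.
preindustrial_ppm = 280
current_ppm = 398            # "just a couple of ppm short of 400"
emitted_gtc = 375            # cumulative fossil carbon burned since ~1750

rise_pct = (current_ppm - preindustrial_ppm) / preindustrial_ppm
in_air_gtc = (current_ppm - preindustrial_ppm) * 2.13
print(f"CO2 rise: {rise_pct:.0%}")
print(f"Carbon added to the atmosphere: ~{in_air_gtc:.0f} GtC")
```

The roughly 250 GtC now in the atmosphere is comfortably less than the 375 GtC burned, with the remainder taken up by the oceans and the biosphere, so fossil fuel burning is more than sufficient to account for the observed rise.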

This 40% increase in carbon dioxide concentration is not in doubt. But how can we be sure it’s associated with burning fossil fuels? Continue reading “Climate change: what do we know for sure, and what is less certain?”